Apr 13 19:24:22.264873 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 13 19:24:22.264917 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026
Apr 13 19:24:22.264942 kernel: KASLR disabled due to lack of seed
Apr 13 19:24:22.264959 kernel: efi: EFI v2.7 by EDK II
Apr 13 19:24:22.264975 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 13 19:24:22.264991 kernel: ACPI: Early table checksum verification disabled
Apr 13 19:24:22.265009 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 13 19:24:22.265024 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 19:24:22.265041 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 19:24:22.265056 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 19:24:22.265077 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 19:24:22.265094 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 13 19:24:22.265109 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 13 19:24:22.265126 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 13 19:24:22.265145 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 19:24:22.265165 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 13 19:24:22.265217 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 13 19:24:22.265235 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 13 19:24:22.265252 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 13 19:24:22.265268 kernel: printk: bootconsole [uart0] enabled
Apr 13 19:24:22.265285 kernel: NUMA: Failed to initialise from firmware
Apr 13 19:24:22.265302 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:24:22.265319 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 13 19:24:22.265336 kernel: Zone ranges:
Apr 13 19:24:22.265352 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 13 19:24:22.265368 kernel: DMA32 empty
Apr 13 19:24:22.265391 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 13 19:24:22.265408 kernel: Movable zone start for each node
Apr 13 19:24:22.265424 kernel: Early memory node ranges
Apr 13 19:24:22.265441 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 13 19:24:22.265457 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 13 19:24:22.265474 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 13 19:24:22.265490 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 13 19:24:22.265507 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 13 19:24:22.265523 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 13 19:24:22.265540 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 13 19:24:22.265556 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 13 19:24:22.265572 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 13 19:24:22.265593 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 13 19:24:22.265610 kernel: psci: probing for conduit method from ACPI.
Apr 13 19:24:22.265634 kernel: psci: PSCIv1.0 detected in firmware.
Apr 13 19:24:22.265652 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 13 19:24:22.265670 kernel: psci: Trusted OS migration not required
Apr 13 19:24:22.265691 kernel: psci: SMC Calling Convention v1.1
Apr 13 19:24:22.265709 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 13 19:24:22.265727 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 13 19:24:22.265744 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 13 19:24:22.265763 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 13 19:24:22.265781 kernel: Detected PIPT I-cache on CPU0
Apr 13 19:24:22.265798 kernel: CPU features: detected: GIC system register CPU interface
Apr 13 19:24:22.265816 kernel: CPU features: detected: Spectre-v2
Apr 13 19:24:22.265833 kernel: CPU features: detected: Spectre-v3a
Apr 13 19:24:22.265851 kernel: CPU features: detected: Spectre-BHB
Apr 13 19:24:22.265868 kernel: CPU features: detected: ARM erratum 1742098
Apr 13 19:24:22.265889 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 13 19:24:22.265907 kernel: alternatives: applying boot alternatives
Apr 13 19:24:22.265927 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:24:22.265945 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 19:24:22.265963 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 19:24:22.265981 kernel: Fallback order for Node 0: 0
Apr 13 19:24:22.265998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 13 19:24:22.266016 kernel: Policy zone: Normal
Apr 13 19:24:22.266033 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 19:24:22.266051 kernel: software IO TLB: area num 2.
Apr 13 19:24:22.266069 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 13 19:24:22.266091 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 13 19:24:22.266109 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 19:24:22.266127 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 19:24:22.266146 kernel: rcu: RCU event tracing is enabled.
Apr 13 19:24:22.266164 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 19:24:22.267077 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 19:24:22.267099 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 19:24:22.267118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 19:24:22.267136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 19:24:22.267154 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 13 19:24:22.267200 kernel: GICv3: 96 SPIs implemented
Apr 13 19:24:22.267229 kernel: GICv3: 0 Extended SPIs implemented
Apr 13 19:24:22.267248 kernel: Root IRQ handler: gic_handle_irq
Apr 13 19:24:22.267266 kernel: GICv3: GICv3 features: 16 PPIs
Apr 13 19:24:22.267284 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 13 19:24:22.267302 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 13 19:24:22.267321 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 13 19:24:22.267340 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 13 19:24:22.267358 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 13 19:24:22.267376 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 13 19:24:22.267395 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 13 19:24:22.267415 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 19:24:22.267433 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 13 19:24:22.267457 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 13 19:24:22.267475 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 13 19:24:22.267494 kernel: Console: colour dummy device 80x25
Apr 13 19:24:22.267513 kernel: printk: console [tty1] enabled
Apr 13 19:24:22.267532 kernel: ACPI: Core revision 20230628
Apr 13 19:24:22.267551 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 13 19:24:22.267571 kernel: pid_max: default: 32768 minimum: 301
Apr 13 19:24:22.267589 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 19:24:22.267607 kernel: landlock: Up and running.
Apr 13 19:24:22.267629 kernel: SELinux: Initializing.
Apr 13 19:24:22.267648 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:24:22.267666 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 19:24:22.267684 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:24:22.267702 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 19:24:22.267720 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 19:24:22.267739 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 19:24:22.267757 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 13 19:24:22.267776 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 13 19:24:22.267799 kernel: Remapping and enabling EFI services.
Apr 13 19:24:22.267818 kernel: smp: Bringing up secondary CPUs ...
Apr 13 19:24:22.267836 kernel: Detected PIPT I-cache on CPU1
Apr 13 19:24:22.267855 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 13 19:24:22.267873 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 13 19:24:22.267891 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 13 19:24:22.267909 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 19:24:22.267927 kernel: SMP: Total of 2 processors activated.
Apr 13 19:24:22.267945 kernel: CPU features: detected: 32-bit EL0 Support
Apr 13 19:24:22.267967 kernel: CPU features: detected: 32-bit EL1 Support
Apr 13 19:24:22.267985 kernel: CPU features: detected: CRC32 instructions
Apr 13 19:24:22.268003 kernel: CPU: All CPU(s) started at EL1
Apr 13 19:24:22.268033 kernel: alternatives: applying system-wide alternatives
Apr 13 19:24:22.268056 kernel: devtmpfs: initialized
Apr 13 19:24:22.268075 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 19:24:22.268094 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 19:24:22.268113 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 19:24:22.268132 kernel: SMBIOS 3.0.0 present.
Apr 13 19:24:22.268155 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 13 19:24:22.268198 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 19:24:22.268220 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 13 19:24:22.268240 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 13 19:24:22.268260 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 13 19:24:22.268279 kernel: audit: initializing netlink subsys (disabled)
Apr 13 19:24:22.268297 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Apr 13 19:24:22.268316 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 19:24:22.268341 kernel: cpuidle: using governor menu
Apr 13 19:24:22.268360 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 13 19:24:22.268379 kernel: ASID allocator initialised with 65536 entries
Apr 13 19:24:22.268398 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 19:24:22.268416 kernel: Serial: AMBA PL011 UART driver
Apr 13 19:24:22.268436 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 13 19:24:22.268455 kernel: Modules: 509008 pages in range for PLT usage
Apr 13 19:24:22.268474 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 19:24:22.268492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 19:24:22.268516 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 13 19:24:22.268535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 13 19:24:22.268554 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 19:24:22.268575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 19:24:22.268596 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 13 19:24:22.268616 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 13 19:24:22.268636 kernel: ACPI: Added _OSI(Module Device)
Apr 13 19:24:22.268655 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 19:24:22.268674 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 19:24:22.268697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 19:24:22.268716 kernel: ACPI: Interpreter enabled
Apr 13 19:24:22.268735 kernel: ACPI: Using GIC for interrupt routing
Apr 13 19:24:22.268754 kernel: ACPI: MCFG table detected, 1 entries
Apr 13 19:24:22.268773 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 13 19:24:22.269108 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 19:24:22.269467 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 13 19:24:22.269684 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 13 19:24:22.269898 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 13 19:24:22.270106 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 13 19:24:22.270132 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 13 19:24:22.270151 kernel: acpiphp: Slot [1] registered
Apr 13 19:24:22.270200 kernel: acpiphp: Slot [2] registered
Apr 13 19:24:22.270224 kernel: acpiphp: Slot [3] registered
Apr 13 19:24:22.270243 kernel: acpiphp: Slot [4] registered
Apr 13 19:24:22.270262 kernel: acpiphp: Slot [5] registered
Apr 13 19:24:22.270287 kernel: acpiphp: Slot [6] registered
Apr 13 19:24:22.270306 kernel: acpiphp: Slot [7] registered
Apr 13 19:24:22.270324 kernel: acpiphp: Slot [8] registered
Apr 13 19:24:22.270343 kernel: acpiphp: Slot [9] registered
Apr 13 19:24:22.270362 kernel: acpiphp: Slot [10] registered
Apr 13 19:24:22.270380 kernel: acpiphp: Slot [11] registered
Apr 13 19:24:22.270399 kernel: acpiphp: Slot [12] registered
Apr 13 19:24:22.270417 kernel: acpiphp: Slot [13] registered
Apr 13 19:24:22.270436 kernel: acpiphp: Slot [14] registered
Apr 13 19:24:22.270454 kernel: acpiphp: Slot [15] registered
Apr 13 19:24:22.270477 kernel: acpiphp: Slot [16] registered
Apr 13 19:24:22.270496 kernel: acpiphp: Slot [17] registered
Apr 13 19:24:22.270515 kernel: acpiphp: Slot [18] registered
Apr 13 19:24:22.270533 kernel: acpiphp: Slot [19] registered
Apr 13 19:24:22.270552 kernel: acpiphp: Slot [20] registered
Apr 13 19:24:22.270570 kernel: acpiphp: Slot [21] registered
Apr 13 19:24:22.270589 kernel: acpiphp: Slot [22] registered
Apr 13 19:24:22.270608 kernel: acpiphp: Slot [23] registered
Apr 13 19:24:22.270627 kernel: acpiphp: Slot [24] registered
Apr 13 19:24:22.270649 kernel: acpiphp: Slot [25] registered
Apr 13 19:24:22.270668 kernel: acpiphp: Slot [26] registered
Apr 13 19:24:22.270687 kernel: acpiphp: Slot [27] registered
Apr 13 19:24:22.270705 kernel: acpiphp: Slot [28] registered
Apr 13 19:24:22.270724 kernel: acpiphp: Slot [29] registered
Apr 13 19:24:22.270742 kernel: acpiphp: Slot [30] registered
Apr 13 19:24:22.270760 kernel: acpiphp: Slot [31] registered
Apr 13 19:24:22.270779 kernel: PCI host bridge to bus 0000:00
Apr 13 19:24:22.270994 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 13 19:24:22.271246 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 13 19:24:22.271443 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:24:22.271632 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 13 19:24:22.271882 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 13 19:24:22.272111 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 13 19:24:22.272368 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 13 19:24:22.272609 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 19:24:22.272827 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 13 19:24:22.273046 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:24:22.273390 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 19:24:22.273608 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 13 19:24:22.273819 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 13 19:24:22.274028 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 13 19:24:22.278406 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 13 19:24:22.278626 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 13 19:24:22.278811 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 13 19:24:22.278996 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 13 19:24:22.279042 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 13 19:24:22.279064 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 13 19:24:22.279084 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 13 19:24:22.279104 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 13 19:24:22.279133 kernel: iommu: Default domain type: Translated
Apr 13 19:24:22.279153 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 13 19:24:22.279212 kernel: efivars: Registered efivars operations
Apr 13 19:24:22.279234 kernel: vgaarb: loaded
Apr 13 19:24:22.279254 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 13 19:24:22.279274 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 19:24:22.279304 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 19:24:22.279326 kernel: pnp: PnP ACPI init
Apr 13 19:24:22.279563 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 13 19:24:22.279599 kernel: pnp: PnP ACPI: found 1 devices
Apr 13 19:24:22.279620 kernel: NET: Registered PF_INET protocol family
Apr 13 19:24:22.279639 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 19:24:22.279659 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 19:24:22.279678 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 19:24:22.279697 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 19:24:22.279717 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 19:24:22.279736 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 19:24:22.279761 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:24:22.279781 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 19:24:22.279800 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 19:24:22.279819 kernel: PCI: CLS 0 bytes, default 64
Apr 13 19:24:22.279838 kernel: kvm [1]: HYP mode not available
Apr 13 19:24:22.279857 kernel: Initialise system trusted keyrings
Apr 13 19:24:22.279875 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 19:24:22.279895 kernel: Key type asymmetric registered
Apr 13 19:24:22.279914 kernel: Asymmetric key parser 'x509' registered
Apr 13 19:24:22.279937 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 13 19:24:22.279957 kernel: io scheduler mq-deadline registered
Apr 13 19:24:22.279975 kernel: io scheduler kyber registered
Apr 13 19:24:22.279994 kernel: io scheduler bfq registered
Apr 13 19:24:22.282369 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 13 19:24:22.282416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 13 19:24:22.282436 kernel: ACPI: button: Power Button [PWRB]
Apr 13 19:24:22.282456 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 13 19:24:22.282475 kernel: ACPI: button: Sleep Button [SLPB]
Apr 13 19:24:22.282505 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 19:24:22.282525 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 13 19:24:22.282758 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 13 19:24:22.282786 kernel: printk: console [ttyS0] disabled
Apr 13 19:24:22.282806 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 13 19:24:22.282826 kernel: printk: console [ttyS0] enabled
Apr 13 19:24:22.282845 kernel: printk: bootconsole [uart0] disabled
Apr 13 19:24:22.282864 kernel: thunder_xcv, ver 1.0
Apr 13 19:24:22.282883 kernel: thunder_bgx, ver 1.0
Apr 13 19:24:22.282921 kernel: nicpf, ver 1.0
Apr 13 19:24:22.282963 kernel: nicvf, ver 1.0
Apr 13 19:24:22.286049 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 13 19:24:22.286302 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:24:21 UTC (1776108261)
Apr 13 19:24:22.286330 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 13 19:24:22.286350 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 13 19:24:22.286369 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 13 19:24:22.286388 kernel: watchdog: Hard watchdog permanently disabled
Apr 13 19:24:22.286417 kernel: NET: Registered PF_INET6 protocol family
Apr 13 19:24:22.286436 kernel: Segment Routing with IPv6
Apr 13 19:24:22.286455 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 19:24:22.286474 kernel: NET: Registered PF_PACKET protocol family
Apr 13 19:24:22.286492 kernel: Key type dns_resolver registered
Apr 13 19:24:22.286511 kernel: registered taskstats version 1
Apr 13 19:24:22.286530 kernel: Loading compiled-in X.509 certificates
Apr 13 19:24:22.286549 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7'
Apr 13 19:24:22.286567 kernel: Key type .fscrypt registered
Apr 13 19:24:22.286590 kernel: Key type fscrypt-provisioning registered
Apr 13 19:24:22.286609 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 19:24:22.286627 kernel: ima: Allocated hash algorithm: sha1
Apr 13 19:24:22.286646 kernel: ima: No architecture policies found
Apr 13 19:24:22.286665 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 13 19:24:22.286683 kernel: clk: Disabling unused clocks
Apr 13 19:24:22.286702 kernel: Freeing unused kernel memory: 39424K
Apr 13 19:24:22.286720 kernel: Run /init as init process
Apr 13 19:24:22.286739 kernel: with arguments:
Apr 13 19:24:22.286761 kernel: /init
Apr 13 19:24:22.286780 kernel: with environment:
Apr 13 19:24:22.286798 kernel: HOME=/
Apr 13 19:24:22.286816 kernel: TERM=linux
Apr 13 19:24:22.286840 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:24:22.286863 systemd[1]: Detected virtualization amazon.
Apr 13 19:24:22.286884 systemd[1]: Detected architecture arm64.
Apr 13 19:24:22.286904 systemd[1]: Running in initrd.
Apr 13 19:24:22.286929 systemd[1]: No hostname configured, using default hostname.
Apr 13 19:24:22.286948 systemd[1]: Hostname set to .
Apr 13 19:24:22.286969 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:24:22.286990 systemd[1]: Queued start job for default target initrd.target.
Apr 13 19:24:22.287029 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:24:22.287063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:24:22.287089 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 19:24:22.287111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:24:22.287138 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 19:24:22.287159 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 19:24:22.287211 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 19:24:22.287245 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 19:24:22.287269 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:24:22.287291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:24:22.287318 systemd[1]: Reached target paths.target - Path Units.
Apr 13 19:24:22.287339 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:24:22.287360 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:24:22.287380 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 19:24:22.287410 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:24:22.287437 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:24:22.287458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 19:24:22.287479 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 19:24:22.287500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:24:22.287528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:24:22.287549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:24:22.287570 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 19:24:22.287591 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 19:24:22.287613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:24:22.287633 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 19:24:22.287654 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 19:24:22.287675 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:24:22.287696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:24:22.287722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:24:22.287743 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 19:24:22.287763 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:24:22.287784 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 19:24:22.287806 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:24:22.287873 systemd-journald[251]: Collecting audit messages is disabled.
Apr 13 19:24:22.287917 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 19:24:22.287937 kernel: Bridge firewalling registered
Apr 13 19:24:22.287963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:24:22.287985 systemd-journald[251]: Journal started
Apr 13 19:24:22.288024 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e52f309cb8eacd4527c7c65eb81e1) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:24:22.242727 systemd-modules-load[252]: Inserted module 'overlay'
Apr 13 19:24:22.294930 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:24:22.284211 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 13 19:24:22.297824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:24:22.301665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:24:22.318612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:24:22.327435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:24:22.341589 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:24:22.357405 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:24:22.381765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:24:22.391851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:24:22.400951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:24:22.415642 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 19:24:22.420707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:24:22.434532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:24:22.462259 dracut-cmdline[286]: dracut-dracut-053
Apr 13 19:24:22.469376 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b
Apr 13 19:24:22.524390 systemd-resolved[288]: Positive Trust Anchors:
Apr 13 19:24:22.526566 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:24:22.531308 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:24:22.619207 kernel: SCSI subsystem initialized
Apr 13 19:24:22.627234 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 19:24:22.641238 kernel: iscsi: registered transport (tcp)
Apr 13 19:24:22.664291 kernel: iscsi: registered transport (qla4xxx)
Apr 13 19:24:22.664372 kernel: QLogic iSCSI HBA Driver
Apr 13 19:24:22.752253 kernel: random: crng init done
Apr 13 19:24:22.752849 systemd-resolved[288]: Defaulting to hostname 'linux'.
Apr 13 19:24:22.757352 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:24:22.762506 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:24:22.787386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:24:22.799489 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 19:24:22.834696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 19:24:22.834772 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:24:22.834800 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:24:22.902223 kernel: raid6: neonx8 gen() 6771 MB/s
Apr 13 19:24:22.919205 kernel: raid6: neonx4 gen() 6602 MB/s
Apr 13 19:24:22.936216 kernel: raid6: neonx2 gen() 5491 MB/s
Apr 13 19:24:22.953210 kernel: raid6: neonx1 gen() 3972 MB/s
Apr 13 19:24:22.970206 kernel: raid6: int64x8 gen() 3811 MB/s
Apr 13 19:24:22.987210 kernel: raid6: int64x4 gen() 3706 MB/s
Apr 13 19:24:23.004203 kernel: raid6: int64x2 gen() 3618 MB/s
Apr 13 19:24:23.022275 kernel: raid6: int64x1 gen() 2765 MB/s
Apr 13 19:24:23.022324 kernel: raid6: using algorithm neonx8 gen() 6771 MB/s
Apr 13 19:24:23.041235 kernel: raid6: .... xor() 4815 MB/s, rmw enabled
Apr 13 19:24:23.041308 kernel: raid6: using neon recovery algorithm
Apr 13 19:24:23.050446 kernel: xor: measuring software checksum speed
Apr 13 19:24:23.050517 kernel: 8regs : 10958 MB/sec
Apr 13 19:24:23.051674 kernel: 32regs : 11942 MB/sec
Apr 13 19:24:23.052989 kernel: arm64_neon : 9567 MB/sec
Apr 13 19:24:23.053021 kernel: xor: using function: 32regs (11942 MB/sec)
Apr 13 19:24:23.138233 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:24:23.157835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:24:23.172513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:24:23.207878 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Apr 13 19:24:23.216129 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:24:23.237462 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:24:23.269856 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Apr 13 19:24:23.331288 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:24:23.347479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:24:23.477477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:24:23.493573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:24:23.545038 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:24:23.550778 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:24:23.553721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:24:23.556540 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:24:23.576712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:24:23.612503 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:24:23.697328 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:24:23.697432 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 13 19:24:23.706841 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:24:23.706905 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 19:24:23.707245 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 19:24:23.709702 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 19:24:23.712501 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:24:23.712839 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:24:23.715708 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:24:23.715995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:24:23.716260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:24:23.742843 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:aa:62:df:58:e7
Apr 13 19:24:23.716409 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:24:23.734739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:24:23.756232 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 19:24:23.768842 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:24:23.768913 kernel: GPT:9289727 != 33554431
Apr 13 19:24:23.770260 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:24:23.772219 kernel: GPT:9289727 != 33554431
Apr 13 19:24:23.772254 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:24:23.773196 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:24:23.777062 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:24:23.779664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:24:23.793665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:24:23.845842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:24:23.857254 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Apr 13 19:24:23.902292 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (535)
Apr 13 19:24:23.946842 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 19:24:23.992472 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 19:24:23.996256 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 19:24:24.031689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:24:24.050404 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 19:24:24.064535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:24:24.084013 disk-uuid[663]: Primary Header is updated.
Apr 13 19:24:24.084013 disk-uuid[663]: Secondary Entries is updated.
Apr 13 19:24:24.084013 disk-uuid[663]: Secondary Header is updated.
Apr 13 19:24:24.089200 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:24:25.115732 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:24:25.117055 disk-uuid[664]: The operation has completed successfully.
Apr 13 19:24:25.299455 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:24:25.301837 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:24:25.361472 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:24:25.374820 sh[1010]: Success
Apr 13 19:24:25.402212 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:24:25.531867 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:24:25.537359 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:24:25.547349 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:24:25.595382 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:24:25.595444 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:24:25.595471 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:24:25.597294 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:24:25.598675 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:24:25.626221 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:24:25.629751 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:24:25.630126 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:24:25.647557 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:24:25.655524 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:24:25.678922 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:25.678987 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:24:25.680711 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:24:25.703226 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:24:25.724532 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:24:25.728928 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:25.739414 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:24:25.753539 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:24:25.844987 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:24:25.857511 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:24:25.915806 systemd-networkd[1202]: lo: Link UP
Apr 13 19:24:25.915826 systemd-networkd[1202]: lo: Gained carrier
Apr 13 19:24:25.918784 systemd-networkd[1202]: Enumeration completed
Apr 13 19:24:25.919325 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:24:25.923590 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:24:25.923599 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:24:25.927581 systemd-networkd[1202]: eth0: Link UP
Apr 13 19:24:25.927589 systemd-networkd[1202]: eth0: Gained carrier
Apr 13 19:24:25.927606 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:24:25.938471 systemd[1]: Reached target network.target - Network.
Apr 13 19:24:25.952261 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.17.32/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:24:26.174867 ignition[1123]: Ignition 2.19.0
Apr 13 19:24:26.176637 ignition[1123]: Stage: fetch-offline
Apr 13 19:24:26.179619 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:26.179659 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:26.184533 ignition[1123]: Ignition finished successfully
Apr 13 19:24:26.188410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:24:26.199521 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:24:26.225885 ignition[1211]: Ignition 2.19.0
Apr 13 19:24:26.226456 ignition[1211]: Stage: fetch
Apr 13 19:24:26.227132 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:26.227157 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:26.227336 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:26.253378 ignition[1211]: PUT result: OK
Apr 13 19:24:26.256474 ignition[1211]: parsed url from cmdline: ""
Apr 13 19:24:26.256496 ignition[1211]: no config URL provided
Apr 13 19:24:26.256512 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:24:26.256566 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:24:26.256600 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:26.259288 ignition[1211]: PUT result: OK
Apr 13 19:24:26.259361 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 19:24:26.265622 ignition[1211]: GET result: OK
Apr 13 19:24:26.265835 ignition[1211]: parsing config with SHA512: a818ede4208bc3c32893a732bda5d5d772bfc016cd0570fc51e64ab68a6e43c835d175e0673f1cb978b2f9598a500478332d33ec7e011f27500dccdbc9d9bca3
Apr 13 19:24:26.278793 unknown[1211]: fetched base config from "system"
Apr 13 19:24:26.278824 unknown[1211]: fetched base config from "system"
Apr 13 19:24:26.278839 unknown[1211]: fetched user config from "aws"
Apr 13 19:24:26.287906 ignition[1211]: fetch: fetch complete
Apr 13 19:24:26.287938 ignition[1211]: fetch: fetch passed
Apr 13 19:24:26.292728 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:24:26.288060 ignition[1211]: Ignition finished successfully
Apr 13 19:24:26.306490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:24:26.339322 ignition[1218]: Ignition 2.19.0
Apr 13 19:24:26.339856 ignition[1218]: Stage: kargs
Apr 13 19:24:26.340547 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:26.340573 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:26.340747 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:26.347278 ignition[1218]: PUT result: OK
Apr 13 19:24:26.355281 ignition[1218]: kargs: kargs passed
Apr 13 19:24:26.355384 ignition[1218]: Ignition finished successfully
Apr 13 19:24:26.358984 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:24:26.369564 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:24:26.407028 ignition[1224]: Ignition 2.19.0
Apr 13 19:24:26.407053 ignition[1224]: Stage: disks
Apr 13 19:24:26.410779 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:26.410821 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:26.412995 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:26.419132 ignition[1224]: PUT result: OK
Apr 13 19:24:26.426625 ignition[1224]: disks: disks passed
Apr 13 19:24:26.426745 ignition[1224]: Ignition finished successfully
Apr 13 19:24:26.431492 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:24:26.440234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:24:26.444896 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:24:26.450143 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:24:26.452535 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:24:26.456822 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:24:26.468457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:24:26.516464 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 19:24:26.522218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:24:26.534467 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:24:26.618394 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:24:26.619450 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:24:26.623331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:24:26.641347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:24:26.654697 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:24:26.659891 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 19:24:26.659976 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:24:26.660023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:24:26.688262 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251)
Apr 13 19:24:26.692979 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:26.693056 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:24:26.695288 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:24:26.696546 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:24:26.707638 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:24:26.728219 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:24:26.731031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:24:27.004294 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:24:27.013284 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:24:27.021801 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:24:27.031080 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:24:27.117474 systemd-networkd[1202]: eth0: Gained IPv6LL
Apr 13 19:24:27.267330 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:24:27.277376 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:24:27.281326 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:24:27.318685 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:24:27.321671 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:27.349665 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:24:27.365711 ignition[1366]: INFO : Ignition 2.19.0
Apr 13 19:24:27.365711 ignition[1366]: INFO : Stage: mount
Apr 13 19:24:27.365711 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:27.365711 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:27.365711 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:27.379193 ignition[1366]: INFO : PUT result: OK
Apr 13 19:24:27.384033 ignition[1366]: INFO : mount: mount passed
Apr 13 19:24:27.385914 ignition[1366]: INFO : Ignition finished successfully
Apr 13 19:24:27.389619 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:24:27.401564 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:24:27.626644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:24:27.663246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1377)
Apr 13 19:24:27.663307 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:24:27.666963 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:24:27.667025 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:24:27.675223 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:24:27.677251 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:24:27.714928 ignition[1395]: INFO : Ignition 2.19.0
Apr 13 19:24:27.714928 ignition[1395]: INFO : Stage: files
Apr 13 19:24:27.718630 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:27.718630 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:27.723508 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:27.727227 ignition[1395]: INFO : PUT result: OK
Apr 13 19:24:27.731598 ignition[1395]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:24:27.735986 ignition[1395]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:24:27.735986 ignition[1395]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:24:27.747582 ignition[1395]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:24:27.751238 ignition[1395]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:24:27.755004 unknown[1395]: wrote ssh authorized keys file for user: core
Apr 13 19:24:27.757433 ignition[1395]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:24:27.762872 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:24:27.767138 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:24:27.876328 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 19:24:28.046507 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:24:28.046507 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:24:28.055041 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:24:28.285395 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:24:28.401259 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:24:28.429302 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1
Apr 13 19:24:29.769701 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:24:30.206741 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:24:30.211389 ignition[1395]: INFO : files: files passed
Apr 13 19:24:30.211389 ignition[1395]: INFO : Ignition finished successfully
Apr 13 19:24:30.213398 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:24:30.245761 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:24:30.257298 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:24:30.273641 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:24:30.275258 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:24:30.308890 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:30.308890 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:30.316226 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:24:30.322639 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:24:30.330006 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:24:30.341531 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:24:30.397437 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:24:30.397870 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:24:30.405625 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:24:30.408611 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:24:30.411024 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:24:30.427580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:24:30.457673 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:24:30.470535 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:24:30.496102 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:24:30.501521 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:24:30.506889 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:24:30.507368 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:24:30.507606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:24:30.508519 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:24:30.509212 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:24:30.509560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:24:30.509928 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:24:30.510314 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:24:30.510671 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:24:30.514205 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:24:30.517277 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:24:30.518344 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:24:30.518932 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:24:30.522611 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:24:30.522868 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:24:30.524471 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:24:30.525155 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:24:30.525401 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:24:30.539984 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:24:30.540412 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:24:30.540725 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:24:30.592426 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:24:30.595112 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:24:30.602074 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:24:30.602569 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:24:30.616680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:24:30.623533 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:24:30.627915 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:24:30.633389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:24:30.642287 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:24:30.642540 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:24:30.667553 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:24:30.667830 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:24:30.698889 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:24:30.705234 ignition[1447]: INFO : Ignition 2.19.0
Apr 13 19:24:30.705234 ignition[1447]: INFO : Stage: umount
Apr 13 19:24:30.711202 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:24:30.711202 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:24:30.711202 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:24:30.711202 ignition[1447]: INFO : PUT result: OK
Apr 13 19:24:30.712717 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:24:30.730811 ignition[1447]: INFO : umount: umount passed
Apr 13 19:24:30.730811 ignition[1447]: INFO : Ignition finished successfully
Apr 13 19:24:30.713003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:24:30.732557 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:24:30.732815 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:24:30.736304 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:24:30.736464 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:24:30.742906 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:24:30.743026 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:24:30.745480 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:24:30.745591 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:24:30.748896 systemd[1]: Stopped target network.target - Network.
Apr 13 19:24:30.752965 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:24:30.753086 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:24:30.755880 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:24:30.757913 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:24:30.771155 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:24:30.774201 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:24:30.776850 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:24:30.779084 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:24:30.779196 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:24:30.781457 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:24:30.781539 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:24:30.784293 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:24:30.784386 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:24:30.786636 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:24:30.786717 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:24:30.789235 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:24:30.789342 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:24:30.792158 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:24:30.794533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:24:30.823757 systemd-networkd[1202]: eth0: DHCPv6 lease lost
Apr 13 19:24:30.829678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:24:30.829885 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:24:30.846854 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:24:30.847328 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:24:30.878844 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:24:30.879321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:24:30.893405 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:24:30.895610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:24:30.895758 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:24:30.898966 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:24:30.899093 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:24:30.903528 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:24:30.903635 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:24:30.906326 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:24:30.906435 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:24:30.909631 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:24:30.955821 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:24:30.956131 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:24:30.967007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:24:30.967128 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:24:30.974580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:24:30.974658 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:24:30.977080 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:24:30.977187 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:24:30.980410 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:24:30.980502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:24:30.996260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:24:30.996362 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:24:31.011444 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:24:31.014088 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:24:31.014223 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:24:31.017387 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 19:24:31.017471 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:24:31.020545 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:24:31.020635 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:24:31.024369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:24:31.024458 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:24:31.027806 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:24:31.028110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:24:31.078490 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:24:31.078887 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:24:31.086841 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:24:31.095508 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:24:31.128538 systemd[1]: Switching root.
Apr 13 19:24:31.168696 systemd-journald[251]: Journal stopped
Apr 13 19:24:34.014035 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:24:34.014163 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:24:34.014260 kernel: SELinux: policy capability open_perms=1
Apr 13 19:24:34.014293 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:24:34.014325 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:24:34.014356 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:24:34.014392 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:24:34.014423 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:24:34.014461 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:24:34.014491 kernel: audit: type=1403 audit(1776108272.442:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:24:34.014532 systemd[1]: Successfully loaded SELinux policy in 51.769ms.
Apr 13 19:24:34.014583 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.432ms.
Apr 13 19:24:34.014618 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:24:34.016236 systemd[1]: Detected virtualization amazon.
Apr 13 19:24:34.016303 systemd[1]: Detected architecture arm64.
Apr 13 19:24:34.016346 systemd[1]: Detected first boot.
Apr 13 19:24:34.016380 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:24:34.016412 zram_generator::config[1489]: No configuration found.
Apr 13 19:24:34.016451 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:24:34.016484 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:24:34.016517 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:24:34.016551 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:24:34.019331 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:24:34.019389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:24:34.019425 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:24:34.019461 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:24:34.019495 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:24:34.019527 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:24:34.019571 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:24:34.019604 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:24:34.019636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:24:34.019667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:24:34.019702 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:24:34.019735 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:24:34.019767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:24:34.019798 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:24:34.019829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 19:24:34.019862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:24:34.019899 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:24:34.019932 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:24:34.019975 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:24:34.020011 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:24:34.020041 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:24:34.020074 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:24:34.020107 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:24:34.020138 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:24:34.020200 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:24:34.020239 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:24:34.020273 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:24:34.020311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:24:34.020344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:24:34.020379 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:24:34.020409 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:24:34.020441 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:24:34.020474 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:24:34.020506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:24:34.020536 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:24:34.020565 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:24:34.020605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:24:34.020639 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:24:34.020671 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:24:34.020702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:24:34.020735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:24:34.020765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:24:34.020798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:24:34.020828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:24:34.020864 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:24:34.020897 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:24:34.020929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:24:34.020962 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:24:34.020993 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:24:34.021027 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:24:34.021059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:24:34.021088 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:24:34.021121 kernel: fuse: init (API version 7.39)
Apr 13 19:24:34.021153 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:24:34.021218 kernel: loop: module loaded
Apr 13 19:24:34.021254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:24:34.021285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:24:34.021318 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:24:34.021354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:24:34.021386 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:24:34.021419 systemd[1]: Stopped verity-setup.service.
Apr 13 19:24:34.021448 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:24:34.021483 kernel: ACPI: bus type drm_connector registered
Apr 13 19:24:34.021512 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:24:34.021590 systemd-journald[1578]: Collecting audit messages is disabled.
Apr 13 19:24:34.021653 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:24:34.021684 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:24:34.021715 systemd-journald[1578]: Journal started
Apr 13 19:24:34.021766 systemd-journald[1578]: Runtime Journal (/run/log/journal/ec2e52f309cb8eacd4527c7c65eb81e1) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:24:33.436560 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:24:33.460549 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 13 19:24:33.461348 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:24:34.032327 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:24:34.037019 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:24:34.040114 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:24:34.047697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:24:34.050860 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:24:34.054688 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:24:34.055007 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:24:34.060103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:24:34.060655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:24:34.064477 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:24:34.064921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:24:34.073020 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:24:34.073513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:24:34.077046 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:24:34.079317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:24:34.082703 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:24:34.083147 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:24:34.086566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:24:34.091748 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:24:34.095635 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:24:34.126308 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:24:34.138367 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:24:34.155071 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:24:34.158749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:24:34.158814 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:24:34.168298 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:24:34.178645 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:24:34.189492 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:24:34.196504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:24:34.206465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:24:34.221475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:24:34.224146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:24:34.228607 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:24:34.231383 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:24:34.241509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:24:34.259611 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:24:34.273571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 19:24:34.281684 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:24:34.288295 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:24:34.293115 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:24:34.315614 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:24:34.319651 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:24:34.342866 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:24:34.362080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:24:34.370330 systemd-journald[1578]: Time spent on flushing to /var/log/journal/ec2e52f309cb8eacd4527c7c65eb81e1 is 112.625ms for 907 entries.
Apr 13 19:24:34.370330 systemd-journald[1578]: System Journal (/var/log/journal/ec2e52f309cb8eacd4527c7c65eb81e1) is 8.0M, max 195.6M, 187.6M free.
Apr 13 19:24:34.530692 systemd-journald[1578]: Received client request to flush runtime journal.
Apr 13 19:24:34.530796 kernel: loop0: detected capacity change from 0 to 200864
Apr 13 19:24:34.530850 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:24:34.377548 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:24:34.428931 udevadm[1627]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 19:24:34.439989 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:24:34.446315 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:24:34.456436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:24:34.472446 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Apr 13 19:24:34.472471 systemd-tmpfiles[1619]: ACLs are not supported, ignoring.
Apr 13 19:24:34.482277 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 19:24:34.501655 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:24:34.537931 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:24:34.568211 kernel: loop1: detected capacity change from 0 to 52536
Apr 13 19:24:34.599325 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:24:34.626527 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:24:34.670854 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
Apr 13 19:24:34.670904 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
Apr 13 19:24:34.679897 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:24:34.723224 kernel: loop2: detected capacity change from 0 to 114432
Apr 13 19:24:34.826332 kernel: loop3: detected capacity change from 0 to 114328
Apr 13 19:24:34.888208 kernel: loop4: detected capacity change from 0 to 200864
Apr 13 19:24:34.931210 kernel: loop5: detected capacity change from 0 to 52536
Apr 13 19:24:34.953211 kernel: loop6: detected capacity change from 0 to 114432
Apr 13 19:24:34.968222 kernel: loop7: detected capacity change from 0 to 114328
Apr 13 19:24:34.982871 (sd-merge)[1647]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 19:24:34.983998 (sd-merge)[1647]: Merged extensions into '/usr'.
Apr 13 19:24:34.993589 systemd[1]: Reloading requested from client PID 1618 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:24:34.993624 systemd[1]: Reloading...
Apr 13 19:24:35.191248 ldconfig[1613]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:24:35.229217 zram_generator::config[1673]: No configuration found.
Apr 13 19:24:35.509374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:24:35.623076 systemd[1]: Reloading finished in 628 ms.
Apr 13 19:24:35.668445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:24:35.672224 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:24:35.683055 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:24:35.696516 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:24:35.704521 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:24:35.710556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:24:35.737380 systemd[1]: Reloading requested from client PID 1726 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:24:35.737409 systemd[1]: Reloading...
Apr 13 19:24:35.766449 systemd-tmpfiles[1727]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:24:35.767164 systemd-tmpfiles[1727]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:24:35.772019 systemd-tmpfiles[1727]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:24:35.777304 systemd-tmpfiles[1727]: ACLs are not supported, ignoring.
Apr 13 19:24:35.777463 systemd-tmpfiles[1727]: ACLs are not supported, ignoring.
Apr 13 19:24:35.786765 systemd-tmpfiles[1727]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:24:35.786795 systemd-tmpfiles[1727]: Skipping /boot
Apr 13 19:24:35.802805 systemd-udevd[1728]: Using default interface naming scheme 'v255'.
Apr 13 19:24:35.820088 systemd-tmpfiles[1727]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:24:35.820118 systemd-tmpfiles[1727]: Skipping /boot
Apr 13 19:24:35.956213 zram_generator::config[1755]: No configuration found.
Apr 13 19:24:36.042331 (udev-worker)[1753]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:24:36.224208 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1758)
Apr 13 19:24:36.449492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:24:36.624935 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 19:24:36.625815 systemd[1]: Reloading finished in 887 ms.
Apr 13 19:24:36.652324 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:24:36.666256 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:24:36.702237 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:24:36.706292 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:24:36.744837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:24:36.771513 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:24:36.781532 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:24:36.785587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:24:36.794665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:24:36.801555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:24:36.817509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:24:36.826559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:24:36.836043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:24:36.838720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:24:36.842509 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:24:36.854567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:24:36.870666 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:24:36.879198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:24:36.881910 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:24:36.888736 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:24:36.895666 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:24:36.899819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:24:36.903302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:24:36.906678 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:24:36.907018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:24:36.915753 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:24:36.979022 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:24:36.985288 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:24:36.986387 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:24:36.994523 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:24:37.015640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:24:37.018343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:24:37.022100 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:24:37.024730 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:24:37.029692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:24:37.029823 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:24:37.032798 lvm[1952]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:24:37.053003 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:24:37.068353 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:24:37.077588 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:24:37.090538 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:24:37.117621 augenrules[1964]: No rules
Apr 13 19:24:37.123445 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:24:37.153031 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:24:37.180409 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:24:37.195711 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:24:37.201606 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:24:37.202091 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:24:37.253279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:24:37.325567 systemd-networkd[1939]: lo: Link UP
Apr 13 19:24:37.326070 systemd-networkd[1939]: lo: Gained carrier
Apr 13 19:24:37.329062 systemd-networkd[1939]: Enumeration completed
Apr 13 19:24:37.329501 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:24:37.332793 systemd-networkd[1939]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:24:37.332809 systemd-networkd[1939]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:24:37.335301 systemd-networkd[1939]: eth0: Link UP
Apr 13 19:24:37.335819 systemd-networkd[1939]: eth0: Gained carrier
Apr 13 19:24:37.335861 systemd-networkd[1939]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:24:37.342562 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:24:37.350043 systemd-resolved[1940]: Positive Trust Anchors:
Apr 13 19:24:37.350084 systemd-resolved[1940]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:24:37.350149 systemd-resolved[1940]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:24:37.360310 systemd-networkd[1939]: eth0: DHCPv4 address 172.31.17.32/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:24:37.366922 systemd-resolved[1940]: Defaulting to hostname 'linux'.
Apr 13 19:24:37.370405 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:24:37.373079 systemd[1]: Reached target network.target - Network.
Apr 13 19:24:37.375061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:24:37.377724 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:24:37.380282 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:24:37.383249 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 19:24:37.386483 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 19:24:37.389249 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 19:24:37.392745 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 19:24:37.395572 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:24:37.395625 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:24:37.397885 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:24:37.401389 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:24:37.406479 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:24:37.416681 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:24:37.420240 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:24:37.422884 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:24:37.425074 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:24:37.427332 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:24:37.427387 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:24:37.435522 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:24:37.445032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:24:37.451518 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:24:37.462930 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 13 19:24:37.470293 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:24:37.473356 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:24:37.478657 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:24:37.486736 systemd[1]: Started ntpd.service - Network Time Service. Apr 13 19:24:37.504148 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:24:37.521375 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 19:24:37.531412 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:24:37.536364 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:24:37.548332 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:24:37.553575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:24:37.554498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:24:37.560966 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:24:37.567223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:24:37.609922 dbus-daemon[1989]: [system] SELinux support is enabled Apr 13 19:24:37.612588 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 13 19:24:37.617688 jq[1990]: false Apr 13 19:24:37.620705 dbus-daemon[1989]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1939 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 19:24:37.622062 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:24:37.623284 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 19:24:37.627015 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:24:37.627126 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:24:37.630512 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:24:37.630569 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:24:37.642631 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 13 19:24:37.688915 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 13 19:24:37.754135 jq[2002]: true Apr 13 19:24:37.760886 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:24:37.770904 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:24:37.773750 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 13 19:24:37.791455 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: ----------------------------------------------------
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: ntp-4 is maintained by Network Time Foundation,
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: corporation. Support and training for ntp-4 are
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: available at https://www.nwtime.org/support
Apr 13 19:24:37.795940 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: ----------------------------------------------------
Apr 13 19:24:37.793289 ntpd[1993]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 19:24:37.793311 ntpd[1993]: ----------------------------------------------------
Apr 13 19:24:37.794005 ntpd[1993]: ntp-4 is maintained by Network Time Foundation,
Apr 13 19:24:37.794032 ntpd[1993]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 19:24:37.794051 ntpd[1993]: corporation. Support and training for ntp-4 are
Apr 13 19:24:37.794070 ntpd[1993]: available at https://www.nwtime.org/support
Apr 13 19:24:37.794089 ntpd[1993]: ----------------------------------------------------
Apr 13 19:24:37.800270 tar[2004]: linux-arm64/LICENSE
Apr 13 19:24:37.800270 tar[2004]: linux-arm64/helm
Apr 13 19:24:37.800767 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: proto: precision = 0.096 usec (-23)
Apr 13 19:24:37.800020 ntpd[1993]: proto: precision = 0.096 usec (-23)
Apr 13 19:24:37.801603 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 13 19:24:37.801656 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 13 19:24:37.804678 ntpd[1993]: basedate set to 2026-04-01
Apr 13 19:24:37.806583 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: basedate set to 2026-04-01
Apr 13 19:24:37.806583 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: gps base set to 2026-04-05 (week 2413)
Apr 13 19:24:37.804722 ntpd[1993]: gps base set to 2026-04-05 (week 2413)
Apr 13 19:24:37.809338 systemd-logind[2000]: New seat seat0.
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listen normally on 3 eth0 172.31.17.32:123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listen normally on 4 lo [::1]:123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: bind(21) AF_INET6 fe80::4aa:62ff:fedf:58e7%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: unable to create socket on eth0 (5) for fe80::4aa:62ff:fedf:58e7%2#123
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: failed to init interface for address fe80::4aa:62ff:fedf:58e7%2
Apr 13 19:24:37.812332 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: Listening on routing socket on fd #21 for interface updates
Apr 13 19:24:37.810454 ntpd[1993]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 19:24:37.810528 ntpd[1993]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 19:24:37.810785 ntpd[1993]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 19:24:37.810851 ntpd[1993]: Listen normally on 3 eth0 172.31.17.32:123
Apr 13 19:24:37.810918 ntpd[1993]: Listen normally on 4 lo [::1]:123
Apr 13 19:24:37.811011 ntpd[1993]: bind(21) AF_INET6 fe80::4aa:62ff:fedf:58e7%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:24:37.811053 ntpd[1993]: unable to create socket on eth0 (5) for fe80::4aa:62ff:fedf:58e7%2#123
Apr 13 19:24:37.811082 ntpd[1993]: failed to init interface for address fe80::4aa:62ff:fedf:58e7%2
Apr 13 19:24:37.811140 ntpd[1993]: Listening on routing socket on fd #21 for interface updates
Apr 13 19:24:37.827482 jq[2027]: true
Apr 13 19:24:37.831873 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found loop4
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found loop5
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found loop6
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found loop7
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p1
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p2
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p3
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found usr
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p4
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p6
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p7
Apr 13 19:24:37.837612 extend-filesystems[1991]: Found nvme0n1p9
Apr 13 19:24:37.837612 extend-filesystems[1991]: Checking size of /dev/nvme0n1p9
Apr 13 19:24:37.840652 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:24:37.896519 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:24:37.896519 ntpd[1993]: 13 Apr 19:24:37 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:24:37.840713 ntpd[1993]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 19:24:37.919433 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 19:24:37.923294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 19:24:37.963641 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 19:24:37.966427 extend-filesystems[1991]: Resized partition /dev/nvme0n1p9
Apr 13 19:24:37.986406 extend-filesystems[2041]: resize2fs 1.47.1 (20-May-2024)
Apr 13 19:24:38.014203 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 13 19:24:38.017105 coreos-metadata[1988]: Apr 13 19:24:38.015 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 19:24:38.017614 coreos-metadata[1988]: Apr 13 19:24:38.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 13 19:24:38.019823 coreos-metadata[1988]: Apr 13 19:24:38.017 INFO Fetch successful
Apr 13 19:24:38.019823 coreos-metadata[1988]: Apr 13 19:24:38.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 13 19:24:38.019823 coreos-metadata[1988]: Apr 13 19:24:38.018 INFO Fetch successful
Apr 13 19:24:38.019823 coreos-metadata[1988]: Apr 13 19:24:38.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 13 19:24:38.021595 coreos-metadata[1988]: Apr 13 19:24:38.020 INFO Fetch successful
Apr 13 19:24:38.021595 coreos-metadata[1988]: Apr 13 19:24:38.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 13 19:24:38.021595 coreos-metadata[1988]: Apr 13 19:24:38.021 INFO Fetch successful
Apr 13 19:24:38.021595 coreos-metadata[1988]: Apr 13 19:24:38.021 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 13 19:24:38.025779 coreos-metadata[1988]: Apr 13 19:24:38.022 INFO Fetch failed with 404: resource not found
Apr 13 19:24:38.025779 coreos-metadata[1988]: Apr 13 19:24:38.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 13 19:24:38.027481 coreos-metadata[1988]: Apr 13 19:24:38.026 INFO Fetch successful
Apr 13 19:24:38.027481 coreos-metadata[1988]: Apr 13 19:24:38.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.027 INFO Fetch successful
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.028 INFO Fetch successful
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.029 INFO Fetch successful
Apr 13 19:24:38.028706 coreos-metadata[1988]: Apr 13 19:24:38.029 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 13 19:24:38.036146 coreos-metadata[1988]: Apr 13 19:24:38.034 INFO Fetch successful
Apr 13 19:24:38.058396 update_engine[2001]: I20260413 19:24:38.049849 2001 main.cc:92] Flatcar Update Engine starting
Apr 13 19:24:38.083705 update_engine[2001]: I20260413 19:24:38.083631 2001 update_check_scheduler.cc:74] Next update check in 11m55s
Apr 13 19:24:38.095116 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 19:24:38.108357 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 19:24:38.113495 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 19:24:38.120275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 19:24:38.208201 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 13 19:24:38.208308 bash[2067]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:24:38.235394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 19:24:38.240625 extend-filesystems[2041]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 13 19:24:38.240625 extend-filesystems[2041]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 19:24:38.240625 extend-filesystems[2041]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 13 19:24:38.266208 extend-filesystems[1991]: Resized filesystem in /dev/nvme0n1p9
Apr 13 19:24:38.246134 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 19:24:38.262817 systemd[1]: Starting sshkeys.service...
Apr 13 19:24:38.268068 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 19:24:38.271275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 19:24:38.324728 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 19:24:38.325036 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 19:24:38.338079 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2013 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 19:24:38.371207 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1759)
Apr 13 19:24:38.384325 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 19:24:38.405737 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 19:24:38.415799 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 19:24:38.480511 polkitd[2080]: Started polkitd version 121
Apr 13 19:24:38.546499 polkitd[2080]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 19:24:38.546636 polkitd[2080]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 19:24:38.559269 polkitd[2080]: Finished loading, compiling and executing 2 rules
Apr 13 19:24:38.563065 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 19:24:38.563502 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 19:24:38.568768 polkitd[2080]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 19:24:38.677749 locksmithd[2065]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 19:24:38.678448 systemd-hostnamed[2013]: Hostname set to (transient)
Apr 13 19:24:38.678449 systemd-resolved[1940]: System hostname changed to 'ip-172-31-17-32'.
Apr 13 19:24:38.694087 containerd[2019]: time="2026-04-13T19:24:38.687672877Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 19:24:38.694558 coreos-metadata[2090]: Apr 13 19:24:38.694 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 19:24:38.707161 coreos-metadata[2090]: Apr 13 19:24:38.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 13 19:24:38.707161 coreos-metadata[2090]: Apr 13 19:24:38.706 INFO Fetch successful
Apr 13 19:24:38.707161 coreos-metadata[2090]: Apr 13 19:24:38.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 13 19:24:38.707161 coreos-metadata[2090]: Apr 13 19:24:38.707 INFO Fetch successful
Apr 13 19:24:38.711224 unknown[2090]: wrote ssh authorized keys file for user: core
Apr 13 19:24:38.769301 update-ssh-keys[2168]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 19:24:38.788909 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 19:24:38.801163 ntpd[1993]: bind(24) AF_INET6 fe80::4aa:62ff:fedf:58e7%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:24:38.801251 ntpd[1993]: unable to create socket on eth0 (6) for fe80::4aa:62ff:fedf:58e7%2#123
Apr 13 19:24:38.801675 ntpd[1993]: 13 Apr 19:24:38 ntpd[1993]: bind(24) AF_INET6 fe80::4aa:62ff:fedf:58e7%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:24:38.801675 ntpd[1993]: 13 Apr 19:24:38 ntpd[1993]: unable to create socket on eth0 (6) for fe80::4aa:62ff:fedf:58e7%2#123
Apr 13 19:24:38.801675 ntpd[1993]: 13 Apr 19:24:38 ntpd[1993]: failed to init interface for address fe80::4aa:62ff:fedf:58e7%2
Apr 13 19:24:38.801281 ntpd[1993]: failed to init interface for address fe80::4aa:62ff:fedf:58e7%2
Apr 13 19:24:38.813529 systemd[1]: Finished sshkeys.service.
Apr 13 19:24:38.880199 containerd[2019]: time="2026-04-13T19:24:38.879078806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.891648 containerd[2019]: time="2026-04-13T19:24:38.891575846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:24:38.893686 containerd[2019]: time="2026-04-13T19:24:38.891795374Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 19:24:38.893686 containerd[2019]: time="2026-04-13T19:24:38.891842162Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.892211414Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.893939186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894108566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894139046Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894501110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894537950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894569282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:24:38.894995 containerd[2019]: time="2026-04-13T19:24:38.894594158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.896314 containerd[2019]: time="2026-04-13T19:24:38.896271026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.899233 containerd[2019]: time="2026-04-13T19:24:38.897065126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 19:24:38.899233 containerd[2019]: time="2026-04-13T19:24:38.897347066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 19:24:38.899233 containerd[2019]: time="2026-04-13T19:24:38.897382814Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 19:24:38.899233 containerd[2019]: time="2026-04-13T19:24:38.897597470Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 19:24:38.899233 containerd[2019]: time="2026-04-13T19:24:38.897699362Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 19:24:38.909093 containerd[2019]: time="2026-04-13T19:24:38.909037538Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 19:24:38.909643 containerd[2019]: time="2026-04-13T19:24:38.909604370Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 19:24:38.912642 containerd[2019]: time="2026-04-13T19:24:38.912566738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.912819806Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.912902066Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.913261298Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.913705526Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.913997114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.914038874Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.914071922Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.914103830Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.914224 containerd[2019]: time="2026-04-13T19:24:38.914134922Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.921045 containerd[2019]: time="2026-04-13T19:24:38.920591270Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.921045 containerd[2019]: time="2026-04-13T19:24:38.920718134Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.921045 containerd[2019]: time="2026-04-13T19:24:38.920786666Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.921045 containerd[2019]: time="2026-04-13T19:24:38.920964698Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.921045 containerd[2019]: time="2026-04-13T19:24:38.921002090Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.922303 containerd[2019]: time="2026-04-13T19:24:38.921433322Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 19:24:38.922303 containerd[2019]: time="2026-04-13T19:24:38.921620546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.922303 containerd[2019]: time="2026-04-13T19:24:38.921809354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.922303 containerd[2019]: time="2026-04-13T19:24:38.921848990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.922303 containerd[2019]: time="2026-04-13T19:24:38.921995642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.922734254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924296666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924338462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924396494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924430946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924492950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924559118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924599138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924654194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.926293 containerd[2019]: time="2026-04-13T19:24:38.924705806Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 19:24:38.927721 containerd[2019]: time="2026-04-13T19:24:38.927504134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.929222 containerd[2019]: time="2026-04-13T19:24:38.927938834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.929560 containerd[2019]: time="2026-04-13T19:24:38.929524166Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.929933078Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.929979266Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.930032318Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.930063278Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.930129722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.930240710Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 19:24:38.930325 containerd[2019]: time="2026-04-13T19:24:38.930271706Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:24:38.932141 containerd[2019]: time="2026-04-13T19:24:38.930297386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:24:38.938152 containerd[2019]: time="2026-04-13T19:24:38.936253670Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:24:38.938152 containerd[2019]: time="2026-04-13T19:24:38.936408986Z" level=info msg="Connect containerd service"
Apr 13 19:24:38.938152 containerd[2019]: time="2026-04-13T19:24:38.936474410Z" level=info msg="using legacy CRI server"
Apr 13 19:24:38.938152 containerd[2019]: time="2026-04-13T19:24:38.936528854Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:24:38.938152 containerd[2019]: time="2026-04-13T19:24:38.936681374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:24:38.947462 containerd[2019]: time="2026-04-13T19:24:38.944659226Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.951000902Z" level=info msg="Start subscribing containerd event"
Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.951127922Z" level=info msg="Start recovering state"
Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.953332226Z" level=info msg="Start event monitor"
Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.953367842Z" level=info msg="Start
snapshots syncer" Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.953391794Z" level=info msg="Start cni network conf syncer for default" Apr 13 19:24:38.953615 containerd[2019]: time="2026-04-13T19:24:38.953422466Z" level=info msg="Start streaming server" Apr 13 19:24:38.956342 containerd[2019]: time="2026-04-13T19:24:38.956282942Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 19:24:38.956455 containerd[2019]: time="2026-04-13T19:24:38.956418914Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 19:24:38.956658 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 19:24:38.971460 containerd[2019]: time="2026-04-13T19:24:38.971396858Z" level=info msg="containerd successfully booted in 0.289945s" Apr 13 19:24:39.012619 sshd_keygen[2023]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 19:24:39.059318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 19:24:39.073766 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 19:24:39.083395 systemd[1]: Started sshd@0-172.31.17.32:22-4.175.71.9:35552.service - OpenSSH per-connection server daemon (4.175.71.9:35552). Apr 13 19:24:39.101453 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 19:24:39.101974 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 19:24:39.113642 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 19:24:39.161466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 19:24:39.175705 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 19:24:39.193924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 19:24:39.200208 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 13 19:24:39.213330 systemd-networkd[1939]: eth0: Gained IPv6LL Apr 13 19:24:39.220367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 19:24:39.226637 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 19:24:39.241856 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 13 19:24:39.255521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:39.273605 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 19:24:39.348108 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 19:24:39.388036 amazon-ssm-agent[2212]: Initializing new seelog logger Apr 13 19:24:39.388791 amazon-ssm-agent[2212]: New Seelog Logger Creation Complete Apr 13 19:24:39.388791 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.388791 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.389834 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 processing appconfig overrides Apr 13 19:24:39.389834 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.389834 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.389834 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 processing appconfig overrides Apr 13 19:24:39.390058 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.390058 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 13 19:24:39.390058 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 processing appconfig overrides Apr 13 19:24:39.391517 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO Proxy environment variables: Apr 13 19:24:39.400230 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.400230 amazon-ssm-agent[2212]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 13 19:24:39.400230 amazon-ssm-agent[2212]: 2026/04/13 19:24:39 processing appconfig overrides Apr 13 19:24:39.491272 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO https_proxy: Apr 13 19:24:39.589847 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO http_proxy: Apr 13 19:24:39.687928 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO no_proxy: Apr 13 19:24:39.698544 tar[2004]: linux-arm64/README.md Apr 13 19:24:39.723555 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 19:24:39.786491 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO Checking if agent identity type OnPrem can be assumed Apr 13 19:24:39.885937 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO Checking if agent identity type EC2 can be assumed Apr 13 19:24:39.984058 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO Agent will take identity from EC2 Apr 13 19:24:40.083266 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:40.133326 sshd[2202]: Accepted publickey for core from 4.175.71.9 port 35552 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:40.141667 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:40.165418 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 19:24:40.178670 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Apr 13 19:24:40.182323 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:40.187978 systemd-logind[2000]: New session 1 of user core. Apr 13 19:24:40.224503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 19:24:40.243818 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 19:24:40.266501 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 19:24:40.282324 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 13 19:24:40.284931 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 13 19:24:40.284931 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] Starting Core Agent Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [Registrar] Starting registrar module Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:40 INFO [EC2Identity] EC2 registration was successful. 
Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:40 INFO [CredentialRefresher] credentialRefresher has started Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:40 INFO [CredentialRefresher] Starting credentials refresher loop Apr 13 19:24:40.285098 amazon-ssm-agent[2212]: 2026-04-13 19:24:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 13 19:24:40.383293 amazon-ssm-agent[2212]: 2026-04-13 19:24:40 INFO [CredentialRefresher] Next credential rotation will be in 32.48332348943333 minutes Apr 13 19:24:40.501725 systemd[2236]: Queued start job for default target default.target. Apr 13 19:24:40.514245 systemd[2236]: Created slice app.slice - User Application Slice. Apr 13 19:24:40.514315 systemd[2236]: Reached target paths.target - Paths. Apr 13 19:24:40.514349 systemd[2236]: Reached target timers.target - Timers. Apr 13 19:24:40.524504 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 19:24:40.542525 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 19:24:40.542773 systemd[2236]: Reached target sockets.target - Sockets. Apr 13 19:24:40.542807 systemd[2236]: Reached target basic.target - Basic System. Apr 13 19:24:40.542896 systemd[2236]: Reached target default.target - Main User Target. Apr 13 19:24:40.542981 systemd[2236]: Startup finished in 264ms. Apr 13 19:24:40.543268 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 19:24:40.552660 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 19:24:41.270692 systemd[1]: Started sshd@1-172.31.17.32:22-4.175.71.9:35568.service - OpenSSH per-connection server daemon (4.175.71.9:35568). 
Apr 13 19:24:41.320718 amazon-ssm-agent[2212]: 2026-04-13 19:24:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 13 19:24:41.424030 amazon-ssm-agent[2212]: 2026-04-13 19:24:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2250) started Apr 13 19:24:41.506626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:41.513828 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:41.524791 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 19:24:41.528362 amazon-ssm-agent[2212]: 2026-04-13 19:24:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 13 19:24:41.530259 systemd[1]: Startup finished in 1.209s (kernel) + 10.622s (initrd) + 9.139s (userspace) = 20.971s. Apr 13 19:24:41.796257 ntpd[1993]: Listen normally on 7 eth0 [fe80::4aa:62ff:fedf:58e7%2]:123 Apr 13 19:24:41.796819 ntpd[1993]: 13 Apr 19:24:41 ntpd[1993]: Listen normally on 7 eth0 [fe80::4aa:62ff:fedf:58e7%2]:123 Apr 13 19:24:42.275072 sshd[2247]: Accepted publickey for core from 4.175.71.9 port 35568 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:42.276945 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:42.284742 systemd-logind[2000]: New session 2 of user core. Apr 13 19:24:42.300421 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 13 19:24:42.532351 kubelet[2261]: E0413 19:24:42.532140 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:42.536138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:42.536591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:42.537100 systemd[1]: kubelet.service: Consumed 1.279s CPU time. Apr 13 19:24:42.968503 sshd[2247]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:42.974795 systemd[1]: sshd@1-172.31.17.32:22-4.175.71.9:35568.service: Deactivated successfully. Apr 13 19:24:42.979991 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 19:24:42.981753 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit. Apr 13 19:24:42.983916 systemd-logind[2000]: Removed session 2. Apr 13 19:24:43.142525 systemd[1]: Started sshd@2-172.31.17.32:22-4.175.71.9:35570.service - OpenSSH per-connection server daemon (4.175.71.9:35570). Apr 13 19:24:44.145216 sshd[2282]: Accepted publickey for core from 4.175.71.9 port 35570 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:44.146987 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:44.155529 systemd-logind[2000]: New session 3 of user core. Apr 13 19:24:44.165439 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 19:24:44.605520 systemd-resolved[1940]: Clock change detected. Flushing caches. Apr 13 19:24:44.631109 sshd[2282]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:44.636006 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 19:24:44.636044 systemd-logind[2000]: Session 3 logged out. 
Waiting for processes to exit. Apr 13 19:24:44.637573 systemd[1]: sshd@2-172.31.17.32:22-4.175.71.9:35570.service: Deactivated successfully. Apr 13 19:24:44.645308 systemd-logind[2000]: Removed session 3. Apr 13 19:24:44.818544 systemd[1]: Started sshd@3-172.31.17.32:22-4.175.71.9:35572.service - OpenSSH per-connection server daemon (4.175.71.9:35572). Apr 13 19:24:45.862319 sshd[2289]: Accepted publickey for core from 4.175.71.9 port 35572 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:45.864934 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:45.874327 systemd-logind[2000]: New session 4 of user core. Apr 13 19:24:45.885510 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 19:24:46.576538 sshd[2289]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:46.582047 systemd[1]: sshd@3-172.31.17.32:22-4.175.71.9:35572.service: Deactivated successfully. Apr 13 19:24:46.582539 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit. Apr 13 19:24:46.585491 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 19:24:46.590448 systemd-logind[2000]: Removed session 4. Apr 13 19:24:46.768508 systemd[1]: Started sshd@4-172.31.17.32:22-4.175.71.9:49964.service - OpenSSH per-connection server daemon (4.175.71.9:49964). Apr 13 19:24:47.810024 sshd[2296]: Accepted publickey for core from 4.175.71.9 port 49964 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:47.811649 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:47.821317 systemd-logind[2000]: New session 5 of user core. Apr 13 19:24:47.830245 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 13 19:24:48.371395 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 19:24:48.372555 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:48.389521 sudo[2299]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:48.556903 sshd[2296]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:48.564148 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit. Apr 13 19:24:48.564933 systemd[1]: sshd@4-172.31.17.32:22-4.175.71.9:49964.service: Deactivated successfully. Apr 13 19:24:48.568755 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 19:24:48.572263 systemd-logind[2000]: Removed session 5. Apr 13 19:24:48.751684 systemd[1]: Started sshd@5-172.31.17.32:22-4.175.71.9:49968.service - OpenSSH per-connection server daemon (4.175.71.9:49968). Apr 13 19:24:49.782709 sshd[2304]: Accepted publickey for core from 4.175.71.9 port 49968 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:49.784574 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:49.792895 systemd-logind[2000]: New session 6 of user core. Apr 13 19:24:49.800254 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 19:24:50.328905 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 19:24:50.329672 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:50.335898 sudo[2308]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:50.346127 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 19:24:50.346745 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:50.368519 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 19:24:50.384950 auditctl[2311]: No rules Apr 13 19:24:50.387308 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 19:24:50.389070 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 19:24:50.397140 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 19:24:50.458901 augenrules[2329]: No rules Apr 13 19:24:50.462097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 19:24:50.464477 sudo[2307]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:50.631564 sshd[2304]: pam_unix(sshd:session): session closed for user core Apr 13 19:24:50.638498 systemd[1]: sshd@5-172.31.17.32:22-4.175.71.9:49968.service: Deactivated successfully. Apr 13 19:24:50.641580 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 19:24:50.645857 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit. Apr 13 19:24:50.647703 systemd-logind[2000]: Removed session 6. Apr 13 19:24:50.805383 systemd[1]: Started sshd@6-172.31.17.32:22-4.175.71.9:49976.service - OpenSSH per-connection server daemon (4.175.71.9:49976). 
Apr 13 19:24:51.818009 sshd[2337]: Accepted publickey for core from 4.175.71.9 port 49976 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:24:51.819679 sshd[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:24:51.828098 systemd-logind[2000]: New session 7 of user core. Apr 13 19:24:51.836247 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 19:24:52.348161 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 19:24:52.349009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 19:24:52.348844 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 19:24:52.361654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:52.764396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:52.778721 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:24:52.863596 kubelet[2362]: E0413 19:24:52.863506 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:24:52.871643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:24:52.871970 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:24:52.922466 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Apr 13 19:24:52.935498 (dockerd)[2371]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 19:24:53.340080 dockerd[2371]: time="2026-04-13T19:24:53.339952771Z" level=info msg="Starting up" Apr 13 19:24:53.466904 systemd[1]: var-lib-docker-metacopy\x2dcheck361422237-merged.mount: Deactivated successfully. Apr 13 19:24:53.484137 dockerd[2371]: time="2026-04-13T19:24:53.484076132Z" level=info msg="Loading containers: start." Apr 13 19:24:53.648015 kernel: Initializing XFRM netlink socket Apr 13 19:24:53.681967 (udev-worker)[2392]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:53.767564 systemd-networkd[1939]: docker0: Link UP Apr 13 19:24:53.794566 dockerd[2371]: time="2026-04-13T19:24:53.794512185Z" level=info msg="Loading containers: done." Apr 13 19:24:53.821329 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3389882937-merged.mount: Deactivated successfully. Apr 13 19:24:53.824497 dockerd[2371]: time="2026-04-13T19:24:53.823371909Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 19:24:53.824497 dockerd[2371]: time="2026-04-13T19:24:53.823682457Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 19:24:53.824497 dockerd[2371]: time="2026-04-13T19:24:53.823907481Z" level=info msg="Daemon has completed initialization" Apr 13 19:24:53.874566 dockerd[2371]: time="2026-04-13T19:24:53.874337554Z" level=info msg="API listen on /run/docker.sock" Apr 13 19:24:53.875040 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 13 19:24:54.834745 containerd[2019]: time="2026-04-13T19:24:54.834672778Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\"" Apr 13 19:24:55.493018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2931791100.mount: Deactivated successfully. Apr 13 19:24:57.211620 containerd[2019]: time="2026-04-13T19:24:57.211527838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:57.214821 containerd[2019]: time="2026-04-13T19:24:57.214751218Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=24476890" Apr 13 19:24:57.216957 containerd[2019]: time="2026-04-13T19:24:57.216876250Z" level=info msg="ImageCreate event name:\"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:57.227021 containerd[2019]: time="2026-04-13T19:24:57.225967558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:57.232993 containerd[2019]: time="2026-04-13T19:24:57.232913302Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"24473489\" in 2.398171896s" Apr 13 19:24:57.233218 containerd[2019]: time="2026-04-13T19:24:57.233188474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:63b89433458ca86408a1468b411c42a89f4660e49c87651709b5c4f063f4849f\"" Apr 13 19:24:57.234190 containerd[2019]: 
time="2026-04-13T19:24:57.234128890Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\"" Apr 13 19:24:59.013811 containerd[2019]: time="2026-04-13T19:24:59.013719395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:59.016013 containerd[2019]: time="2026-04-13T19:24:59.015936947Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=19139642" Apr 13 19:24:59.018015 containerd[2019]: time="2026-04-13T19:24:59.017172359Z" level=info msg="ImageCreate event name:\"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:59.022999 containerd[2019]: time="2026-04-13T19:24:59.022889639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:59.025412 containerd[2019]: time="2026-04-13T19:24:59.025363103Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"20617664\" in 1.791178413s" Apr 13 19:24:59.025683 containerd[2019]: time="2026-04-13T19:24:59.025553531Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:6660e82e8aca5f16241c2665727858d15219f0f794a62238218e253cdcecb8d7\"" Apr 13 19:24:59.026948 containerd[2019]: time="2026-04-13T19:24:59.026726531Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\"" Apr 13 
19:25:00.444021 containerd[2019]: time="2026-04-13T19:25:00.443908238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.446143 containerd[2019]: time="2026-04-13T19:25:00.446072714Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=14195539" Apr 13 19:25:00.448014 containerd[2019]: time="2026-04-13T19:25:00.447943178Z" level=info msg="ImageCreate event name:\"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.453953 containerd[2019]: time="2026-04-13T19:25:00.453870026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:00.456970 containerd[2019]: time="2026-04-13T19:25:00.456360098Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"15673579\" in 1.429577215s" Apr 13 19:25:00.456970 containerd[2019]: time="2026-04-13T19:25:00.456427358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:ca0c06ae95330c4e10d8daa0957779be495432a703b748d767d63111101eed54\"" Apr 13 19:25:00.458535 containerd[2019]: time="2026-04-13T19:25:00.458367050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\"" Apr 13 19:25:01.784858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476721534.mount: Deactivated successfully. 
Apr 13 19:25:02.179006 containerd[2019]: time="2026-04-13T19:25:02.177694083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:02.179703 containerd[2019]: time="2026-04-13T19:25:02.179640831Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=22697099" Apr 13 19:25:02.180212 containerd[2019]: time="2026-04-13T19:25:02.180171735Z" level=info msg="ImageCreate event name:\"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:02.184337 containerd[2019]: time="2026-04-13T19:25:02.184263507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:02.185915 containerd[2019]: time="2026-04-13T19:25:02.185852547Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"22696118\" in 1.727429661s" Apr 13 19:25:02.186075 containerd[2019]: time="2026-04-13T19:25:02.185914287Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:c4c6d0b908d750e54be07f6a15d89db69fc1246039cc5e52c7eeeee886a1a713\"" Apr 13 19:25:02.187205 containerd[2019]: time="2026-04-13T19:25:02.187108275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 19:25:02.723682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453676619.mount: Deactivated successfully. 
Apr 13 19:25:03.098934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 19:25:03.112516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:03.487298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:25:03.500203 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:25:03.606271 kubelet[2635]: E0413 19:25:03.606200 2635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:25:03.612393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:25:03.612767 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 19:25:04.622888 containerd[2019]: time="2026-04-13T19:25:04.622803871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:04.749347 containerd[2019]: time="2026-04-13T19:25:04.749272160Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Apr 13 19:25:04.862684 containerd[2019]: time="2026-04-13T19:25:04.862614632Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:04.999095 containerd[2019]: time="2026-04-13T19:25:04.998669421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:05.001639 containerd[2019]: time="2026-04-13T19:25:05.001486781Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.814103886s" Apr 13 19:25:05.001639 containerd[2019]: time="2026-04-13T19:25:05.001576157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Apr 13 19:25:05.002754 containerd[2019]: time="2026-04-13T19:25:05.002469437Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 19:25:06.024739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916237920.mount: Deactivated successfully. 
Apr 13 19:25:06.033885 containerd[2019]: time="2026-04-13T19:25:06.033821358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.035458 containerd[2019]: time="2026-04-13T19:25:06.035404998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Apr 13 19:25:06.037021 containerd[2019]: time="2026-04-13T19:25:06.036406866Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.041019 containerd[2019]: time="2026-04-13T19:25:06.040713594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:06.043017 containerd[2019]: time="2026-04-13T19:25:06.042606510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 1.040082965s" Apr 13 19:25:06.043017 containerd[2019]: time="2026-04-13T19:25:06.042661590Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Apr 13 19:25:06.044501 containerd[2019]: time="2026-04-13T19:25:06.044213514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 19:25:06.585782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2000561558.mount: Deactivated successfully. 
Apr 13 19:25:07.685819 containerd[2019]: time="2026-04-13T19:25:07.685730902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:07.688245 containerd[2019]: time="2026-04-13T19:25:07.688135870Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21139072" Apr 13 19:25:07.691032 containerd[2019]: time="2026-04-13T19:25:07.689231638Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:07.695709 containerd[2019]: time="2026-04-13T19:25:07.695649502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:07.698369 containerd[2019]: time="2026-04-13T19:25:07.698316826Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.65404972s" Apr 13 19:25:07.698562 containerd[2019]: time="2026-04-13T19:25:07.698529682Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Apr 13 19:25:08.527564 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 19:25:13.849124 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 19:25:13.859174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:14.201372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:25:14.205861 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 19:25:14.282948 kubelet[2748]: E0413 19:25:14.282857 2748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 19:25:14.289516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 19:25:14.289863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 19:25:15.019380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:25:15.032482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:15.095663 systemd[1]: Reloading requested from client PID 2762 ('systemctl') (unit session-7.scope)... Apr 13 19:25:15.095701 systemd[1]: Reloading... Apr 13 19:25:15.351036 zram_generator::config[2802]: No configuration found. Apr 13 19:25:15.602660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:25:15.779545 systemd[1]: Reloading finished in 683 ms. Apr 13 19:25:15.866778 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 19:25:15.867040 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 19:25:15.867575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:25:15.875557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:25:16.209252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 19:25:16.227844 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:25:16.300252 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:25:16.300252 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:25:16.301574 kubelet[2864]: I0413 19:25:16.301490 2864 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:25:17.786626 kubelet[2864]: I0413 19:25:17.786537 2864 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 19:25:17.786626 kubelet[2864]: I0413 19:25:17.786606 2864 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:25:17.787329 kubelet[2864]: I0413 19:25:17.786683 2864 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 19:25:17.787329 kubelet[2864]: I0413 19:25:17.786700 2864 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 13 19:25:17.787329 kubelet[2864]: I0413 19:25:17.787248 2864 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:25:17.799353 kubelet[2864]: E0413 19:25:17.799270 2864 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.17.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:25:17.800233 kubelet[2864]: I0413 19:25:17.800035 2864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:25:17.807923 kubelet[2864]: E0413 19:25:17.807858 2864 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:25:17.810041 kubelet[2864]: I0413 19:25:17.808385 2864 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 19:25:17.814706 kubelet[2864]: I0413 19:25:17.814654 2864 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 19:25:17.815165 kubelet[2864]: I0413 19:25:17.815109 2864 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:25:17.815452 kubelet[2864]: I0413 19:25:17.815164 2864 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:25:17.815452 kubelet[2864]: I0413 19:25:17.815441 2864 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:25:17.815675 kubelet[2864]: I0413 19:25:17.815460 2864 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 19:25:17.815675 kubelet[2864]: I0413 19:25:17.815627 2864 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 19:25:17.817943 kubelet[2864]: I0413 19:25:17.817908 2864 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:25:17.820264 kubelet[2864]: I0413 19:25:17.820231 2864 kubelet.go:475] "Attempting to sync node with API server" Apr 13 19:25:17.820391 kubelet[2864]: I0413 19:25:17.820272 2864 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:25:17.820391 kubelet[2864]: I0413 19:25:17.820333 2864 kubelet.go:387] "Adding apiserver pod source" Apr 13 19:25:17.820391 kubelet[2864]: I0413 19:25:17.820362 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:25:17.823018 kubelet[2864]: E0413 19:25:17.822746 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:25:17.825044 kubelet[2864]: E0413 19:25:17.823368 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-32&limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:25:17.825044 kubelet[2864]: I0413 19:25:17.823533 2864 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:25:17.825044 kubelet[2864]: I0413 19:25:17.824468 2864 kubelet.go:940] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:25:17.825044 kubelet[2864]: I0413 19:25:17.824517 2864 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 19:25:17.825044 kubelet[2864]: W0413 19:25:17.824572 2864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 19:25:17.831438 kubelet[2864]: I0413 19:25:17.831390 2864 server.go:1262] "Started kubelet" Apr 13 19:25:17.834788 kubelet[2864]: I0413 19:25:17.834730 2864 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:25:17.836639 kubelet[2864]: I0413 19:25:17.836580 2864 server.go:310] "Adding debug handlers to kubelet server" Apr 13 19:25:17.837856 kubelet[2864]: I0413 19:25:17.836710 2864 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:25:17.838131 kubelet[2864]: I0413 19:25:17.838100 2864 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 19:25:17.838733 kubelet[2864]: I0413 19:25:17.838693 2864 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:25:17.844705 kubelet[2864]: E0413 19:25:17.841531 2864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.32:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.32:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-32.18a6011ad6f442e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-32,UID:ip-172-31-17-32,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-32,},FirstTimestamp:2026-04-13 19:25:17.831348969 +0000 UTC m=+1.596735633,LastTimestamp:2026-04-13 19:25:17.831348969 +0000 UTC m=+1.596735633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-32,}" Apr 13 19:25:17.846544 kubelet[2864]: I0413 19:25:17.846302 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:25:17.848256 kubelet[2864]: E0413 19:25:17.848216 2864 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:25:17.849682 kubelet[2864]: I0413 19:25:17.848221 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:25:17.851343 kubelet[2864]: E0413 19:25:17.849252 2864 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-32\" not found" Apr 13 19:25:17.851524 kubelet[2864]: I0413 19:25:17.850242 2864 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 19:25:17.852490 kubelet[2864]: I0413 19:25:17.850259 2864 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 19:25:17.852762 kubelet[2864]: I0413 19:25:17.852726 2864 reconciler.go:29] "Reconciler: start to sync state" Apr 13 19:25:17.853645 kubelet[2864]: I0413 19:25:17.853241 2864 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:25:17.853645 kubelet[2864]: I0413 19:25:17.853399 2864 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:25:17.854130 kubelet[2864]: E0413 19:25:17.854096 2864 reflector.go:205] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://172.31.17.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:25:17.854812 kubelet[2864]: E0413 19:25:17.854363 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-32?timeout=10s\": dial tcp 172.31.17.32:6443: connect: connection refused" interval="200ms" Apr 13 19:25:17.858024 kubelet[2864]: I0413 19:25:17.857969 2864 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:25:17.892276 kubelet[2864]: I0413 19:25:17.892238 2864 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:25:17.892498 kubelet[2864]: I0413 19:25:17.892471 2864 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:25:17.892642 kubelet[2864]: I0413 19:25:17.892623 2864 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:25:17.899245 kubelet[2864]: I0413 19:25:17.899205 2864 policy_none.go:49] "None policy: Start" Apr 13 19:25:17.900039 kubelet[2864]: I0413 19:25:17.899393 2864 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 19:25:17.900039 kubelet[2864]: I0413 19:25:17.899424 2864 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 19:25:17.901169 kubelet[2864]: I0413 19:25:17.901118 2864 policy_none.go:47] "Start" Apr 13 19:25:17.913155 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:25:17.915617 kubelet[2864]: I0413 19:25:17.914612 2864 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 19:25:17.921722 kubelet[2864]: I0413 19:25:17.920552 2864 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:25:17.921722 kubelet[2864]: I0413 19:25:17.921726 2864 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 19:25:17.921932 kubelet[2864]: I0413 19:25:17.921769 2864 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 19:25:17.921932 kubelet[2864]: E0413 19:25:17.921842 2864 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:25:17.923493 kubelet[2864]: E0413 19:25:17.923436 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:25:17.937426 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:25:17.951616 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 19:25:17.952183 kubelet[2864]: E0413 19:25:17.952014 2864 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-32\" not found" Apr 13 19:25:17.955669 kubelet[2864]: E0413 19:25:17.955415 2864 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:25:17.955909 kubelet[2864]: I0413 19:25:17.955888 2864 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:25:17.956106 kubelet[2864]: I0413 19:25:17.956057 2864 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:25:17.956758 kubelet[2864]: I0413 19:25:17.956725 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:25:17.959587 kubelet[2864]: E0413 19:25:17.959455 2864 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:25:17.959587 kubelet[2864]: E0413 19:25:17.959523 2864 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-32\" not found" Apr 13 19:25:18.044635 systemd[1]: Created slice kubepods-burstable-pod68c00809cf82c05e55b6b1475562fd30.slice - libcontainer container kubepods-burstable-pod68c00809cf82c05e55b6b1475562fd30.slice. 
Apr 13 19:25:18.053639 kubelet[2864]: I0413 19:25:18.053527 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccb960f5d25b0b7c38311e4fea57ed7a-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-32\" (UID: \"ccb960f5d25b0b7c38311e4fea57ed7a\") " pod="kube-system/kube-scheduler-ip-172-31-17-32" Apr 13 19:25:18.053639 kubelet[2864]: I0413 19:25:18.053593 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32" Apr 13 19:25:18.053639 kubelet[2864]: I0413 19:25:18.053634 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32" Apr 13 19:25:18.054485 kubelet[2864]: I0413 19:25:18.053675 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32" Apr 13 19:25:18.054485 kubelet[2864]: I0413 19:25:18.053710 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-ca-certs\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " 
pod="kube-system/kube-apiserver-ip-172-31-17-32" Apr 13 19:25:18.054485 kubelet[2864]: I0413 19:25:18.053748 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " pod="kube-system/kube-apiserver-ip-172-31-17-32" Apr 13 19:25:18.054485 kubelet[2864]: I0413 19:25:18.053813 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " pod="kube-system/kube-apiserver-ip-172-31-17-32" Apr 13 19:25:18.054485 kubelet[2864]: I0413 19:25:18.053850 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32" Apr 13 19:25:18.054741 kubelet[2864]: I0413 19:25:18.053888 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32" Apr 13 19:25:18.056191 kubelet[2864]: E0413 19:25:18.055655 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32" Apr 13 19:25:18.056191 kubelet[2864]: E0413 
19:25:18.055716 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-32?timeout=10s\": dial tcp 172.31.17.32:6443: connect: connection refused" interval="400ms" Apr 13 19:25:18.061047 kubelet[2864]: I0413 19:25:18.060838 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32" Apr 13 19:25:18.064131 kubelet[2864]: E0413 19:25:18.064070 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.32:6443/api/v1/nodes\": dial tcp 172.31.17.32:6443: connect: connection refused" node="ip-172-31-17-32" Apr 13 19:25:18.066956 systemd[1]: Created slice kubepods-burstable-pod81d995be9cc0f708587b9b055592df6c.slice - libcontainer container kubepods-burstable-pod81d995be9cc0f708587b9b055592df6c.slice. Apr 13 19:25:18.072716 kubelet[2864]: E0413 19:25:18.072661 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32" Apr 13 19:25:18.078728 systemd[1]: Created slice kubepods-burstable-podccb960f5d25b0b7c38311e4fea57ed7a.slice - libcontainer container kubepods-burstable-podccb960f5d25b0b7c38311e4fea57ed7a.slice. 
Apr 13 19:25:18.083029 kubelet[2864]: E0413 19:25:18.082955 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32" Apr 13 19:25:18.267944 kubelet[2864]: I0413 19:25:18.267900 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32" Apr 13 19:25:18.268489 kubelet[2864]: E0413 19:25:18.268440 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.32:6443/api/v1/nodes\": dial tcp 172.31.17.32:6443: connect: connection refused" node="ip-172-31-17-32" Apr 13 19:25:18.360651 containerd[2019]: time="2026-04-13T19:25:18.360228859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-32,Uid:68c00809cf82c05e55b6b1475562fd30,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:18.376427 containerd[2019]: time="2026-04-13T19:25:18.376354051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-32,Uid:81d995be9cc0f708587b9b055592df6c,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:18.386934 containerd[2019]: time="2026-04-13T19:25:18.386570455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-32,Uid:ccb960f5d25b0b7c38311e4fea57ed7a,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:18.456882 kubelet[2864]: E0413 19:25:18.456815 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-32?timeout=10s\": dial tcp 172.31.17.32:6443: connect: connection refused" interval="800ms" Apr 13 19:25:18.506642 kubelet[2864]: E0413 19:25:18.506459 2864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.32:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.32:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-17-32.18a6011ad6f442e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-32,UID:ip-172-31-17-32,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-32,},FirstTimestamp:2026-04-13 19:25:17.831348969 +0000 UTC m=+1.596735633,LastTimestamp:2026-04-13 19:25:17.831348969 +0000 UTC m=+1.596735633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-32,}" Apr 13 19:25:18.671417 kubelet[2864]: I0413 19:25:18.671276 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32" Apr 13 19:25:18.672843 kubelet[2864]: E0413 19:25:18.672777 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.32:6443/api/v1/nodes\": dial tcp 172.31.17.32:6443: connect: connection refused" node="ip-172-31-17-32" Apr 13 19:25:18.764092 kubelet[2864]: E0413 19:25:18.764033 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.17.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:25:18.825709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697437.mount: Deactivated successfully. 
Apr 13 19:25:18.833687 containerd[2019]: time="2026-04-13T19:25:18.831943150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 19:25:18.835033 containerd[2019]: time="2026-04-13T19:25:18.834960322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Apr 13 19:25:18.838421 containerd[2019]: time="2026-04-13T19:25:18.838349362Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 19:25:18.840651 containerd[2019]: time="2026-04-13T19:25:18.840603154Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 13 19:25:18.841423 containerd[2019]: time="2026-04-13T19:25:18.841381858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 19:25:18.843784 containerd[2019]: time="2026-04-13T19:25:18.843741526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 13 19:25:18.846686 containerd[2019]: time="2026-04-13T19:25:18.846639334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 19:25:18.852881 containerd[2019]: time="2026-04-13T19:25:18.852819178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 19:25:18.855179 containerd[2019]: time="2026-04-13T19:25:18.854868334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.523003ms"
Apr 13 19:25:18.860279 containerd[2019]: time="2026-04-13T19:25:18.860207998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.532915ms"
Apr 13 19:25:18.862276 containerd[2019]: time="2026-04-13T19:25:18.862211758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.739279ms"
Apr 13 19:25:19.024412 kubelet[2864]: E0413 19:25:19.023932 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.17.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 19:25:19.027036 kubelet[2864]: E0413 19:25:19.026623 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.17.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 19:25:19.065279 containerd[2019]: time="2026-04-13T19:25:19.064077223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:25:19.065279 containerd[2019]: time="2026-04-13T19:25:19.064782475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:25:19.065279 containerd[2019]: time="2026-04-13T19:25:19.064846303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.065279 containerd[2019]: time="2026-04-13T19:25:19.065050939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.073261 containerd[2019]: time="2026-04-13T19:25:19.072907987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:25:19.075638 kubelet[2864]: E0413 19:25:19.075558 2864 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.17.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-32&limit=500&resourceVersion=0\": dial tcp 172.31.17.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 19:25:19.075836 containerd[2019]: time="2026-04-13T19:25:19.073040863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:25:19.075836 containerd[2019]: time="2026-04-13T19:25:19.075054715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.075836 containerd[2019]: time="2026-04-13T19:25:19.075256207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.087503 containerd[2019]: time="2026-04-13T19:25:19.086666887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:25:19.087503 containerd[2019]: time="2026-04-13T19:25:19.086752855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:25:19.087503 containerd[2019]: time="2026-04-13T19:25:19.086778667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.087503 containerd[2019]: time="2026-04-13T19:25:19.086951023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:25:19.110307 systemd[1]: Started cri-containerd-18e0edc45364166753bae8c65aad198035e6d96ee1acf9b7e9cfaf6e3fff614d.scope - libcontainer container 18e0edc45364166753bae8c65aad198035e6d96ee1acf9b7e9cfaf6e3fff614d.
Apr 13 19:25:19.148315 systemd[1]: Started cri-containerd-ee658a79f748492d9fec55c6fbb2f9c59208f4b431bd9d521f83cbabaf47278d.scope - libcontainer container ee658a79f748492d9fec55c6fbb2f9c59208f4b431bd9d521f83cbabaf47278d.
Apr 13 19:25:19.168348 systemd[1]: Started cri-containerd-94231f1084eee2ecfbdc072657ee04e91703ee6fa621a8b076fe070d87ec7df7.scope - libcontainer container 94231f1084eee2ecfbdc072657ee04e91703ee6fa621a8b076fe070d87ec7df7.
Apr 13 19:25:19.256172 containerd[2019]: time="2026-04-13T19:25:19.255886136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-32,Uid:ccb960f5d25b0b7c38311e4fea57ed7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"18e0edc45364166753bae8c65aad198035e6d96ee1acf9b7e9cfaf6e3fff614d\""
Apr 13 19:25:19.259396 containerd[2019]: time="2026-04-13T19:25:19.258323012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-32,Uid:81d995be9cc0f708587b9b055592df6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee658a79f748492d9fec55c6fbb2f9c59208f4b431bd9d521f83cbabaf47278d\""
Apr 13 19:25:19.260904 kubelet[2864]: E0413 19:25:19.260747 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-32?timeout=10s\": dial tcp 172.31.17.32:6443: connect: connection refused" interval="1.6s"
Apr 13 19:25:19.276881 containerd[2019]: time="2026-04-13T19:25:19.276724268Z" level=info msg="CreateContainer within sandbox \"18e0edc45364166753bae8c65aad198035e6d96ee1acf9b7e9cfaf6e3fff614d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 13 19:25:19.281065 containerd[2019]: time="2026-04-13T19:25:19.280798292Z" level=info msg="CreateContainer within sandbox \"ee658a79f748492d9fec55c6fbb2f9c59208f4b431bd9d521f83cbabaf47278d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 13 19:25:19.292821 containerd[2019]: time="2026-04-13T19:25:19.292741400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-32,Uid:68c00809cf82c05e55b6b1475562fd30,Namespace:kube-system,Attempt:0,} returns sandbox id \"94231f1084eee2ecfbdc072657ee04e91703ee6fa621a8b076fe070d87ec7df7\""
Apr 13 19:25:19.303609 containerd[2019]: time="2026-04-13T19:25:19.303556400Z" level=info msg="CreateContainer within sandbox \"94231f1084eee2ecfbdc072657ee04e91703ee6fa621a8b076fe070d87ec7df7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 13 19:25:19.311080 containerd[2019]: time="2026-04-13T19:25:19.310806884Z" level=info msg="CreateContainer within sandbox \"18e0edc45364166753bae8c65aad198035e6d96ee1acf9b7e9cfaf6e3fff614d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aba07e330f532bba4242b9f16fb05bb27152a1afd6c26b649f2e91befaaae59f\""
Apr 13 19:25:19.313028 containerd[2019]: time="2026-04-13T19:25:19.311941364Z" level=info msg="StartContainer for \"aba07e330f532bba4242b9f16fb05bb27152a1afd6c26b649f2e91befaaae59f\""
Apr 13 19:25:19.342184 containerd[2019]: time="2026-04-13T19:25:19.341999744Z" level=info msg="CreateContainer within sandbox \"ee658a79f748492d9fec55c6fbb2f9c59208f4b431bd9d521f83cbabaf47278d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e60177215312b1adba1968280ff6e55903ea6ae439574747806bab1bbc2a6d9\""
Apr 13 19:25:19.344035 containerd[2019]: time="2026-04-13T19:25:19.343673984Z" level=info msg="StartContainer for \"5e60177215312b1adba1968280ff6e55903ea6ae439574747806bab1bbc2a6d9\""
Apr 13 19:25:19.349436 containerd[2019]: time="2026-04-13T19:25:19.349360052Z" level=info msg="CreateContainer within sandbox \"94231f1084eee2ecfbdc072657ee04e91703ee6fa621a8b076fe070d87ec7df7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74022d358a2b2781294800bced0685230f3d926bac9e39f1126474aa68dec6d4\""
Apr 13 19:25:19.350225 containerd[2019]: time="2026-04-13T19:25:19.350070908Z" level=info msg="StartContainer for \"74022d358a2b2781294800bced0685230f3d926bac9e39f1126474aa68dec6d4\""
Apr 13 19:25:19.371128 systemd[1]: Started cri-containerd-aba07e330f532bba4242b9f16fb05bb27152a1afd6c26b649f2e91befaaae59f.scope - libcontainer container aba07e330f532bba4242b9f16fb05bb27152a1afd6c26b649f2e91befaaae59f.
Apr 13 19:25:19.436334 systemd[1]: Started cri-containerd-5e60177215312b1adba1968280ff6e55903ea6ae439574747806bab1bbc2a6d9.scope - libcontainer container 5e60177215312b1adba1968280ff6e55903ea6ae439574747806bab1bbc2a6d9.
Apr 13 19:25:19.450341 systemd[1]: Started cri-containerd-74022d358a2b2781294800bced0685230f3d926bac9e39f1126474aa68dec6d4.scope - libcontainer container 74022d358a2b2781294800bced0685230f3d926bac9e39f1126474aa68dec6d4.
Apr 13 19:25:19.478329 kubelet[2864]: I0413 19:25:19.478277 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32"
Apr 13 19:25:19.478833 kubelet[2864]: E0413 19:25:19.478764 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.32:6443/api/v1/nodes\": dial tcp 172.31.17.32:6443: connect: connection refused" node="ip-172-31-17-32"
Apr 13 19:25:19.511072 containerd[2019]: time="2026-04-13T19:25:19.510962901Z" level=info msg="StartContainer for \"aba07e330f532bba4242b9f16fb05bb27152a1afd6c26b649f2e91befaaae59f\" returns successfully"
Apr 13 19:25:19.583463 containerd[2019]: time="2026-04-13T19:25:19.583233045Z" level=info msg="StartContainer for \"74022d358a2b2781294800bced0685230f3d926bac9e39f1126474aa68dec6d4\" returns successfully"
Apr 13 19:25:19.593094 containerd[2019]: time="2026-04-13T19:25:19.592864725Z" level=info msg="StartContainer for \"5e60177215312b1adba1968280ff6e55903ea6ae439574747806bab1bbc2a6d9\" returns successfully"
Apr 13 19:25:19.943008 kubelet[2864]: E0413 19:25:19.942323 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:19.955013 kubelet[2864]: E0413 19:25:19.953414 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:19.960630 kubelet[2864]: E0413 19:25:19.960564 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:20.962708 kubelet[2864]: E0413 19:25:20.962653 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:20.963646 kubelet[2864]: E0413 19:25:20.963606 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:21.083407 kubelet[2864]: I0413 19:25:21.082730 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32"
Apr 13 19:25:23.372139 update_engine[2001]: I20260413 19:25:23.372030 2001 update_attempter.cc:509] Updating boot flags...
Apr 13 19:25:23.495152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3162)
Apr 13 19:25:24.236018 kubelet[2864]: E0413 19:25:24.233825 2864 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:24.470486 kubelet[2864]: E0413 19:25:24.470384 2864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-32\" not found" node="ip-172-31-17-32"
Apr 13 19:25:24.534117 kubelet[2864]: I0413 19:25:24.531943 2864 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-32"
Apr 13 19:25:24.534117 kubelet[2864]: E0413 19:25:24.532012 2864 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-32\": node \"ip-172-31-17-32\" not found"
Apr 13 19:25:24.550921 kubelet[2864]: I0413 19:25:24.550031 2864 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:24.671736 kubelet[2864]: E0413 19:25:24.671689 2864 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:24.671971 kubelet[2864]: I0413 19:25:24.671948 2864 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:24.693438 kubelet[2864]: E0413 19:25:24.693388 2864 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:24.693713 kubelet[2864]: I0413 19:25:24.693685 2864 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-32"
Apr 13 19:25:24.709467 kubelet[2864]: E0413 19:25:24.709397 2864 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-32"
Apr 13 19:25:24.826090 kubelet[2864]: I0413 19:25:24.825635 2864 apiserver.go:52] "Watching apiserver"
Apr 13 19:25:24.853680 kubelet[2864]: I0413 19:25:24.853585 2864 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 13 19:25:25.060020 kubelet[2864]: I0413 19:25:25.058424 2864 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:25.354330 kubelet[2864]: I0413 19:25:25.354246 2864 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:26.708306 systemd[1]: Reloading requested from client PID 3246 ('systemctl') (unit session-7.scope)...
Apr 13 19:25:26.708331 systemd[1]: Reloading...
Apr 13 19:25:26.918577 zram_generator::config[3292]: No configuration found.
Apr 13 19:25:27.193387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:25:27.407661 systemd[1]: Reloading finished in 698 ms.
Apr 13 19:25:27.502490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:25:27.525920 systemd[1]: kubelet.service: Deactivated successfully.
Apr 13 19:25:27.526671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:25:27.526929 systemd[1]: kubelet.service: Consumed 2.370s CPU time, 125.1M memory peak, 0B memory swap peak.
Apr 13 19:25:27.545873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:25:28.108442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:25:28.108923 (kubelet)[3347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 19:25:28.202723 kubelet[3347]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 19:25:28.202723 kubelet[3347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 19:25:28.204948 kubelet[3347]: I0413 19:25:28.203713 3347 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 19:25:28.225919 kubelet[3347]: I0413 19:25:28.225853 3347 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 19:25:28.225919 kubelet[3347]: I0413 19:25:28.225903 3347 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 19:25:28.226178 kubelet[3347]: I0413 19:25:28.225952 3347 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 19:25:28.229036 kubelet[3347]: I0413 19:25:28.225968 3347 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 19:25:28.229036 kubelet[3347]: I0413 19:25:28.228855 3347 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 19:25:28.233900 kubelet[3347]: I0413 19:25:28.233842 3347 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 13 19:25:28.239919 kubelet[3347]: I0413 19:25:28.239663 3347 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 19:25:28.250326 kubelet[3347]: E0413 19:25:28.250002 3347 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 19:25:28.250718 kubelet[3347]: I0413 19:25:28.250686 3347 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 19:25:28.263338 kubelet[3347]: I0413 19:25:28.263272 3347 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 19:25:28.263837 kubelet[3347]: I0413 19:25:28.263770 3347 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 19:25:28.264439 kubelet[3347]: I0413 19:25:28.263825 3347 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 19:25:28.264439 kubelet[3347]: I0413 19:25:28.264107 3347 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 19:25:28.264439 kubelet[3347]: I0413 19:25:28.264126 3347 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 19:25:28.264439 kubelet[3347]: I0413 19:25:28.264166 3347 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 19:25:28.266173 kubelet[3347]: I0413 19:25:28.264501 3347 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:25:28.266173 kubelet[3347]: I0413 19:25:28.264783 3347 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 19:25:28.266173 kubelet[3347]: I0413 19:25:28.264816 3347 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 19:25:28.266173 kubelet[3347]: I0413 19:25:28.264862 3347 kubelet.go:387] "Adding apiserver pod source"
Apr 13 19:25:28.266173 kubelet[3347]: I0413 19:25:28.264891 3347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 19:25:28.286013 kubelet[3347]: I0413 19:25:28.285287 3347 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 19:25:28.289002 kubelet[3347]: I0413 19:25:28.288007 3347 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 19:25:28.289202 kubelet[3347]: I0413 19:25:28.289179 3347 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 19:25:28.306018 kubelet[3347]: I0413 19:25:28.305381 3347 server.go:1262] "Started kubelet"
Apr 13 19:25:28.313038 kubelet[3347]: I0413 19:25:28.312733 3347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 19:25:28.317722 sudo[3363]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 13 19:25:28.318422 sudo[3363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 13 19:25:28.330824 kubelet[3347]: I0413 19:25:28.330383 3347 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 19:25:28.339029 kubelet[3347]: I0413 19:25:28.337385 3347 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 19:25:28.350241 kubelet[3347]: I0413 19:25:28.350162 3347 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 19:25:28.350444 kubelet[3347]: I0413 19:25:28.350419 3347 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 19:25:28.351435 kubelet[3347]: I0413 19:25:28.350858 3347 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 19:25:28.353083 kubelet[3347]: I0413 19:25:28.351931 3347 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 19:25:28.361017 kubelet[3347]: I0413 19:25:28.360839 3347 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 19:25:28.365866 kubelet[3347]: E0413 19:25:28.365807 3347 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-17-32\" not found"
Apr 13 19:25:28.366729 kubelet[3347]: I0413 19:25:28.366689 3347 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 19:25:28.367020 kubelet[3347]: I0413 19:25:28.366945 3347 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 19:25:28.410588 kubelet[3347]: I0413 19:25:28.410363 3347 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 19:25:28.427877 kubelet[3347]: I0413 19:25:28.427123 3347 factory.go:223] Registration of the systemd container factory successfully
Apr 13 19:25:28.430014 kubelet[3347]: I0413 19:25:28.428285 3347 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 19:25:28.430502 kubelet[3347]: I0413 19:25:28.430467 3347 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 19:25:28.430853 kubelet[3347]: I0413 19:25:28.430831 3347 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 19:25:28.431515 kubelet[3347]: I0413 19:25:28.431486 3347 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 19:25:28.432022 kubelet[3347]: E0413 19:25:28.431727 3347 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 19:25:28.445385 kubelet[3347]: I0413 19:25:28.445345 3347 factory.go:223] Registration of the containerd container factory successfully
Apr 13 19:25:28.454020 kubelet[3347]: E0413 19:25:28.452399 3347 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 19:25:28.535381 kubelet[3347]: E0413 19:25:28.535325 3347 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.582799 3347 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.582828 3347 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.582866 3347 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583106 3347 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583127 3347 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583158 3347 policy_none.go:49] "None policy: Start"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583178 3347 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583206 3347 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583428 3347 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 13 19:25:28.587888 kubelet[3347]: I0413 19:25:28.583447 3347 policy_none.go:47] "Start"
Apr 13 19:25:28.599517 kubelet[3347]: E0413 19:25:28.598735 3347 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 19:25:28.599517 kubelet[3347]: I0413 19:25:28.599098 3347 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 19:25:28.599517 kubelet[3347]: I0413 19:25:28.599123 3347 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 19:25:28.602196 kubelet[3347]: I0413 19:25:28.601365 3347 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 19:25:28.603497 kubelet[3347]: E0413 19:25:28.603448 3347 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 19:25:28.729206 kubelet[3347]: I0413 19:25:28.729059 3347 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-32"
Apr 13 19:25:28.738840 kubelet[3347]: I0413 19:25:28.737161 3347 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-32"
Apr 13 19:25:28.738840 kubelet[3347]: I0413 19:25:28.737656 3347 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:28.741437 kubelet[3347]: I0413 19:25:28.741379 3347 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.757586 kubelet[3347]: E0413 19:25:28.757525 3347 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-32\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:28.757779 kubelet[3347]: I0413 19:25:28.757742 3347 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-32"
Apr 13 19:25:28.757854 kubelet[3347]: I0413 19:25:28.757836 3347 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-32"
Apr 13 19:25:28.763440 kubelet[3347]: E0413 19:25:28.763380 3347 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-32\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.775925 kubelet[3347]: I0413 19:25:28.775763 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-ca-certs\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:28.776105 kubelet[3347]: I0413 19:25:28.776037 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:28.778041 kubelet[3347]: I0413 19:25:28.776084 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.778041 kubelet[3347]: I0413 19:25:28.776425 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.778041 kubelet[3347]: I0413 19:25:28.776474 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.778041 kubelet[3347]: I0413 19:25:28.776630 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.778041 kubelet[3347]: I0413 19:25:28.776729 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68c00809cf82c05e55b6b1475562fd30-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-32\" (UID: \"68c00809cf82c05e55b6b1475562fd30\") " pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:28.778479 kubelet[3347]: I0413 19:25:28.776833 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81d995be9cc0f708587b9b055592df6c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-32\" (UID: \"81d995be9cc0f708587b9b055592df6c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-32"
Apr 13 19:25:28.778479 kubelet[3347]: I0413 19:25:28.777030 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccb960f5d25b0b7c38311e4fea57ed7a-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-32\" (UID: \"ccb960f5d25b0b7c38311e4fea57ed7a\") " pod="kube-system/kube-scheduler-ip-172-31-17-32"
Apr 13 19:25:29.252207 sudo[3363]: pam_unix(sudo:session): session closed for user root
Apr 13 19:25:29.269032 kubelet[3347]: I0413 19:25:29.268702 3347 apiserver.go:52] "Watching apiserver"
Apr 13 19:25:29.367755 kubelet[3347]: I0413 19:25:29.367669 3347 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 13 19:25:29.497908 kubelet[3347]: I0413 19:25:29.497800 3347 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:29.530869 kubelet[3347]: E0413 19:25:29.530387 3347 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-32\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-32"
Apr 13 19:25:29.561014 kubelet[3347]: I0413 19:25:29.557008 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-32" podStartSLOduration=1.556960771 podStartE2EDuration="1.556960771s" podCreationTimestamp="2026-04-13 19:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:29.555302443 +0000 UTC m=+1.439140400" watchObservedRunningTime="2026-04-13 19:25:29.556960771 +0000 UTC m=+1.440798692"
Apr 13 19:25:29.611000 kubelet[3347]: I0413 19:25:29.608920 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-32" podStartSLOduration=4.608894299 podStartE2EDuration="4.608894299s" podCreationTimestamp="2026-04-13 19:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:29.590201911 +0000 UTC m=+1.474039856" watchObservedRunningTime="2026-04-13 19:25:29.608894299 +0000 UTC m=+1.492732220"
Apr 13 19:25:29.644133 kubelet[3347]: I0413 19:25:29.644037 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-32" podStartSLOduration=4.644010991 podStartE2EDuration="4.644010991s" podCreationTimestamp="2026-04-13 19:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:29.608814979 +0000 UTC m=+1.492652936" watchObservedRunningTime="2026-04-13 19:25:29.644010991 +0000 UTC m=+1.527848924"
Apr 13 19:25:31.940673 kubelet[3347]: I0413 19:25:31.940616 3347 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 19:25:31.942315 containerd[2019]: time="2026-04-13T19:25:31.941672687Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 19:25:31.944659 kubelet[3347]: I0413 19:25:31.944342 3347 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 19:25:32.781352 sudo[2340]: pam_unix(sudo:session): session closed for user root
Apr 13 19:25:32.947012 sshd[2337]: pam_unix(sshd:session): session closed for user core
Apr 13 19:25:32.954781 systemd[1]: sshd@6-172.31.17.32:22-4.175.71.9:49976.service: Deactivated successfully.
Apr 13 19:25:32.960774 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 19:25:32.961612 systemd[1]: session-7.scope: Consumed 11.757s CPU time, 156.4M memory peak, 0B memory swap peak.
Apr 13 19:25:32.966249 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit.
Apr 13 19:25:32.972640 systemd-logind[2000]: Removed session 7.
Apr 13 19:25:33.049097 systemd[1]: Created slice kubepods-besteffort-pod974d7470_7d39_4393_8f05_bb1f43bea45b.slice - libcontainer container kubepods-besteffort-pod974d7470_7d39_4393_8f05_bb1f43bea45b.slice.
Apr 13 19:25:33.078226 systemd[1]: Created slice kubepods-burstable-podc6422f07_b3a7_429c_bc58_b1cf324d5e4e.slice - libcontainer container kubepods-burstable-podc6422f07_b3a7_429c_bc58_b1cf324d5e4e.slice.
Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107319 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hostproc\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107390 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-net\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107432 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hubble-tls\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107468 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjpj\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-kube-api-access-cgjpj\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107510 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7tdc\" (UniqueName: \"kubernetes.io/projected/974d7470-7d39-4393-8f05-bb1f43bea45b-kube-api-access-q7tdc\") pod \"kube-proxy-8llvx\" (UID: \"974d7470-7d39-4393-8f05-bb1f43bea45b\") " pod="kube-system/kube-proxy-8llvx" Apr 13 19:25:33.109859 kubelet[3347]: I0413 19:25:33.107545 3347 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-run\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107593 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-bpf-maps\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107626 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-cgroup\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107665 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cni-path\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107702 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-etc-cni-netd\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107742 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/974d7470-7d39-4393-8f05-bb1f43bea45b-xtables-lock\") pod \"kube-proxy-8llvx\" (UID: \"974d7470-7d39-4393-8f05-bb1f43bea45b\") " pod="kube-system/kube-proxy-8llvx" Apr 13 19:25:33.110761 kubelet[3347]: I0413 19:25:33.107777 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/974d7470-7d39-4393-8f05-bb1f43bea45b-lib-modules\") pod \"kube-proxy-8llvx\" (UID: \"974d7470-7d39-4393-8f05-bb1f43bea45b\") " pod="kube-system/kube-proxy-8llvx" Apr 13 19:25:33.111142 kubelet[3347]: I0413 19:25:33.107810 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-xtables-lock\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.111142 kubelet[3347]: I0413 19:25:33.107851 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-clustermesh-secrets\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.111142 kubelet[3347]: I0413 19:25:33.107888 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-config-path\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.111142 kubelet[3347]: I0413 19:25:33.107926 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/974d7470-7d39-4393-8f05-bb1f43bea45b-kube-proxy\") pod \"kube-proxy-8llvx\" (UID: 
\"974d7470-7d39-4393-8f05-bb1f43bea45b\") " pod="kube-system/kube-proxy-8llvx" Apr 13 19:25:33.111142 kubelet[3347]: I0413 19:25:33.107960 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-lib-modules\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.114065 kubelet[3347]: I0413 19:25:33.111923 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-kernel\") pod \"cilium-swv4z\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " pod="kube-system/cilium-swv4z" Apr 13 19:25:33.281762 systemd[1]: Created slice kubepods-besteffort-pod5431c25c_e09c_4c91_8e25_9a27cece6f71.slice - libcontainer container kubepods-besteffort-pod5431c25c_e09c_4c91_8e25_9a27cece6f71.slice. 
Apr 13 19:25:33.319081 kubelet[3347]: I0413 19:25:33.318265 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdxg\" (UniqueName: \"kubernetes.io/projected/5431c25c-e09c-4c91-8e25-9a27cece6f71-kube-api-access-ztdxg\") pod \"cilium-operator-6f9c7c5859-n4fsm\" (UID: \"5431c25c-e09c-4c91-8e25-9a27cece6f71\") " pod="kube-system/cilium-operator-6f9c7c5859-n4fsm" Apr 13 19:25:33.319081 kubelet[3347]: I0413 19:25:33.318348 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5431c25c-e09c-4c91-8e25-9a27cece6f71-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-n4fsm\" (UID: \"5431c25c-e09c-4c91-8e25-9a27cece6f71\") " pod="kube-system/cilium-operator-6f9c7c5859-n4fsm" Apr 13 19:25:33.365675 containerd[2019]: time="2026-04-13T19:25:33.365507938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8llvx,Uid:974d7470-7d39-4393-8f05-bb1f43bea45b,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:33.389513 containerd[2019]: time="2026-04-13T19:25:33.388970278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swv4z,Uid:c6422f07-b3a7-429c-bc58-b1cf324d5e4e,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:33.412442 containerd[2019]: time="2026-04-13T19:25:33.411915010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:33.412442 containerd[2019]: time="2026-04-13T19:25:33.412032790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:33.412442 containerd[2019]: time="2026-04-13T19:25:33.412059214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.412442 containerd[2019]: time="2026-04-13T19:25:33.412204030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.441136 containerd[2019]: time="2026-04-13T19:25:33.440442802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:33.441136 containerd[2019]: time="2026-04-13T19:25:33.440584990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:33.441136 containerd[2019]: time="2026-04-13T19:25:33.440624194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.441136 containerd[2019]: time="2026-04-13T19:25:33.440841898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.470847 systemd[1]: Started cri-containerd-bac2572b3a98a64f443dc2887b56d8faf34b3eed643449d5728a627b804bfe21.scope - libcontainer container bac2572b3a98a64f443dc2887b56d8faf34b3eed643449d5728a627b804bfe21. Apr 13 19:25:33.498245 systemd[1]: Started cri-containerd-fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e.scope - libcontainer container fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e. 
Apr 13 19:25:33.572743 containerd[2019]: time="2026-04-13T19:25:33.572045459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8llvx,Uid:974d7470-7d39-4393-8f05-bb1f43bea45b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bac2572b3a98a64f443dc2887b56d8faf34b3eed643449d5728a627b804bfe21\"" Apr 13 19:25:33.584607 containerd[2019]: time="2026-04-13T19:25:33.584442779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swv4z,Uid:c6422f07-b3a7-429c-bc58-b1cf324d5e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\"" Apr 13 19:25:33.586322 containerd[2019]: time="2026-04-13T19:25:33.586208147Z" level=info msg="CreateContainer within sandbox \"bac2572b3a98a64f443dc2887b56d8faf34b3eed643449d5728a627b804bfe21\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:25:33.591337 containerd[2019]: time="2026-04-13T19:25:33.591116879Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:25:33.595180 containerd[2019]: time="2026-04-13T19:25:33.594109199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-n4fsm,Uid:5431c25c-e09c-4c91-8e25-9a27cece6f71,Namespace:kube-system,Attempt:0,}" Apr 13 19:25:33.646473 containerd[2019]: time="2026-04-13T19:25:33.646234199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:33.646688 containerd[2019]: time="2026-04-13T19:25:33.646520399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:33.646688 containerd[2019]: time="2026-04-13T19:25:33.646629935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.647381 containerd[2019]: time="2026-04-13T19:25:33.646846967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:33.680058 systemd[1]: Started cri-containerd-56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384.scope - libcontainer container 56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384. Apr 13 19:25:33.684425 containerd[2019]: time="2026-04-13T19:25:33.684184007Z" level=info msg="CreateContainer within sandbox \"bac2572b3a98a64f443dc2887b56d8faf34b3eed643449d5728a627b804bfe21\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36cf26503992cddf54754e5706f53eafa603df7dad498b3135a67a6d03a937e4\"" Apr 13 19:25:33.689168 containerd[2019]: time="2026-04-13T19:25:33.688671275Z" level=info msg="StartContainer for \"36cf26503992cddf54754e5706f53eafa603df7dad498b3135a67a6d03a937e4\"" Apr 13 19:25:33.756821 systemd[1]: Started cri-containerd-36cf26503992cddf54754e5706f53eafa603df7dad498b3135a67a6d03a937e4.scope - libcontainer container 36cf26503992cddf54754e5706f53eafa603df7dad498b3135a67a6d03a937e4. 
Apr 13 19:25:33.779602 containerd[2019]: time="2026-04-13T19:25:33.779528748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-n4fsm,Uid:5431c25c-e09c-4c91-8e25-9a27cece6f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\"" Apr 13 19:25:33.826277 containerd[2019]: time="2026-04-13T19:25:33.826021560Z" level=info msg="StartContainer for \"36cf26503992cddf54754e5706f53eafa603df7dad498b3135a67a6d03a937e4\" returns successfully" Apr 13 19:25:35.044009 kubelet[3347]: I0413 19:25:35.042057 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8llvx" podStartSLOduration=2.042033058 podStartE2EDuration="2.042033058s" podCreationTimestamp="2026-04-13 19:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:34.549858024 +0000 UTC m=+6.433695981" watchObservedRunningTime="2026-04-13 19:25:35.042033058 +0000 UTC m=+6.925871027" Apr 13 19:25:38.738049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946030143.mount: Deactivated successfully. 
Apr 13 19:25:41.321388 containerd[2019]: time="2026-04-13T19:25:41.321322769Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:41.323577 containerd[2019]: time="2026-04-13T19:25:41.323497613Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 13 19:25:41.324541 containerd[2019]: time="2026-04-13T19:25:41.324483509Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:41.329502 containerd[2019]: time="2026-04-13T19:25:41.329432513Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.738242806s" Apr 13 19:25:41.329502 containerd[2019]: time="2026-04-13T19:25:41.329498909Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 13 19:25:41.332666 containerd[2019]: time="2026-04-13T19:25:41.332579765Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 19:25:41.339011 containerd[2019]: time="2026-04-13T19:25:41.338788637Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:25:41.360098 containerd[2019]: time="2026-04-13T19:25:41.359013281Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\"" Apr 13 19:25:41.361400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508112113.mount: Deactivated successfully. Apr 13 19:25:41.364028 containerd[2019]: time="2026-04-13T19:25:41.363533537Z" level=info msg="StartContainer for \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\"" Apr 13 19:25:41.417880 systemd[1]: Started cri-containerd-a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3.scope - libcontainer container a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3. Apr 13 19:25:41.466415 containerd[2019]: time="2026-04-13T19:25:41.466347678Z" level=info msg="StartContainer for \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\" returns successfully" Apr 13 19:25:41.503580 systemd[1]: cri-containerd-a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3.scope: Deactivated successfully. Apr 13 19:25:42.351701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3-rootfs.mount: Deactivated successfully. 
Apr 13 19:25:42.654757 containerd[2019]: time="2026-04-13T19:25:42.654453620Z" level=info msg="shim disconnected" id=a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3 namespace=k8s.io Apr 13 19:25:42.654757 containerd[2019]: time="2026-04-13T19:25:42.654534068Z" level=warning msg="cleaning up after shim disconnected" id=a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3 namespace=k8s.io Apr 13 19:25:42.654757 containerd[2019]: time="2026-04-13T19:25:42.654557960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:43.125198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726437042.mount: Deactivated successfully. Apr 13 19:25:43.602517 containerd[2019]: time="2026-04-13T19:25:43.602269773Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:25:43.650259 containerd[2019]: time="2026-04-13T19:25:43.650183709Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\"" Apr 13 19:25:43.654612 containerd[2019]: time="2026-04-13T19:25:43.654456369Z" level=info msg="StartContainer for \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\"" Apr 13 19:25:43.754466 systemd[1]: Started cri-containerd-656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811.scope - libcontainer container 656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811. Apr 13 19:25:43.815877 containerd[2019]: time="2026-04-13T19:25:43.815722042Z" level=info msg="StartContainer for \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\" returns successfully" Apr 13 19:25:43.854626 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Apr 13 19:25:43.855239 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:25:43.855350 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:25:43.866403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:25:43.866890 systemd[1]: cri-containerd-656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811.scope: Deactivated successfully. Apr 13 19:25:43.917658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:25:43.944245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811-rootfs.mount: Deactivated successfully. Apr 13 19:25:44.077099 containerd[2019]: time="2026-04-13T19:25:44.076961647Z" level=info msg="shim disconnected" id=656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811 namespace=k8s.io Apr 13 19:25:44.077745 containerd[2019]: time="2026-04-13T19:25:44.077399647Z" level=warning msg="cleaning up after shim disconnected" id=656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811 namespace=k8s.io Apr 13 19:25:44.077745 containerd[2019]: time="2026-04-13T19:25:44.077429839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:44.122479 containerd[2019]: time="2026-04-13T19:25:44.122334415Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:44.125057 containerd[2019]: time="2026-04-13T19:25:44.124947295Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 13 19:25:44.127226 containerd[2019]: time="2026-04-13T19:25:44.126199591Z" level=info msg="ImageCreate event 
name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:25:44.129239 containerd[2019]: time="2026-04-13T19:25:44.129189583Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.796530282s" Apr 13 19:25:44.129407 containerd[2019]: time="2026-04-13T19:25:44.129373987Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 13 19:25:44.138684 containerd[2019]: time="2026-04-13T19:25:44.138633163Z" level=info msg="CreateContainer within sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 19:25:44.167279 containerd[2019]: time="2026-04-13T19:25:44.167186143Z" level=info msg="CreateContainer within sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\"" Apr 13 19:25:44.170029 containerd[2019]: time="2026-04-13T19:25:44.168184423Z" level=info msg="StartContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\"" Apr 13 19:25:44.217544 systemd[1]: Started cri-containerd-fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab.scope - libcontainer container fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab. 
Apr 13 19:25:44.260669 containerd[2019]: time="2026-04-13T19:25:44.260585756Z" level=info msg="StartContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" returns successfully" Apr 13 19:25:44.598486 containerd[2019]: time="2026-04-13T19:25:44.598391325Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:25:44.622526 containerd[2019]: time="2026-04-13T19:25:44.622439494Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\"" Apr 13 19:25:44.629083 containerd[2019]: time="2026-04-13T19:25:44.624264874Z" level=info msg="StartContainer for \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\"" Apr 13 19:25:44.723324 systemd[1]: Started cri-containerd-6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834.scope - libcontainer container 6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834. Apr 13 19:25:44.871107 containerd[2019]: time="2026-04-13T19:25:44.870607787Z" level=info msg="StartContainer for \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\" returns successfully" Apr 13 19:25:44.912353 systemd[1]: cri-containerd-6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834.scope: Deactivated successfully. 
Apr 13 19:25:44.972884 kubelet[3347]: I0413 19:25:44.972792 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-n4fsm" podStartSLOduration=1.623019288 podStartE2EDuration="11.972769739s" podCreationTimestamp="2026-04-13 19:25:33 +0000 UTC" firstStartedPulling="2026-04-13 19:25:33.782521908 +0000 UTC m=+5.666359817" lastFinishedPulling="2026-04-13 19:25:44.132272359 +0000 UTC m=+16.016110268" observedRunningTime="2026-04-13 19:25:44.675641758 +0000 UTC m=+16.559479715" watchObservedRunningTime="2026-04-13 19:25:44.972769739 +0000 UTC m=+16.856607684" Apr 13 19:25:44.987045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834-rootfs.mount: Deactivated successfully. Apr 13 19:25:44.995955 containerd[2019]: time="2026-04-13T19:25:44.995853587Z" level=info msg="shim disconnected" id=6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834 namespace=k8s.io Apr 13 19:25:44.995955 containerd[2019]: time="2026-04-13T19:25:44.996012095Z" level=warning msg="cleaning up after shim disconnected" id=6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834 namespace=k8s.io Apr 13 19:25:44.995955 containerd[2019]: time="2026-04-13T19:25:44.996035663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:45.609946 containerd[2019]: time="2026-04-13T19:25:45.609873839Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:25:45.641010 containerd[2019]: time="2026-04-13T19:25:45.640912379Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\"" Apr 13 19:25:45.646039 
containerd[2019]: time="2026-04-13T19:25:45.641937683Z" level=info msg="StartContainer for \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\"" Apr 13 19:25:45.729316 systemd[1]: Started cri-containerd-1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10.scope - libcontainer container 1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10. Apr 13 19:25:45.813324 systemd[1]: cri-containerd-1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10.scope: Deactivated successfully. Apr 13 19:25:45.817852 containerd[2019]: time="2026-04-13T19:25:45.817534788Z" level=info msg="StartContainer for \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\" returns successfully" Apr 13 19:25:45.875622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10-rootfs.mount: Deactivated successfully. Apr 13 19:25:45.881145 containerd[2019]: time="2026-04-13T19:25:45.880643640Z" level=info msg="shim disconnected" id=1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10 namespace=k8s.io Apr 13 19:25:45.881145 containerd[2019]: time="2026-04-13T19:25:45.880718928Z" level=warning msg="cleaning up after shim disconnected" id=1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10 namespace=k8s.io Apr 13 19:25:45.881145 containerd[2019]: time="2026-04-13T19:25:45.880739976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:25:46.619030 containerd[2019]: time="2026-04-13T19:25:46.618949476Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:25:46.644578 containerd[2019]: time="2026-04-13T19:25:46.644502480Z" level=info msg="CreateContainer within sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns 
container id \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\"" Apr 13 19:25:46.646053 containerd[2019]: time="2026-04-13T19:25:46.645970752Z" level=info msg="StartContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\"" Apr 13 19:25:46.707343 systemd[1]: Started cri-containerd-5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505.scope - libcontainer container 5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505. Apr 13 19:25:46.763867 containerd[2019]: time="2026-04-13T19:25:46.763800876Z" level=info msg="StartContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" returns successfully" Apr 13 19:25:47.009130 kubelet[3347]: I0413 19:25:47.007425 3347 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 19:25:47.132177 systemd[1]: Created slice kubepods-burstable-pod6e1f4e23_5150_42c7_865a_af6e44565a88.slice - libcontainer container kubepods-burstable-pod6e1f4e23_5150_42c7_865a_af6e44565a88.slice. Apr 13 19:25:47.153158 systemd[1]: Created slice kubepods-burstable-pod08aa37d5_0a1d_482c_8bba_2ee35d197562.slice - libcontainer container kubepods-burstable-pod08aa37d5_0a1d_482c_8bba_2ee35d197562.slice. 
Apr 13 19:25:47.227354 kubelet[3347]: I0413 19:25:47.226997 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgs4h\" (UniqueName: \"kubernetes.io/projected/08aa37d5-0a1d-482c-8bba-2ee35d197562-kube-api-access-qgs4h\") pod \"coredns-66bc5c9577-zbphn\" (UID: \"08aa37d5-0a1d-482c-8bba-2ee35d197562\") " pod="kube-system/coredns-66bc5c9577-zbphn"
Apr 13 19:25:47.227354 kubelet[3347]: I0413 19:25:47.227088 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08aa37d5-0a1d-482c-8bba-2ee35d197562-config-volume\") pod \"coredns-66bc5c9577-zbphn\" (UID: \"08aa37d5-0a1d-482c-8bba-2ee35d197562\") " pod="kube-system/coredns-66bc5c9577-zbphn"
Apr 13 19:25:47.227354 kubelet[3347]: I0413 19:25:47.227129 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e1f4e23-5150-42c7-865a-af6e44565a88-config-volume\") pod \"coredns-66bc5c9577-p5bq8\" (UID: \"6e1f4e23-5150-42c7-865a-af6e44565a88\") " pod="kube-system/coredns-66bc5c9577-p5bq8"
Apr 13 19:25:47.227354 kubelet[3347]: I0413 19:25:47.227169 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht2db\" (UniqueName: \"kubernetes.io/projected/6e1f4e23-5150-42c7-865a-af6e44565a88-kube-api-access-ht2db\") pod \"coredns-66bc5c9577-p5bq8\" (UID: \"6e1f4e23-5150-42c7-865a-af6e44565a88\") " pod="kube-system/coredns-66bc5c9577-p5bq8"
Apr 13 19:25:47.447487 containerd[2019]: time="2026-04-13T19:25:47.447423504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5bq8,Uid:6e1f4e23-5150-42c7-865a-af6e44565a88,Namespace:kube-system,Attempt:0,}"
Apr 13 19:25:47.478092 containerd[2019]: time="2026-04-13T19:25:47.477257640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zbphn,Uid:08aa37d5-0a1d-482c-8bba-2ee35d197562,Namespace:kube-system,Attempt:0,}"
Apr 13 19:25:47.676630 kubelet[3347]: I0413 19:25:47.676535 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swv4z" podStartSLOduration=6.9350193430000004 podStartE2EDuration="14.676514773s" podCreationTimestamp="2026-04-13 19:25:33 +0000 UTC" firstStartedPulling="2026-04-13 19:25:33.589381427 +0000 UTC m=+5.473219348" lastFinishedPulling="2026-04-13 19:25:41.330876869 +0000 UTC m=+13.214714778" observedRunningTime="2026-04-13 19:25:47.676095973 +0000 UTC m=+19.559933906" watchObservedRunningTime="2026-04-13 19:25:47.676514773 +0000 UTC m=+19.560352706"
Apr 13 19:25:49.874856 (udev-worker)[4161]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:25:49.876309 systemd-networkd[1939]: cilium_host: Link UP
Apr 13 19:25:49.876606 systemd-networkd[1939]: cilium_net: Link UP
Apr 13 19:25:49.881129 systemd-networkd[1939]: cilium_net: Gained carrier
Apr 13 19:25:49.881643 systemd-networkd[1939]: cilium_host: Gained carrier
Apr 13 19:25:49.882027 systemd-networkd[1939]: cilium_net: Gained IPv6LL
Apr 13 19:25:49.882360 systemd-networkd[1939]: cilium_host: Gained IPv6LL
Apr 13 19:25:49.882835 (udev-worker)[4197]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:25:50.076858 systemd-networkd[1939]: cilium_vxlan: Link UP
Apr 13 19:25:50.076872 systemd-networkd[1939]: cilium_vxlan: Gained carrier
Apr 13 19:25:50.669176 kernel: NET: Registered PF_ALG protocol family
Apr 13 19:25:52.025022 systemd-networkd[1939]: lxc_health: Link UP
Apr 13 19:25:52.032906 (udev-worker)[4208]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:25:52.037328 systemd-networkd[1939]: lxc_health: Gained carrier
Apr 13 19:25:52.047709 systemd-networkd[1939]: cilium_vxlan: Gained IPv6LL
Apr 13 19:25:52.540786 systemd-networkd[1939]: lxce1bc765b207a: Link UP
Apr 13 19:25:52.549211 kernel: eth0: renamed from tmp0e21c
Apr 13 19:25:52.554902 systemd-networkd[1939]: lxce1bc765b207a: Gained carrier
Apr 13 19:25:52.608552 systemd-networkd[1939]: lxcde391d91472a: Link UP
Apr 13 19:25:52.619285 kernel: eth0: renamed from tmpd0d8f
Apr 13 19:25:52.625240 systemd-networkd[1939]: lxcde391d91472a: Gained carrier
Apr 13 19:25:53.647231 systemd-networkd[1939]: lxce1bc765b207a: Gained IPv6LL
Apr 13 19:25:53.967198 systemd-networkd[1939]: lxc_health: Gained IPv6LL
Apr 13 19:25:54.287288 systemd-networkd[1939]: lxcde391d91472a: Gained IPv6LL
Apr 13 19:25:56.605631 ntpd[1993]: Listen normally on 8 cilium_host 192.168.0.243:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 8 cilium_host 192.168.0.243:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 9 cilium_net [fe80::806b:6eff:fe46:709%4]:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 10 cilium_host [fe80::d854:66ff:fe14:72c4%5]:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 11 cilium_vxlan [fe80::1061:eaff:fe19:26be%6]:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 12 lxc_health [fe80::4d5:d0ff:fea5:db4%8]:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 13 lxce1bc765b207a [fe80::c81d:a3ff:fedf:427e%10]:123
Apr 13 19:25:56.606815 ntpd[1993]: 13 Apr 19:25:56 ntpd[1993]: Listen normally on 14 lxcde391d91472a [fe80::e868:ffff:fe23:73b2%12]:123
Apr 13 19:25:56.605771 ntpd[1993]: Listen normally on 9 cilium_net [fe80::806b:6eff:fe46:709%4]:123
Apr 13 19:25:56.605854 ntpd[1993]: Listen normally on 10 cilium_host [fe80::d854:66ff:fe14:72c4%5]:123
Apr 13 19:25:56.605925 ntpd[1993]: Listen normally on 11 cilium_vxlan [fe80::1061:eaff:fe19:26be%6]:123
Apr 13 19:25:56.606054 ntpd[1993]: Listen normally on 12 lxc_health [fe80::4d5:d0ff:fea5:db4%8]:123
Apr 13 19:25:56.606131 ntpd[1993]: Listen normally on 13 lxce1bc765b207a [fe80::c81d:a3ff:fedf:427e%10]:123
Apr 13 19:25:56.606201 ntpd[1993]: Listen normally on 14 lxcde391d91472a [fe80::e868:ffff:fe23:73b2%12]:123
Apr 13 19:26:01.121774 containerd[2019]: time="2026-04-13T19:26:01.121242408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:26:01.121774 containerd[2019]: time="2026-04-13T19:26:01.121343112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:26:01.121774 containerd[2019]: time="2026-04-13T19:26:01.121371288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:01.121774 containerd[2019]: time="2026-04-13T19:26:01.121538676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:01.181326 systemd[1]: Started cri-containerd-0e21c0c793b44708d8f9235f6ec4cc1611689c905116766eb3544be8c35ee9f6.scope - libcontainer container 0e21c0c793b44708d8f9235f6ec4cc1611689c905116766eb3544be8c35ee9f6.
Apr 13 19:26:01.221792 containerd[2019]: time="2026-04-13T19:26:01.221199660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 19:26:01.221792 containerd[2019]: time="2026-04-13T19:26:01.221319900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 19:26:01.221792 containerd[2019]: time="2026-04-13T19:26:01.221358552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:01.221792 containerd[2019]: time="2026-04-13T19:26:01.221529996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 19:26:01.286683 systemd[1]: Started cri-containerd-d0d8f9882ff2fa02dcc2f821736e6219c2bd105b324f594b21dd571c2b35014a.scope - libcontainer container d0d8f9882ff2fa02dcc2f821736e6219c2bd105b324f594b21dd571c2b35014a.
Apr 13 19:26:01.317455 containerd[2019]: time="2026-04-13T19:26:01.317389705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-p5bq8,Uid:6e1f4e23-5150-42c7-865a-af6e44565a88,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e21c0c793b44708d8f9235f6ec4cc1611689c905116766eb3544be8c35ee9f6\""
Apr 13 19:26:01.332795 containerd[2019]: time="2026-04-13T19:26:01.332739049Z" level=info msg="CreateContainer within sandbox \"0e21c0c793b44708d8f9235f6ec4cc1611689c905116766eb3544be8c35ee9f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:26:01.361043 containerd[2019]: time="2026-04-13T19:26:01.360933757Z" level=info msg="CreateContainer within sandbox \"0e21c0c793b44708d8f9235f6ec4cc1611689c905116766eb3544be8c35ee9f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e7374062f8692593957ee7e2b9b4f1c246a4cb86af4347ca2a161eecfdd555c\""
Apr 13 19:26:01.364150 containerd[2019]: time="2026-04-13T19:26:01.363385177Z" level=info msg="StartContainer for \"1e7374062f8692593957ee7e2b9b4f1c246a4cb86af4347ca2a161eecfdd555c\""
Apr 13 19:26:01.441304 systemd[1]: Started cri-containerd-1e7374062f8692593957ee7e2b9b4f1c246a4cb86af4347ca2a161eecfdd555c.scope - libcontainer container 1e7374062f8692593957ee7e2b9b4f1c246a4cb86af4347ca2a161eecfdd555c.
Apr 13 19:26:01.449032 containerd[2019]: time="2026-04-13T19:26:01.448937461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zbphn,Uid:08aa37d5-0a1d-482c-8bba-2ee35d197562,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d8f9882ff2fa02dcc2f821736e6219c2bd105b324f594b21dd571c2b35014a\""
Apr 13 19:26:01.463522 containerd[2019]: time="2026-04-13T19:26:01.463468933Z" level=info msg="CreateContainer within sandbox \"d0d8f9882ff2fa02dcc2f821736e6219c2bd105b324f594b21dd571c2b35014a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 19:26:01.519049 containerd[2019]: time="2026-04-13T19:26:01.518487314Z" level=info msg="CreateContainer within sandbox \"d0d8f9882ff2fa02dcc2f821736e6219c2bd105b324f594b21dd571c2b35014a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a0b40cb24cc86fc24d6808d7eb117a39a1da63d19d046db002869e8c6dad441\""
Apr 13 19:26:01.524474 containerd[2019]: time="2026-04-13T19:26:01.524327762Z" level=info msg="StartContainer for \"7a0b40cb24cc86fc24d6808d7eb117a39a1da63d19d046db002869e8c6dad441\""
Apr 13 19:26:01.555910 containerd[2019]: time="2026-04-13T19:26:01.555605078Z" level=info msg="StartContainer for \"1e7374062f8692593957ee7e2b9b4f1c246a4cb86af4347ca2a161eecfdd555c\" returns successfully"
Apr 13 19:26:01.607686 systemd[1]: Started cri-containerd-7a0b40cb24cc86fc24d6808d7eb117a39a1da63d19d046db002869e8c6dad441.scope - libcontainer container 7a0b40cb24cc86fc24d6808d7eb117a39a1da63d19d046db002869e8c6dad441.
Apr 13 19:26:01.712366 containerd[2019]: time="2026-04-13T19:26:01.712203926Z" level=info msg="StartContainer for \"7a0b40cb24cc86fc24d6808d7eb117a39a1da63d19d046db002869e8c6dad441\" returns successfully"
Apr 13 19:26:02.710190 kubelet[3347]: I0413 19:26:02.709727 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p5bq8" podStartSLOduration=29.709656603 podStartE2EDuration="29.709656603s" podCreationTimestamp="2026-04-13 19:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:26:01.731190783 +0000 UTC m=+33.615028716" watchObservedRunningTime="2026-04-13 19:26:02.709656603 +0000 UTC m=+34.593494620"
Apr 13 19:26:02.738578 kubelet[3347]: I0413 19:26:02.736727 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zbphn" podStartSLOduration=29.7367077 podStartE2EDuration="29.7367077s" podCreationTimestamp="2026-04-13 19:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:26:02.711417207 +0000 UTC m=+34.595255152" watchObservedRunningTime="2026-04-13 19:26:02.7367077 +0000 UTC m=+34.620545633"
Apr 13 19:26:17.253531 systemd[1]: Started sshd@7-172.31.17.32:22-4.175.71.9:34912.service - OpenSSH per-connection server daemon (4.175.71.9:34912).
Apr 13 19:26:18.271837 sshd[4735]: Accepted publickey for core from 4.175.71.9 port 34912 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:18.274781 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:18.282922 systemd-logind[2000]: New session 8 of user core.
Apr 13 19:26:18.293272 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 19:26:19.134619 sshd[4735]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:19.141833 systemd[1]: sshd@7-172.31.17.32:22-4.175.71.9:34912.service: Deactivated successfully.
Apr 13 19:26:19.146911 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 19:26:19.148756 systemd-logind[2000]: Session 8 logged out. Waiting for processes to exit.
Apr 13 19:26:19.150867 systemd-logind[2000]: Removed session 8.
Apr 13 19:26:24.311504 systemd[1]: Started sshd@8-172.31.17.32:22-4.175.71.9:34916.service - OpenSSH per-connection server daemon (4.175.71.9:34916).
Apr 13 19:26:25.285736 sshd[4749]: Accepted publickey for core from 4.175.71.9 port 34916 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:25.288512 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:25.296609 systemd-logind[2000]: New session 9 of user core.
Apr 13 19:26:25.306262 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 19:26:26.081503 sshd[4749]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:26.093800 systemd[1]: sshd@8-172.31.17.32:22-4.175.71.9:34916.service: Deactivated successfully.
Apr 13 19:26:26.098604 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 19:26:26.102252 systemd-logind[2000]: Session 9 logged out. Waiting for processes to exit.
Apr 13 19:26:26.104477 systemd-logind[2000]: Removed session 9.
Apr 13 19:26:31.260542 systemd[1]: Started sshd@9-172.31.17.32:22-4.175.71.9:36608.service - OpenSSH per-connection server daemon (4.175.71.9:36608).
Apr 13 19:26:32.264854 sshd[4767]: Accepted publickey for core from 4.175.71.9 port 36608 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:32.266644 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:32.276038 systemd-logind[2000]: New session 10 of user core.
Apr 13 19:26:32.283525 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 19:26:33.075771 sshd[4767]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:33.088440 systemd[1]: sshd@9-172.31.17.32:22-4.175.71.9:36608.service: Deactivated successfully.
Apr 13 19:26:33.099202 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 19:26:33.101661 systemd-logind[2000]: Session 10 logged out. Waiting for processes to exit.
Apr 13 19:26:33.105366 systemd-logind[2000]: Removed session 10.
Apr 13 19:26:38.255427 systemd[1]: Started sshd@10-172.31.17.32:22-4.175.71.9:55988.service - OpenSSH per-connection server daemon (4.175.71.9:55988).
Apr 13 19:26:39.260926 sshd[4783]: Accepted publickey for core from 4.175.71.9 port 55988 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:39.262697 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:39.270582 systemd-logind[2000]: New session 11 of user core.
Apr 13 19:26:39.284261 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 19:26:40.061880 sshd[4783]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:40.070308 systemd-logind[2000]: Session 11 logged out. Waiting for processes to exit.
Apr 13 19:26:40.070931 systemd[1]: sshd@10-172.31.17.32:22-4.175.71.9:55988.service: Deactivated successfully.
Apr 13 19:26:40.078212 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 19:26:40.081025 systemd-logind[2000]: Removed session 11.
Apr 13 19:26:40.242519 systemd[1]: Started sshd@11-172.31.17.32:22-4.175.71.9:55996.service - OpenSSH per-connection server daemon (4.175.71.9:55996).
Apr 13 19:26:41.239582 sshd[4797]: Accepted publickey for core from 4.175.71.9 port 55996 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:41.243631 sshd[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:41.251208 systemd-logind[2000]: New session 12 of user core.
Apr 13 19:26:41.259273 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 19:26:42.124805 sshd[4797]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:42.132828 systemd[1]: sshd@11-172.31.17.32:22-4.175.71.9:55996.service: Deactivated successfully.
Apr 13 19:26:42.136500 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 19:26:42.138275 systemd-logind[2000]: Session 12 logged out. Waiting for processes to exit.
Apr 13 19:26:42.140540 systemd-logind[2000]: Removed session 12.
Apr 13 19:26:42.303540 systemd[1]: Started sshd@12-172.31.17.32:22-4.175.71.9:56004.service - OpenSSH per-connection server daemon (4.175.71.9:56004).
Apr 13 19:26:43.311800 sshd[4808]: Accepted publickey for core from 4.175.71.9 port 56004 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:43.314573 sshd[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:43.323365 systemd-logind[2000]: New session 13 of user core.
Apr 13 19:26:43.328259 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 19:26:44.120911 sshd[4808]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:44.128350 systemd[1]: sshd@12-172.31.17.32:22-4.175.71.9:56004.service: Deactivated successfully.
Apr 13 19:26:44.132881 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 19:26:44.134629 systemd-logind[2000]: Session 13 logged out. Waiting for processes to exit.
Apr 13 19:26:44.136917 systemd-logind[2000]: Removed session 13.
Apr 13 19:26:49.301537 systemd[1]: Started sshd@13-172.31.17.32:22-4.175.71.9:52140.service - OpenSSH per-connection server daemon (4.175.71.9:52140).
Apr 13 19:26:50.300640 sshd[4822]: Accepted publickey for core from 4.175.71.9 port 52140 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:50.302426 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:50.310084 systemd-logind[2000]: New session 14 of user core.
Apr 13 19:26:50.316275 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 19:26:51.093427 sshd[4822]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:51.101621 systemd-logind[2000]: Session 14 logged out. Waiting for processes to exit.
Apr 13 19:26:51.102895 systemd[1]: sshd@13-172.31.17.32:22-4.175.71.9:52140.service: Deactivated successfully.
Apr 13 19:26:51.106440 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 19:26:51.108761 systemd-logind[2000]: Removed session 14.
Apr 13 19:26:56.279523 systemd[1]: Started sshd@14-172.31.17.32:22-4.175.71.9:40434.service - OpenSSH per-connection server daemon (4.175.71.9:40434).
Apr 13 19:26:57.326557 sshd[4835]: Accepted publickey for core from 4.175.71.9 port 40434 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:57.329313 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:57.336880 systemd-logind[2000]: New session 15 of user core.
Apr 13 19:26:57.349257 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 19:26:58.150829 sshd[4835]: pam_unix(sshd:session): session closed for user core
Apr 13 19:26:58.157353 systemd[1]: sshd@14-172.31.17.32:22-4.175.71.9:40434.service: Deactivated successfully.
Apr 13 19:26:58.162155 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 19:26:58.163765 systemd-logind[2000]: Session 15 logged out. Waiting for processes to exit.
Apr 13 19:26:58.167247 systemd-logind[2000]: Removed session 15.
Apr 13 19:26:58.316551 systemd[1]: Started sshd@15-172.31.17.32:22-4.175.71.9:40444.service - OpenSSH per-connection server daemon (4.175.71.9:40444).
Apr 13 19:26:59.276425 sshd[4848]: Accepted publickey for core from 4.175.71.9 port 40444 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:26:59.279400 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:26:59.290565 systemd-logind[2000]: New session 16 of user core.
Apr 13 19:26:59.297610 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 19:27:00.140220 sshd[4848]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:00.147175 systemd[1]: sshd@15-172.31.17.32:22-4.175.71.9:40444.service: Deactivated successfully.
Apr 13 19:27:00.150469 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 19:27:00.152720 systemd-logind[2000]: Session 16 logged out. Waiting for processes to exit.
Apr 13 19:27:00.154952 systemd-logind[2000]: Removed session 16.
Apr 13 19:27:00.328552 systemd[1]: Started sshd@16-172.31.17.32:22-4.175.71.9:40450.service - OpenSSH per-connection server daemon (4.175.71.9:40450).
Apr 13 19:27:01.348773 sshd[4858]: Accepted publickey for core from 4.175.71.9 port 40450 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:01.350639 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:01.358398 systemd-logind[2000]: New session 17 of user core.
Apr 13 19:27:01.367280 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 19:27:02.995075 sshd[4858]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:03.002875 systemd[1]: sshd@16-172.31.17.32:22-4.175.71.9:40450.service: Deactivated successfully.
Apr 13 19:27:03.003068 systemd-logind[2000]: Session 17 logged out. Waiting for processes to exit.
Apr 13 19:27:03.008605 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 19:27:03.011566 systemd-logind[2000]: Removed session 17.
Apr 13 19:27:03.171531 systemd[1]: Started sshd@17-172.31.17.32:22-4.175.71.9:40456.service - OpenSSH per-connection server daemon (4.175.71.9:40456).
Apr 13 19:27:04.128479 sshd[4874]: Accepted publickey for core from 4.175.71.9 port 40456 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:04.131228 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:04.139961 systemd-logind[2000]: New session 18 of user core.
Apr 13 19:27:04.142301 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 19:27:05.155331 sshd[4874]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:05.162316 systemd-logind[2000]: Session 18 logged out. Waiting for processes to exit.
Apr 13 19:27:05.163534 systemd[1]: sshd@17-172.31.17.32:22-4.175.71.9:40456.service: Deactivated successfully.
Apr 13 19:27:05.168931 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 19:27:05.171034 systemd-logind[2000]: Removed session 18.
Apr 13 19:27:05.344506 systemd[1]: Started sshd@18-172.31.17.32:22-4.175.71.9:40464.service - OpenSSH per-connection server daemon (4.175.71.9:40464).
Apr 13 19:27:06.385756 sshd[4889]: Accepted publickey for core from 4.175.71.9 port 40464 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:06.388696 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:06.398068 systemd-logind[2000]: New session 19 of user core.
Apr 13 19:27:06.407264 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 19:27:07.211894 sshd[4889]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:07.220613 systemd-logind[2000]: Session 19 logged out. Waiting for processes to exit.
Apr 13 19:27:07.221605 systemd[1]: sshd@18-172.31.17.32:22-4.175.71.9:40464.service: Deactivated successfully.
Apr 13 19:27:07.224895 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 19:27:07.227423 systemd-logind[2000]: Removed session 19.
Apr 13 19:27:12.387510 systemd[1]: Started sshd@19-172.31.17.32:22-4.175.71.9:38004.service - OpenSSH per-connection server daemon (4.175.71.9:38004).
Apr 13 19:27:13.374431 sshd[4904]: Accepted publickey for core from 4.175.71.9 port 38004 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:13.377324 sshd[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:13.386276 systemd-logind[2000]: New session 20 of user core.
Apr 13 19:27:13.393247 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 19:27:14.178379 sshd[4904]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:14.184829 systemd[1]: sshd@19-172.31.17.32:22-4.175.71.9:38004.service: Deactivated successfully.
Apr 13 19:27:14.188956 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 19:27:14.190523 systemd-logind[2000]: Session 20 logged out. Waiting for processes to exit.
Apr 13 19:27:14.192299 systemd-logind[2000]: Removed session 20.
Apr 13 19:27:19.356570 systemd[1]: Started sshd@20-172.31.17.32:22-4.175.71.9:36146.service - OpenSSH per-connection server daemon (4.175.71.9:36146).
Apr 13 19:27:20.343740 sshd[4917]: Accepted publickey for core from 4.175.71.9 port 36146 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:20.345499 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:20.354648 systemd-logind[2000]: New session 21 of user core.
Apr 13 19:27:20.362251 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 19:27:21.137110 sshd[4917]: pam_unix(sshd:session): session closed for user core
Apr 13 19:27:21.142567 systemd[1]: sshd@20-172.31.17.32:22-4.175.71.9:36146.service: Deactivated successfully.
Apr 13 19:27:21.146364 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 19:27:21.150467 systemd-logind[2000]: Session 21 logged out. Waiting for processes to exit.
Apr 13 19:27:21.152662 systemd-logind[2000]: Removed session 21.
Apr 13 19:27:21.323566 systemd[1]: Started sshd@21-172.31.17.32:22-4.175.71.9:36162.service - OpenSSH per-connection server daemon (4.175.71.9:36162).
Apr 13 19:27:22.368774 sshd[4930]: Accepted publickey for core from 4.175.71.9 port 36162 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:27:22.371527 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:27:22.381406 systemd-logind[2000]: New session 22 of user core.
Apr 13 19:27:22.385279 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 19:27:25.743412 containerd[2019]: time="2026-04-13T19:27:25.743323776Z" level=info msg="StopContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" with timeout 30 (s)"
Apr 13 19:27:25.745725 containerd[2019]: time="2026-04-13T19:27:25.745266780Z" level=info msg="Stop container \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" with signal terminated"
Apr 13 19:27:25.775332 containerd[2019]: time="2026-04-13T19:27:25.775245072Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:27:25.810454 containerd[2019]: time="2026-04-13T19:27:25.810128448Z" level=info msg="StopContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" with timeout 2 (s)"
Apr 13 19:27:25.811583 containerd[2019]: time="2026-04-13T19:27:25.811136580Z" level=info msg="Stop container \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" with signal terminated"
Apr 13 19:27:25.843399 systemd-networkd[1939]: lxc_health: Link DOWN
Apr 13 19:27:25.843419 systemd-networkd[1939]: lxc_health: Lost carrier
Apr 13 19:27:25.907437 systemd[1]: cri-containerd-5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505.scope: Deactivated successfully.
Apr 13 19:27:25.908382 systemd[1]: cri-containerd-5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505.scope: Consumed 14.709s CPU time.
Apr 13 19:27:25.921218 systemd[1]: cri-containerd-fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab.scope: Deactivated successfully.
Apr 13 19:27:25.989580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505-rootfs.mount: Deactivated successfully.
Apr 13 19:27:26.006221 containerd[2019]: time="2026-04-13T19:27:26.006021117Z" level=info msg="shim disconnected" id=5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505 namespace=k8s.io
Apr 13 19:27:26.010708 containerd[2019]: time="2026-04-13T19:27:26.010417017Z" level=warning msg="cleaning up after shim disconnected" id=5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505 namespace=k8s.io
Apr 13 19:27:26.010708 containerd[2019]: time="2026-04-13T19:27:26.010468437Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:26.013884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab-rootfs.mount: Deactivated successfully.
Apr 13 19:27:26.019025 containerd[2019]: time="2026-04-13T19:27:26.018779553Z" level=info msg="shim disconnected" id=fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab namespace=k8s.io
Apr 13 19:27:26.019025 containerd[2019]: time="2026-04-13T19:27:26.018880821Z" level=warning msg="cleaning up after shim disconnected" id=fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab namespace=k8s.io
Apr 13 19:27:26.019025 containerd[2019]: time="2026-04-13T19:27:26.018906309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:26.050820 containerd[2019]: time="2026-04-13T19:27:26.050757753Z" level=info msg="StopContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" returns successfully"
Apr 13 19:27:26.052074 containerd[2019]: time="2026-04-13T19:27:26.052017669Z" level=info msg="StopPodSandbox for \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\""
Apr 13 19:27:26.052424 containerd[2019]: time="2026-04-13T19:27:26.052266225Z" level=info msg="Container to stop \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.057850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384-shm.mount: Deactivated successfully.
Apr 13 19:27:26.060100 containerd[2019]: time="2026-04-13T19:27:26.059948133Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:27:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 19:27:26.066527 containerd[2019]: time="2026-04-13T19:27:26.066458913Z" level=info msg="StopContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" returns successfully"
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067520469Z" level=info msg="StopPodSandbox for \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\""
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067590273Z" level=info msg="Container to stop \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067617909Z" level=info msg="Container to stop \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067641153Z" level=info msg="Container to stop \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067666641Z" level=info msg="Container to stop \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.067958 containerd[2019]: time="2026-04-13T19:27:26.067692069Z" level=info msg="Container to stop \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 19:27:26.074215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e-shm.mount: Deactivated successfully.
Apr 13 19:27:26.081516 systemd[1]: cri-containerd-56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384.scope: Deactivated successfully.
Apr 13 19:27:26.092278 systemd[1]: cri-containerd-fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e.scope: Deactivated successfully.
Apr 13 19:27:26.147509 containerd[2019]: time="2026-04-13T19:27:26.147230278Z" level=info msg="shim disconnected" id=56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384 namespace=k8s.io
Apr 13 19:27:26.147509 containerd[2019]: time="2026-04-13T19:27:26.147320926Z" level=warning msg="cleaning up after shim disconnected" id=56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384 namespace=k8s.io
Apr 13 19:27:26.147509 containerd[2019]: time="2026-04-13T19:27:26.147344530Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:26.148343 containerd[2019]: time="2026-04-13T19:27:26.147772942Z" level=info msg="shim disconnected" id=fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e namespace=k8s.io
Apr 13 19:27:26.148343 containerd[2019]: time="2026-04-13T19:27:26.148246210Z" level=warning msg="cleaning up after shim disconnected" id=fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e namespace=k8s.io
Apr 13 19:27:26.148508 containerd[2019]: time="2026-04-13T19:27:26.148271902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:26.180481 containerd[2019]: time="2026-04-13T19:27:26.180410266Z" level=info msg="TearDown network for sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" successfully"
Apr 13 19:27:26.180481 containerd[2019]: time="2026-04-13T19:27:26.180471130Z" level=info msg="StopPodSandbox for \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" returns successfully"
Apr 13 19:27:26.184455 containerd[2019]: time="2026-04-13T19:27:26.184016926Z" level=info msg="TearDown network for sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" successfully"
Apr 13 19:27:26.184455 containerd[2019]: time="2026-04-13T19:27:26.184085494Z" level=info msg="StopPodSandbox for \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" returns successfully"
Apr 13 19:27:26.327555 kubelet[3347]: I0413 19:27:26.325094 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztdxg\" (UniqueName: \"kubernetes.io/projected/5431c25c-e09c-4c91-8e25-9a27cece6f71-kube-api-access-ztdxg\") pod \"5431c25c-e09c-4c91-8e25-9a27cece6f71\" (UID: \"5431c25c-e09c-4c91-8e25-9a27cece6f71\") "
Apr 13 19:27:26.327555 kubelet[3347]: I0413 19:27:26.325160 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-etc-cni-netd\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") "
Apr 13 19:27:26.327555 kubelet[3347]: I0413 19:27:26.325199 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-clustermesh-secrets\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") "
Apr 13 19:27:26.327555 kubelet[3347]: I0413 19:27:26.325234 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-lib-modules\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") "
Apr 13
19:27:26.327555 kubelet[3347]: I0413 19:27:26.325269 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hostproc\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.327555 kubelet[3347]: I0413 19:27:26.325301 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-net\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325335 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-bpf-maps\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325366 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cni-path\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325397 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-xtables-lock\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325437 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-config-path\") pod 
\"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325473 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-kernel\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328460 kubelet[3347]: I0413 19:27:26.325509 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5431c25c-e09c-4c91-8e25-9a27cece6f71-cilium-config-path\") pod \"5431c25c-e09c-4c91-8e25-9a27cece6f71\" (UID: \"5431c25c-e09c-4c91-8e25-9a27cece6f71\") " Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325568 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgjpj\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-kube-api-access-cgjpj\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325610 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hubble-tls\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325645 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-cgroup\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325683 3347 reconciler_common.go:163] "operationExecutor.UnmountVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-run\") pod \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\" (UID: \"c6422f07-b3a7-429c-bc58-b1cf324d5e4e\") " Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325798 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.328787 kubelet[3347]: I0413 19:27:26.325861 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330277 kubelet[3347]: I0413 19:27:26.327246 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330277 kubelet[3347]: I0413 19:27:26.327338 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330277 kubelet[3347]: I0413 19:27:26.327403 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330277 kubelet[3347]: I0413 19:27:26.327447 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330277 kubelet[3347]: I0413 19:27:26.327511 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.330635 kubelet[3347]: I0413 19:27:26.327573 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.333021 kubelet[3347]: I0413 19:27:26.332903 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.338587 kubelet[3347]: I0413 19:27:26.338521 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:27:26.338853 kubelet[3347]: I0413 19:27:26.338705 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:27:26.338853 kubelet[3347]: I0413 19:27:26.338831 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5431c25c-e09c-4c91-8e25-9a27cece6f71-kube-api-access-ztdxg" (OuterVolumeSpecName: "kube-api-access-ztdxg") pod "5431c25c-e09c-4c91-8e25-9a27cece6f71" (UID: "5431c25c-e09c-4c91-8e25-9a27cece6f71"). InnerVolumeSpecName "kube-api-access-ztdxg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:27:26.343352 kubelet[3347]: I0413 19:27:26.343169 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-kube-api-access-cgjpj" (OuterVolumeSpecName: "kube-api-access-cgjpj") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "kube-api-access-cgjpj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:27:26.345371 kubelet[3347]: I0413 19:27:26.345213 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:27:26.346685 kubelet[3347]: I0413 19:27:26.346471 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6422f07-b3a7-429c-bc58-b1cf324d5e4e" (UID: "c6422f07-b3a7-429c-bc58-b1cf324d5e4e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:27:26.349625 kubelet[3347]: I0413 19:27:26.349530 3347 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5431c25c-e09c-4c91-8e25-9a27cece6f71-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5431c25c-e09c-4c91-8e25-9a27cece6f71" (UID: "5431c25c-e09c-4c91-8e25-9a27cece6f71"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426726 3347 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cgjpj\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-kube-api-access-cgjpj\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426771 3347 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hubble-tls\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426801 3347 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-cgroup\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426824 3347 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-run\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426847 3347 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztdxg\" (UniqueName: \"kubernetes.io/projected/5431c25c-e09c-4c91-8e25-9a27cece6f71-kube-api-access-ztdxg\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426884 3347 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-etc-cni-netd\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426910 3347 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-clustermesh-secrets\") on node \"ip-172-31-17-32\" 
DevicePath \"\"" Apr 13 19:27:26.427182 kubelet[3347]: I0413 19:27:26.426932 3347 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-lib-modules\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.426954 3347 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-hostproc\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427008 3347 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-net\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427033 3347 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-bpf-maps\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427053 3347 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cni-path\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427072 3347 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-xtables-lock\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427092 3347 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-cilium-config-path\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427116 3347 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6422f07-b3a7-429c-bc58-b1cf324d5e4e-host-proc-sys-kernel\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.427696 kubelet[3347]: I0413 19:27:26.427137 3347 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5431c25c-e09c-4c91-8e25-9a27cece6f71-cilium-config-path\") on node \"ip-172-31-17-32\" DevicePath \"\"" Apr 13 19:27:26.448626 systemd[1]: Removed slice kubepods-burstable-podc6422f07_b3a7_429c_bc58_b1cf324d5e4e.slice - libcontainer container kubepods-burstable-podc6422f07_b3a7_429c_bc58_b1cf324d5e4e.slice. Apr 13 19:27:26.449163 systemd[1]: kubepods-burstable-podc6422f07_b3a7_429c_bc58_b1cf324d5e4e.slice: Consumed 14.878s CPU time. Apr 13 19:27:26.453322 systemd[1]: Removed slice kubepods-besteffort-pod5431c25c_e09c_4c91_8e25_9a27cece6f71.slice - libcontainer container kubepods-besteffort-pod5431c25c_e09c_4c91_8e25_9a27cece6f71.slice. Apr 13 19:27:26.724769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384-rootfs.mount: Deactivated successfully. Apr 13 19:27:26.724945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e-rootfs.mount: Deactivated successfully. Apr 13 19:27:26.725104 systemd[1]: var-lib-kubelet-pods-5431c25c\x2de09c\x2d4c91\x2d8e25\x2d9a27cece6f71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztdxg.mount: Deactivated successfully. Apr 13 19:27:26.725239 systemd[1]: var-lib-kubelet-pods-c6422f07\x2db3a7\x2d429c\x2dbc58\x2db1cf324d5e4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcgjpj.mount: Deactivated successfully. 
Apr 13 19:27:26.725381 systemd[1]: var-lib-kubelet-pods-c6422f07\x2db3a7\x2d429c\x2dbc58\x2db1cf324d5e4e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 13 19:27:26.725515 systemd[1]: var-lib-kubelet-pods-c6422f07\x2db3a7\x2d429c\x2dbc58\x2db1cf324d5e4e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 13 19:27:26.921725 kubelet[3347]: I0413 19:27:26.921351 3347 scope.go:117] "RemoveContainer" containerID="fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab" Apr 13 19:27:26.932187 containerd[2019]: time="2026-04-13T19:27:26.931047002Z" level=info msg="RemoveContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\"" Apr 13 19:27:26.944961 containerd[2019]: time="2026-04-13T19:27:26.943557242Z" level=info msg="RemoveContainer for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" returns successfully" Apr 13 19:27:26.948442 kubelet[3347]: I0413 19:27:26.946337 3347 scope.go:117] "RemoveContainer" containerID="fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab" Apr 13 19:27:26.948442 kubelet[3347]: E0413 19:27:26.946882 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\": not found" containerID="fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab" Apr 13 19:27:26.948442 kubelet[3347]: I0413 19:27:26.946929 3347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab"} err="failed to get container status \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\": not found" Apr 13 
19:27:26.948442 kubelet[3347]: I0413 19:27:26.947313 3347 scope.go:117] "RemoveContainer" containerID="5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505" Apr 13 19:27:26.948802 containerd[2019]: time="2026-04-13T19:27:26.946644938Z" level=error msg="ContainerStatus for \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe27503faf70c54cfba9829649d0c91438f7838643ad3add9607641d6f381aab\": not found" Apr 13 19:27:26.956834 containerd[2019]: time="2026-04-13T19:27:26.956786750Z" level=info msg="RemoveContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\"" Apr 13 19:27:26.962581 containerd[2019]: time="2026-04-13T19:27:26.962494274Z" level=info msg="RemoveContainer for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" returns successfully" Apr 13 19:27:26.963240 kubelet[3347]: I0413 19:27:26.962910 3347 scope.go:117] "RemoveContainer" containerID="1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10" Apr 13 19:27:26.969577 containerd[2019]: time="2026-04-13T19:27:26.969271106Z" level=info msg="RemoveContainer for \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\"" Apr 13 19:27:26.976478 containerd[2019]: time="2026-04-13T19:27:26.975938246Z" level=info msg="RemoveContainer for \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\" returns successfully" Apr 13 19:27:26.977424 kubelet[3347]: I0413 19:27:26.976775 3347 scope.go:117] "RemoveContainer" containerID="6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834" Apr 13 19:27:26.981517 containerd[2019]: time="2026-04-13T19:27:26.981396914Z" level=info msg="RemoveContainer for \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\"" Apr 13 19:27:26.985504 containerd[2019]: time="2026-04-13T19:27:26.985447394Z" level=info msg="RemoveContainer for 
\"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\" returns successfully" Apr 13 19:27:26.985971 kubelet[3347]: I0413 19:27:26.985861 3347 scope.go:117] "RemoveContainer" containerID="656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811" Apr 13 19:27:26.991808 containerd[2019]: time="2026-04-13T19:27:26.991268462Z" level=info msg="RemoveContainer for \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\"" Apr 13 19:27:26.998947 containerd[2019]: time="2026-04-13T19:27:26.998599214Z" level=info msg="RemoveContainer for \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\" returns successfully" Apr 13 19:27:26.999253 kubelet[3347]: I0413 19:27:26.999193 3347 scope.go:117] "RemoveContainer" containerID="a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3" Apr 13 19:27:27.002584 containerd[2019]: time="2026-04-13T19:27:27.002531638Z" level=info msg="RemoveContainer for \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\"" Apr 13 19:27:27.006090 containerd[2019]: time="2026-04-13T19:27:27.006032482Z" level=info msg="RemoveContainer for \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\" returns successfully" Apr 13 19:27:27.006672 kubelet[3347]: I0413 19:27:27.006517 3347 scope.go:117] "RemoveContainer" containerID="5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505" Apr 13 19:27:27.007448 containerd[2019]: time="2026-04-13T19:27:27.007003654Z" level=error msg="ContainerStatus for \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\": not found" Apr 13 19:27:27.007576 kubelet[3347]: E0413 19:27:27.007252 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\": not found" containerID="5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505" Apr 13 19:27:27.007576 kubelet[3347]: I0413 19:27:27.007294 3347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505"} err="failed to get container status \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e088f0a4166e4956994570f34c5d9ac7bb7f994a90a241d291782ce6743c505\": not found" Apr 13 19:27:27.007576 kubelet[3347]: I0413 19:27:27.007324 3347 scope.go:117] "RemoveContainer" containerID="1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10" Apr 13 19:27:27.007766 containerd[2019]: time="2026-04-13T19:27:27.007626334Z" level=error msg="ContainerStatus for \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\": not found" Apr 13 19:27:27.008299 kubelet[3347]: E0413 19:27:27.008242 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\": not found" containerID="1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10" Apr 13 19:27:27.008594 kubelet[3347]: I0413 19:27:27.008293 3347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10"} err="failed to get container status \"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"1b172e4cdaecc5a498f2153b3dffd38e408e130aa2f0b81bc8f45875424c3e10\": not found" Apr 13 19:27:27.008594 kubelet[3347]: I0413 19:27:27.008325 3347 scope.go:117] "RemoveContainer" containerID="6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834" Apr 13 19:27:27.009181 kubelet[3347]: E0413 19:27:27.008950 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\": not found" containerID="6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834" Apr 13 19:27:27.009181 kubelet[3347]: I0413 19:27:27.009010 3347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834"} err="failed to get container status \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\": not found" Apr 13 19:27:27.009181 kubelet[3347]: I0413 19:27:27.009046 3347 scope.go:117] "RemoveContainer" containerID="656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811" Apr 13 19:27:27.009390 containerd[2019]: time="2026-04-13T19:27:27.008669410Z" level=error msg="ContainerStatus for \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2f9e4192ee2f456f3cd8e4982c4fec63b02acee9de6de4b466fa62f97d7834\": not found" Apr 13 19:27:27.010150 containerd[2019]: time="2026-04-13T19:27:27.009658642Z" level=error msg="ContainerStatus for \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\": not found" Apr 13 19:27:27.010276 kubelet[3347]: E0413 19:27:27.009907 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\": not found" containerID="656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811" Apr 13 19:27:27.010276 kubelet[3347]: I0413 19:27:27.009945 3347 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811"} err="failed to get container status \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\": rpc error: code = NotFound desc = an error occurred when try to find container \"656e96c6f7dff8bc5e317ba0da540f381496371ed8bab6c6005d2825b754b811\": not found" Apr 13 19:27:27.010276 kubelet[3347]: I0413 19:27:27.010011 3347 scope.go:117] "RemoveContainer" containerID="a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3" Apr 13 19:27:27.010471 containerd[2019]: time="2026-04-13T19:27:27.010318330Z" level=error msg="ContainerStatus for \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\": not found" Apr 13 19:27:27.010808 kubelet[3347]: E0413 19:27:27.010652 3347 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\": not found" containerID="a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3" Apr 13 19:27:27.010808 kubelet[3347]: I0413 19:27:27.010694 3347 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3"} err="failed to get container status \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"a287601e1dc5c2511438034d2baa61396b0a05314cef76be115121080bdab6e3\": not found" Apr 13 19:27:27.786749 sshd[4930]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:27.793196 systemd[1]: sshd@21-172.31.17.32:22-4.175.71.9:36162.service: Deactivated successfully. Apr 13 19:27:27.797215 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 19:27:27.797894 systemd[1]: session-22.scope: Consumed 2.108s CPU time. Apr 13 19:27:27.801928 systemd-logind[2000]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:27:27.804236 systemd-logind[2000]: Removed session 22. Apr 13 19:27:27.966536 systemd[1]: Started sshd@22-172.31.17.32:22-4.175.71.9:38270.service - OpenSSH per-connection server daemon (4.175.71.9:38270). 
Apr 13 19:27:28.440641 kubelet[3347]: I0413 19:27:28.440586 3347 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5431c25c-e09c-4c91-8e25-9a27cece6f71" path="/var/lib/kubelet/pods/5431c25c-e09c-4c91-8e25-9a27cece6f71/volumes" Apr 13 19:27:28.445027 kubelet[3347]: I0413 19:27:28.443644 3347 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6422f07-b3a7-429c-bc58-b1cf324d5e4e" path="/var/lib/kubelet/pods/c6422f07-b3a7-429c-bc58-b1cf324d5e4e/volumes" Apr 13 19:27:28.457530 containerd[2019]: time="2026-04-13T19:27:28.457483129Z" level=info msg="StopPodSandbox for \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\"" Apr 13 19:27:28.458452 containerd[2019]: time="2026-04-13T19:27:28.458395261Z" level=info msg="TearDown network for sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" successfully" Apr 13 19:27:28.458452 containerd[2019]: time="2026-04-13T19:27:28.458437741Z" level=info msg="StopPodSandbox for \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" returns successfully" Apr 13 19:27:28.459424 containerd[2019]: time="2026-04-13T19:27:28.459362881Z" level=info msg="RemovePodSandbox for \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\"" Apr 13 19:27:28.459525 containerd[2019]: time="2026-04-13T19:27:28.459422377Z" level=info msg="Forcibly stopping sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\"" Apr 13 19:27:28.459603 containerd[2019]: time="2026-04-13T19:27:28.459518857Z" level=info msg="TearDown network for sandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" successfully" Apr 13 19:27:28.463098 containerd[2019]: time="2026-04-13T19:27:28.463035289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Apr 13 19:27:28.463216 containerd[2019]: time="2026-04-13T19:27:28.463121473Z" level=info msg="RemovePodSandbox \"fa74ce4db06b5bb063225e0c99c8476317e1241cc8c9a2bd57d66d32e3f78b4e\" returns successfully" Apr 13 19:27:28.464336 containerd[2019]: time="2026-04-13T19:27:28.463853365Z" level=info msg="StopPodSandbox for \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\"" Apr 13 19:27:28.464336 containerd[2019]: time="2026-04-13T19:27:28.464005729Z" level=info msg="TearDown network for sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" successfully" Apr 13 19:27:28.464336 containerd[2019]: time="2026-04-13T19:27:28.464046493Z" level=info msg="StopPodSandbox for \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" returns successfully" Apr 13 19:27:28.464927 containerd[2019]: time="2026-04-13T19:27:28.464825449Z" level=info msg="RemovePodSandbox for \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\"" Apr 13 19:27:28.464927 containerd[2019]: time="2026-04-13T19:27:28.464877781Z" level=info msg="Forcibly stopping sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\"" Apr 13 19:27:28.465212 containerd[2019]: time="2026-04-13T19:27:28.464991565Z" level=info msg="TearDown network for sandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" successfully" Apr 13 19:27:28.468604 containerd[2019]: time="2026-04-13T19:27:28.468535705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:27:28.468826 containerd[2019]: time="2026-04-13T19:27:28.468614641Z" level=info msg="RemovePodSandbox \"56f44f9db5e1e23e0a1a1b5ea4dd9cfe11cc08bd9396cc0da438942c46a02384\" returns successfully" Apr 13 19:27:28.605462 ntpd[1993]: Deleting interface #12 lxc_health, fe80::4d5:d0ff:fea5:db4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=92 secs Apr 13 19:27:28.605970 ntpd[1993]: 13 Apr 19:27:28 ntpd[1993]: Deleting interface #12 lxc_health, fe80::4d5:d0ff:fea5:db4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=92 secs Apr 13 19:27:28.643122 kubelet[3347]: E0413 19:27:28.643033 3347 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:27:28.971564 sshd[5089]: Accepted publickey for core from 4.175.71.9 port 38270 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:28.973329 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:28.982227 systemd-logind[2000]: New session 23 of user core. Apr 13 19:27:28.989571 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:27:31.369035 kubelet[3347]: I0413 19:27:31.366746 3347 setters.go:543] "Node became not ready" node="ip-172-31-17-32" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T19:27:31Z","lastTransitionTime":"2026-04-13T19:27:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 13 19:27:31.368129 systemd[1]: Created slice kubepods-burstable-pod12d4dc76_48d4_4b0b_a9e7_0fd2b15506db.slice - libcontainer container kubepods-burstable-pod12d4dc76_48d4_4b0b_a9e7_0fd2b15506db.slice. 
Apr 13 19:27:31.462006 kubelet[3347]: I0413 19:27:31.461924 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-hostproc\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462179 kubelet[3347]: I0413 19:27:31.462016 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-etc-cni-netd\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462179 kubelet[3347]: I0413 19:27:31.462056 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-cilium-run\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462179 kubelet[3347]: I0413 19:27:31.462090 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-lib-modules\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462179 kubelet[3347]: I0413 19:27:31.462129 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-host-proc-sys-kernel\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462179 kubelet[3347]: I0413 19:27:31.462167 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-cni-path\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462201 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-hubble-tls\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462241 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-bpf-maps\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462273 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-cilium-cgroup\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462307 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-clustermesh-secrets\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462341 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-cilium-ipsec-secrets\") pod 
\"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462443 kubelet[3347]: I0413 19:27:31.462374 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-host-proc-sys-net\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462735 kubelet[3347]: I0413 19:27:31.462408 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntfmt\" (UniqueName: \"kubernetes.io/projected/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-kube-api-access-ntfmt\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462735 kubelet[3347]: I0413 19:27:31.462440 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-xtables-lock\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.462735 kubelet[3347]: I0413 19:27:31.462480 3347 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12d4dc76-48d4-4b0b-a9e7-0fd2b15506db-cilium-config-path\") pod \"cilium-mx6zv\" (UID: \"12d4dc76-48d4-4b0b-a9e7-0fd2b15506db\") " pod="kube-system/cilium-mx6zv" Apr 13 19:27:31.510143 sshd[5089]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:31.516543 systemd-logind[2000]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:27:31.516874 systemd[1]: sshd@22-172.31.17.32:22-4.175.71.9:38270.service: Deactivated successfully. Apr 13 19:27:31.521313 systemd[1]: session-23.scope: Deactivated successfully. 
Apr 13 19:27:31.523205 systemd[1]: session-23.scope: Consumed 1.716s CPU time. Apr 13 19:27:31.527721 systemd-logind[2000]: Removed session 23. Apr 13 19:27:31.680379 containerd[2019]: time="2026-04-13T19:27:31.679742561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx6zv,Uid:12d4dc76-48d4-4b0b-a9e7-0fd2b15506db,Namespace:kube-system,Attempt:0,}" Apr 13 19:27:31.699515 systemd[1]: Started sshd@23-172.31.17.32:22-4.175.71.9:38274.service - OpenSSH per-connection server daemon (4.175.71.9:38274). Apr 13 19:27:31.740404 containerd[2019]: time="2026-04-13T19:27:31.739260702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:27:31.740404 containerd[2019]: time="2026-04-13T19:27:31.739361730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:27:31.740404 containerd[2019]: time="2026-04-13T19:27:31.739399626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:31.740404 containerd[2019]: time="2026-04-13T19:27:31.739549158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:27:31.773331 systemd[1]: Started cri-containerd-a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065.scope - libcontainer container a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065. 
Apr 13 19:27:31.813426 containerd[2019]: time="2026-04-13T19:27:31.813295086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx6zv,Uid:12d4dc76-48d4-4b0b-a9e7-0fd2b15506db,Namespace:kube-system,Attempt:0,} returns sandbox id \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\"" Apr 13 19:27:31.822391 containerd[2019]: time="2026-04-13T19:27:31.822063210Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:27:31.838716 containerd[2019]: time="2026-04-13T19:27:31.838537734Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d\"" Apr 13 19:27:31.841158 containerd[2019]: time="2026-04-13T19:27:31.840933486Z" level=info msg="StartContainer for \"a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d\"" Apr 13 19:27:31.886385 systemd[1]: Started cri-containerd-a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d.scope - libcontainer container a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d. Apr 13 19:27:31.934344 containerd[2019]: time="2026-04-13T19:27:31.934185835Z" level=info msg="StartContainer for \"a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d\" returns successfully" Apr 13 19:27:31.954602 systemd[1]: cri-containerd-a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d.scope: Deactivated successfully. 
Apr 13 19:27:32.038694 containerd[2019]: time="2026-04-13T19:27:32.038360655Z" level=info msg="shim disconnected" id=a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d namespace=k8s.io Apr 13 19:27:32.038694 containerd[2019]: time="2026-04-13T19:27:32.038433315Z" level=warning msg="cleaning up after shim disconnected" id=a9dc2e1df55520dce8ae74e747c53277b7b261fef477fb87afafa27a0b7d957d namespace=k8s.io Apr 13 19:27:32.038694 containerd[2019]: time="2026-04-13T19:27:32.038455155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:32.750753 sshd[5108]: Accepted publickey for core from 4.175.71.9 port 38274 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:32.754013 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:32.762090 systemd-logind[2000]: New session 24 of user core. Apr 13 19:27:32.772260 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:27:32.985367 containerd[2019]: time="2026-04-13T19:27:32.985198220Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:27:33.015656 containerd[2019]: time="2026-04-13T19:27:33.015482056Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009\"" Apr 13 19:27:33.020043 containerd[2019]: time="2026-04-13T19:27:33.016704220Z" level=info msg="StartContainer for \"20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009\"" Apr 13 19:27:33.017360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034323397.mount: Deactivated successfully. 
Apr 13 19:27:33.076447 systemd[1]: Started cri-containerd-20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009.scope - libcontainer container 20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009. Apr 13 19:27:33.134102 containerd[2019]: time="2026-04-13T19:27:33.134044577Z" level=info msg="StartContainer for \"20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009\" returns successfully" Apr 13 19:27:33.172301 systemd[1]: cri-containerd-20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009.scope: Deactivated successfully. Apr 13 19:27:33.227250 containerd[2019]: time="2026-04-13T19:27:33.227176085Z" level=info msg="shim disconnected" id=20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009 namespace=k8s.io Apr 13 19:27:33.227787 containerd[2019]: time="2026-04-13T19:27:33.227518925Z" level=warning msg="cleaning up after shim disconnected" id=20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009 namespace=k8s.io Apr 13 19:27:33.227787 containerd[2019]: time="2026-04-13T19:27:33.227547617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:33.247929 containerd[2019]: time="2026-04-13T19:27:33.247858889Z" level=warning msg="cleanup warnings time=\"2026-04-13T19:27:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 19:27:33.451003 sshd[5108]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:33.458535 systemd-logind[2000]: Session 24 logged out. Waiting for processes to exit. Apr 13 19:27:33.460264 systemd[1]: sshd@23-172.31.17.32:22-4.175.71.9:38274.service: Deactivated successfully. Apr 13 19:27:33.465122 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:27:33.468136 systemd-logind[2000]: Removed session 24. 
Apr 13 19:27:33.572174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20eca5376fee44d6eb402271b881cbb484de86002d9853803d48a6ebd3214009-rootfs.mount: Deactivated successfully. Apr 13 19:27:33.637468 systemd[1]: Started sshd@24-172.31.17.32:22-4.175.71.9:38280.service - OpenSSH per-connection server daemon (4.175.71.9:38280). Apr 13 19:27:33.644888 kubelet[3347]: E0413 19:27:33.644762 3347 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:27:33.999016 containerd[2019]: time="2026-04-13T19:27:33.998444409Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:27:34.049698 containerd[2019]: time="2026-04-13T19:27:34.049452761Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6\"" Apr 13 19:27:34.051040 containerd[2019]: time="2026-04-13T19:27:34.050762957Z" level=info msg="StartContainer for \"206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6\"" Apr 13 19:27:34.148851 systemd[1]: Started cri-containerd-206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6.scope - libcontainer container 206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6. Apr 13 19:27:34.222174 containerd[2019]: time="2026-04-13T19:27:34.222092538Z" level=info msg="StartContainer for \"206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6\" returns successfully" Apr 13 19:27:34.229789 systemd[1]: cri-containerd-206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6.scope: Deactivated successfully. 
Apr 13 19:27:34.277295 containerd[2019]: time="2026-04-13T19:27:34.276465150Z" level=info msg="shim disconnected" id=206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6 namespace=k8s.io Apr 13 19:27:34.277295 containerd[2019]: time="2026-04-13T19:27:34.276537198Z" level=warning msg="cleaning up after shim disconnected" id=206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6 namespace=k8s.io Apr 13 19:27:34.277295 containerd[2019]: time="2026-04-13T19:27:34.276559458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:34.571917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-206ee617ee599c8e5be4ee573c00957102b7b9dc2ce84e230dbdf72285be8bf6-rootfs.mount: Deactivated successfully. Apr 13 19:27:34.643892 sshd[5285]: Accepted publickey for core from 4.175.71.9 port 38280 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:27:34.646532 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:27:34.654622 systemd-logind[2000]: New session 25 of user core. Apr 13 19:27:34.661243 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 13 19:27:35.001005 containerd[2019]: time="2026-04-13T19:27:35.000930906Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:27:35.021032 containerd[2019]: time="2026-04-13T19:27:35.018860346Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235\"" Apr 13 19:27:35.021032 containerd[2019]: time="2026-04-13T19:27:35.019834902Z" level=info msg="StartContainer for \"552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235\"" Apr 13 19:27:35.086431 systemd[1]: Started cri-containerd-552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235.scope - libcontainer container 552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235. Apr 13 19:27:35.137905 systemd[1]: cri-containerd-552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235.scope: Deactivated successfully. Apr 13 19:27:35.142889 containerd[2019]: time="2026-04-13T19:27:35.142718059Z" level=info msg="StartContainer for \"552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235\" returns successfully" Apr 13 19:27:35.184327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:35.196276 containerd[2019]: time="2026-04-13T19:27:35.196196275Z" level=info msg="shim disconnected" id=552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235 namespace=k8s.io Apr 13 19:27:35.196783 containerd[2019]: time="2026-04-13T19:27:35.196505671Z" level=warning msg="cleaning up after shim disconnected" id=552f1fe6d28e2a461b89431d752e8b2550f06764dd10e178c3023785bcbae235 namespace=k8s.io Apr 13 19:27:35.196783 containerd[2019]: time="2026-04-13T19:27:35.196534183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:36.009185 containerd[2019]: time="2026-04-13T19:27:36.009095191Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:27:36.033671 containerd[2019]: time="2026-04-13T19:27:36.033371551Z" level=info msg="CreateContainer within sandbox \"a74d799e9b7f3e76f9bc94f7c94acdcd226d656cc2ffa795109b1a45402ae065\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"890b7628f3e5590832dab8062be683fae3959f9e431d9f7b7d18827a34a19e90\"" Apr 13 19:27:36.035571 containerd[2019]: time="2026-04-13T19:27:36.034578235Z" level=info msg="StartContainer for \"890b7628f3e5590832dab8062be683fae3959f9e431d9f7b7d18827a34a19e90\"" Apr 13 19:27:36.099319 systemd[1]: Started cri-containerd-890b7628f3e5590832dab8062be683fae3959f9e431d9f7b7d18827a34a19e90.scope - libcontainer container 890b7628f3e5590832dab8062be683fae3959f9e431d9f7b7d18827a34a19e90. 
Apr 13 19:27:36.158354 containerd[2019]: time="2026-04-13T19:27:36.158278196Z" level=info msg="StartContainer for \"890b7628f3e5590832dab8062be683fae3959f9e431d9f7b7d18827a34a19e90\" returns successfully" Apr 13 19:27:36.948053 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 13 19:27:41.267448 systemd-networkd[1939]: lxc_health: Link UP Apr 13 19:27:41.278293 (udev-worker)[5960]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:27:41.283423 systemd-networkd[1939]: lxc_health: Gained carrier Apr 13 19:27:41.719185 kubelet[3347]: I0413 19:27:41.718143 3347 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mx6zv" podStartSLOduration=10.718123107 podStartE2EDuration="10.718123107s" podCreationTimestamp="2026-04-13 19:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:37.093607316 +0000 UTC m=+128.977445261" watchObservedRunningTime="2026-04-13 19:27:41.718123107 +0000 UTC m=+133.601961040" Apr 13 19:27:42.448708 systemd-networkd[1939]: lxc_health: Gained IPv6LL Apr 13 19:27:44.605580 ntpd[1993]: Listen normally on 15 lxc_health [fe80::4886:faff:feea:637f%14]:123 Apr 13 19:27:44.606164 ntpd[1993]: 13 Apr 19:27:44 ntpd[1993]: Listen normally on 15 lxc_health [fe80::4886:faff:feea:637f%14]:123 Apr 13 19:27:49.152339 sshd[5285]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:49.161863 systemd[1]: sshd@24-172.31.17.32:22-4.175.71.9:38280.service: Deactivated successfully. Apr 13 19:27:49.169554 systemd[1]: session-25.scope: Deactivated successfully. Apr 13 19:27:49.172698 systemd-logind[2000]: Session 25 logged out. Waiting for processes to exit. Apr 13 19:27:49.179509 systemd-logind[2000]: Removed session 25.