Apr 13 19:23:16.254120 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Apr 13 19:23:16.254167 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Apr 13 18:04:44 -00 2026 Apr 13 19:23:16.254193 kernel: KASLR disabled due to lack of seed Apr 13 19:23:16.254211 kernel: efi: EFI v2.7 by EDK II Apr 13 19:23:16.254228 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18 Apr 13 19:23:16.254245 kernel: ACPI: Early table checksum verification disabled Apr 13 19:23:16.254264 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Apr 13 19:23:16.254281 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Apr 13 19:23:16.254298 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 13 19:23:16.254314 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 13 19:23:16.254336 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 13 19:23:16.254381 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Apr 13 19:23:16.254398 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Apr 13 19:23:16.254415 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Apr 13 19:23:16.254435 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 13 19:23:16.254458 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Apr 13 19:23:16.254477 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Apr 13 19:23:16.254495 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Apr 13 
19:23:16.254512 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Apr 13 19:23:16.254530 kernel: printk: bootconsole [uart0] enabled Apr 13 19:23:16.254548 kernel: NUMA: Failed to initialise from firmware Apr 13 19:23:16.254567 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Apr 13 19:23:16.254584 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Apr 13 19:23:16.254601 kernel: Zone ranges: Apr 13 19:23:16.254619 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Apr 13 19:23:16.254636 kernel: DMA32 empty Apr 13 19:23:16.254658 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Apr 13 19:23:16.254677 kernel: Movable zone start for each node Apr 13 19:23:16.254694 kernel: Early memory node ranges Apr 13 19:23:16.254712 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Apr 13 19:23:16.254730 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Apr 13 19:23:16.254747 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Apr 13 19:23:16.254764 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Apr 13 19:23:16.254781 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Apr 13 19:23:16.254798 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Apr 13 19:23:16.254816 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Apr 13 19:23:16.254833 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Apr 13 19:23:16.254850 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Apr 13 19:23:16.254871 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Apr 13 19:23:16.254889 kernel: psci: probing for conduit method from ACPI. Apr 13 19:23:16.254914 kernel: psci: PSCIv1.0 detected in firmware. 
Apr 13 19:23:16.254932 kernel: psci: Using standard PSCI v0.2 function IDs Apr 13 19:23:16.254951 kernel: psci: Trusted OS migration not required Apr 13 19:23:16.254974 kernel: psci: SMC Calling Convention v1.1 Apr 13 19:23:16.254993 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Apr 13 19:23:16.255012 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Apr 13 19:23:16.255031 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Apr 13 19:23:16.255050 kernel: pcpu-alloc: [0] 0 [0] 1 Apr 13 19:23:16.255068 kernel: Detected PIPT I-cache on CPU0 Apr 13 19:23:16.255086 kernel: CPU features: detected: GIC system register CPU interface Apr 13 19:23:16.255104 kernel: CPU features: detected: Spectre-v2 Apr 13 19:23:16.255122 kernel: CPU features: detected: Spectre-v3a Apr 13 19:23:16.255141 kernel: CPU features: detected: Spectre-BHB Apr 13 19:23:16.255159 kernel: CPU features: detected: ARM erratum 1742098 Apr 13 19:23:16.255181 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Apr 13 19:23:16.255199 kernel: alternatives: applying boot alternatives Apr 13 19:23:16.255220 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:23:16.255239 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 13 19:23:16.255258 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 13 19:23:16.255276 kernel: Fallback order for Node 0: 0 Apr 13 19:23:16.255294 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 991872 Apr 13 19:23:16.255311 kernel: Policy zone: Normal Apr 13 19:23:16.255329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 19:23:16.259405 kernel: software IO TLB: area num 2. Apr 13 19:23:16.259436 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Apr 13 19:23:16.259466 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved) Apr 13 19:23:16.259485 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 19:23:16.259504 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 19:23:16.259523 kernel: rcu: RCU event tracing is enabled. Apr 13 19:23:16.259542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 19:23:16.259561 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 19:23:16.259580 kernel: Tracing variant of Tasks RCU enabled. Apr 13 19:23:16.259598 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Apr 13 19:23:16.259616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 19:23:16.259634 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 13 19:23:16.259652 kernel: GICv3: 96 SPIs implemented Apr 13 19:23:16.259675 kernel: GICv3: 0 Extended SPIs implemented Apr 13 19:23:16.259693 kernel: Root IRQ handler: gic_handle_irq Apr 13 19:23:16.259711 kernel: GICv3: GICv3 features: 16 PPIs Apr 13 19:23:16.259729 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Apr 13 19:23:16.259747 kernel: ITS [mem 0x10080000-0x1009ffff] Apr 13 19:23:16.259765 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Apr 13 19:23:16.259784 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Apr 13 19:23:16.259802 kernel: GICv3: using LPI property table @0x00000004000d0000 Apr 13 19:23:16.259820 kernel: ITS: Using hypervisor restricted LPI range [128] Apr 13 19:23:16.259838 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Apr 13 19:23:16.259856 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 13 19:23:16.259874 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Apr 13 19:23:16.259897 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Apr 13 19:23:16.259916 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Apr 13 19:23:16.259934 kernel: Console: colour dummy device 80x25 Apr 13 19:23:16.259953 kernel: printk: console [tty1] enabled Apr 13 19:23:16.259971 kernel: ACPI: Core revision 20230628 Apr 13 19:23:16.259990 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Apr 13 19:23:16.260009 kernel: pid_max: default: 32768 minimum: 301 Apr 13 19:23:16.260027 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 19:23:16.260047 kernel: landlock: Up and running. Apr 13 19:23:16.260071 kernel: SELinux: Initializing. Apr 13 19:23:16.260090 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:23:16.260109 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 13 19:23:16.260128 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:23:16.260147 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 19:23:16.260179 kernel: rcu: Hierarchical SRCU implementation. Apr 13 19:23:16.260205 kernel: rcu: Max phase no-delay instances is 400. Apr 13 19:23:16.260225 kernel: Platform MSI: ITS@0x10080000 domain created Apr 13 19:23:16.260244 kernel: PCI/MSI: ITS@0x10080000 domain created Apr 13 19:23:16.260268 kernel: Remapping and enabling EFI services. Apr 13 19:23:16.260287 kernel: smp: Bringing up secondary CPUs ... Apr 13 19:23:16.260306 kernel: Detected PIPT I-cache on CPU1 Apr 13 19:23:16.260324 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Apr 13 19:23:16.260365 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Apr 13 19:23:16.260388 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Apr 13 19:23:16.260409 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 19:23:16.260455 kernel: SMP: Total of 2 processors activated. 
Apr 13 19:23:16.260479 kernel: CPU features: detected: 32-bit EL0 Support Apr 13 19:23:16.260503 kernel: CPU features: detected: 32-bit EL1 Support Apr 13 19:23:16.260523 kernel: CPU features: detected: CRC32 instructions Apr 13 19:23:16.260542 kernel: CPU: All CPU(s) started at EL1 Apr 13 19:23:16.260572 kernel: alternatives: applying system-wide alternatives Apr 13 19:23:16.260597 kernel: devtmpfs: initialized Apr 13 19:23:16.260618 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 19:23:16.260637 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 19:23:16.260657 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 19:23:16.260676 kernel: SMBIOS 3.0.0 present. Apr 13 19:23:16.260701 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Apr 13 19:23:16.260720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 19:23:16.260740 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 13 19:23:16.260759 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 13 19:23:16.260778 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 13 19:23:16.260798 kernel: audit: initializing netlink subsys (disabled) Apr 13 19:23:16.260817 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Apr 13 19:23:16.260836 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 19:23:16.260860 kernel: cpuidle: using governor menu Apr 13 19:23:16.260879 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Apr 13 19:23:16.260898 kernel: ASID allocator initialised with 65536 entries Apr 13 19:23:16.260918 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 19:23:16.260937 kernel: Serial: AMBA PL011 UART driver Apr 13 19:23:16.260956 kernel: Modules: 17488 pages in range for non-PLT usage Apr 13 19:23:16.260976 kernel: Modules: 509008 pages in range for PLT usage Apr 13 19:23:16.260995 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 19:23:16.261014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 19:23:16.261037 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 13 19:23:16.261057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 13 19:23:16.261076 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 19:23:16.261095 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 19:23:16.261115 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 13 19:23:16.261134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 13 19:23:16.261153 kernel: ACPI: Added _OSI(Module Device) Apr 13 19:23:16.261172 kernel: ACPI: Added _OSI(Processor Device) Apr 13 19:23:16.261191 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 19:23:16.261214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 13 19:23:16.261233 kernel: ACPI: Interpreter enabled Apr 13 19:23:16.261252 kernel: ACPI: Using GIC for interrupt routing Apr 13 19:23:16.261271 kernel: ACPI: MCFG table detected, 1 entries Apr 13 19:23:16.261291 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Apr 13 19:23:16.262806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 13 19:23:16.263042 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 13 19:23:16.263255 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 13 
19:23:16.263512 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Apr 13 19:23:16.263734 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Apr 13 19:23:16.263760 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Apr 13 19:23:16.263780 kernel: acpiphp: Slot [1] registered Apr 13 19:23:16.263799 kernel: acpiphp: Slot [2] registered Apr 13 19:23:16.263818 kernel: acpiphp: Slot [3] registered Apr 13 19:23:16.263837 kernel: acpiphp: Slot [4] registered Apr 13 19:23:16.263856 kernel: acpiphp: Slot [5] registered Apr 13 19:23:16.263882 kernel: acpiphp: Slot [6] registered Apr 13 19:23:16.263901 kernel: acpiphp: Slot [7] registered Apr 13 19:23:16.263920 kernel: acpiphp: Slot [8] registered Apr 13 19:23:16.263939 kernel: acpiphp: Slot [9] registered Apr 13 19:23:16.263959 kernel: acpiphp: Slot [10] registered Apr 13 19:23:16.263978 kernel: acpiphp: Slot [11] registered Apr 13 19:23:16.263997 kernel: acpiphp: Slot [12] registered Apr 13 19:23:16.264016 kernel: acpiphp: Slot [13] registered Apr 13 19:23:16.264035 kernel: acpiphp: Slot [14] registered Apr 13 19:23:16.264054 kernel: acpiphp: Slot [15] registered Apr 13 19:23:16.264078 kernel: acpiphp: Slot [16] registered Apr 13 19:23:16.264097 kernel: acpiphp: Slot [17] registered Apr 13 19:23:16.264116 kernel: acpiphp: Slot [18] registered Apr 13 19:23:16.264135 kernel: acpiphp: Slot [19] registered Apr 13 19:23:16.264154 kernel: acpiphp: Slot [20] registered Apr 13 19:23:16.264173 kernel: acpiphp: Slot [21] registered Apr 13 19:23:16.264192 kernel: acpiphp: Slot [22] registered Apr 13 19:23:16.264211 kernel: acpiphp: Slot [23] registered Apr 13 19:23:16.264231 kernel: acpiphp: Slot [24] registered Apr 13 19:23:16.264254 kernel: acpiphp: Slot [25] registered Apr 13 19:23:16.264274 kernel: acpiphp: Slot [26] registered Apr 13 19:23:16.264293 kernel: acpiphp: Slot [27] registered Apr 13 19:23:16.264312 kernel: acpiphp: Slot [28] registered Apr 
13 19:23:16.264331 kernel: acpiphp: Slot [29] registered Apr 13 19:23:16.264428 kernel: acpiphp: Slot [30] registered Apr 13 19:23:16.264450 kernel: acpiphp: Slot [31] registered Apr 13 19:23:16.264470 kernel: PCI host bridge to bus 0000:00 Apr 13 19:23:16.264697 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Apr 13 19:23:16.264901 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 13 19:23:16.265096 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Apr 13 19:23:16.265290 kernel: pci_bus 0000:00: root bus resource [bus 00] Apr 13 19:23:16.265597 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Apr 13 19:23:16.265855 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Apr 13 19:23:16.266083 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Apr 13 19:23:16.266322 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 13 19:23:16.266595 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Apr 13 19:23:16.266822 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 13 19:23:16.267059 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 13 19:23:16.267284 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Apr 13 19:23:16.269618 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Apr 13 19:23:16.269871 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Apr 13 19:23:16.270096 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Apr 13 19:23:16.270295 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Apr 13 19:23:16.271439 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 13 19:23:16.271637 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Apr 13 19:23:16.271663 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 13 19:23:16.271684 kernel: ACPI: PCI: Interrupt link 
GSI1 configured for IRQ 36 Apr 13 19:23:16.271704 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 13 19:23:16.271723 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 13 19:23:16.271750 kernel: iommu: Default domain type: Translated Apr 13 19:23:16.271770 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 13 19:23:16.271789 kernel: efivars: Registered efivars operations Apr 13 19:23:16.271808 kernel: vgaarb: loaded Apr 13 19:23:16.271827 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 13 19:23:16.271846 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 19:23:16.271865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 19:23:16.271885 kernel: pnp: PnP ACPI init Apr 13 19:23:16.272101 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Apr 13 19:23:16.272134 kernel: pnp: PnP ACPI: found 1 devices Apr 13 19:23:16.272154 kernel: NET: Registered PF_INET protocol family Apr 13 19:23:16.272174 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 19:23:16.272193 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 19:23:16.272213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 19:23:16.272232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 19:23:16.272252 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 19:23:16.272271 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 19:23:16.272295 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:23:16.272315 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 19:23:16.272334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 19:23:16.273955 kernel: PCI: CLS 0 bytes, default 64 Apr 13 19:23:16.273978 kernel: kvm [1]: HYP mode not available Apr 13 
19:23:16.273999 kernel: Initialise system trusted keyrings Apr 13 19:23:16.274019 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 19:23:16.274039 kernel: Key type asymmetric registered Apr 13 19:23:16.274060 kernel: Asymmetric key parser 'x509' registered Apr 13 19:23:16.274090 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 13 19:23:16.274111 kernel: io scheduler mq-deadline registered Apr 13 19:23:16.274130 kernel: io scheduler kyber registered Apr 13 19:23:16.274149 kernel: io scheduler bfq registered Apr 13 19:23:16.275546 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Apr 13 19:23:16.275595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 13 19:23:16.275616 kernel: ACPI: button: Power Button [PWRB] Apr 13 19:23:16.275636 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Apr 13 19:23:16.275655 kernel: ACPI: button: Sleep Button [SLPB] Apr 13 19:23:16.275683 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 19:23:16.275704 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 13 19:23:16.275939 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Apr 13 19:23:16.275967 kernel: printk: console [ttyS0] disabled Apr 13 19:23:16.275987 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Apr 13 19:23:16.276006 kernel: printk: console [ttyS0] enabled Apr 13 19:23:16.276026 kernel: printk: bootconsole [uart0] disabled Apr 13 19:23:16.276045 kernel: thunder_xcv, ver 1.0 Apr 13 19:23:16.276064 kernel: thunder_bgx, ver 1.0 Apr 13 19:23:16.276089 kernel: nicpf, ver 1.0 Apr 13 19:23:16.276108 kernel: nicvf, ver 1.0 Apr 13 19:23:16.276327 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 13 19:23:16.277703 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-13T19:23:15 UTC (1776108195) Apr 13 19:23:16.277735 kernel: hid: raw HID events driver (C) Jiri 
Kosina Apr 13 19:23:16.277755 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Apr 13 19:23:16.277775 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 13 19:23:16.277794 kernel: watchdog: Hard watchdog permanently disabled Apr 13 19:23:16.277848 kernel: NET: Registered PF_INET6 protocol family Apr 13 19:23:16.277871 kernel: Segment Routing with IPv6 Apr 13 19:23:16.277890 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 19:23:16.277909 kernel: NET: Registered PF_PACKET protocol family Apr 13 19:23:16.277929 kernel: Key type dns_resolver registered Apr 13 19:23:16.277948 kernel: registered taskstats version 1 Apr 13 19:23:16.277967 kernel: Loading compiled-in X.509 certificates Apr 13 19:23:16.277987 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51f707dd0fb1eacaaa32bdbd733952de038a5bd7' Apr 13 19:23:16.278006 kernel: Key type .fscrypt registered Apr 13 19:23:16.278031 kernel: Key type fscrypt-provisioning registered Apr 13 19:23:16.278051 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 19:23:16.278070 kernel: ima: Allocated hash algorithm: sha1 Apr 13 19:23:16.278089 kernel: ima: No architecture policies found Apr 13 19:23:16.278108 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 13 19:23:16.278127 kernel: clk: Disabling unused clocks Apr 13 19:23:16.278146 kernel: Freeing unused kernel memory: 39424K Apr 13 19:23:16.278165 kernel: Run /init as init process Apr 13 19:23:16.278184 kernel: with arguments: Apr 13 19:23:16.278207 kernel: /init Apr 13 19:23:16.278226 kernel: with environment: Apr 13 19:23:16.278244 kernel: HOME=/ Apr 13 19:23:16.278263 kernel: TERM=linux Apr 13 19:23:16.278287 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 19:23:16.278311 systemd[1]: Detected virtualization amazon. Apr 13 19:23:16.278332 systemd[1]: Detected architecture arm64. Apr 13 19:23:16.278391 systemd[1]: Running in initrd. Apr 13 19:23:16.279944 systemd[1]: No hostname configured, using default hostname. Apr 13 19:23:16.279969 systemd[1]: Hostname set to . Apr 13 19:23:16.279992 systemd[1]: Initializing machine ID from VM UUID. Apr 13 19:23:16.280013 systemd[1]: Queued start job for default target initrd.target. Apr 13 19:23:16.280034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 19:23:16.280056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 19:23:16.280078 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 19:23:16.280100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Apr 13 19:23:16.280129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 19:23:16.280151 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 19:23:16.280176 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 19:23:16.280198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 19:23:16.280220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 19:23:16.280241 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 19:23:16.280266 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:23:16.280288 systemd[1]: Reached target slices.target - Slice Units. Apr 13 19:23:16.280309 systemd[1]: Reached target swap.target - Swaps. Apr 13 19:23:16.280330 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:23:16.280399 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 19:23:16.280425 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 19:23:16.280447 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 19:23:16.280468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 19:23:16.280489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 19:23:16.280517 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 19:23:16.280539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 19:23:16.280560 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:23:16.280581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Apr 13 19:23:16.280603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 19:23:16.280624 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 19:23:16.280645 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 19:23:16.280666 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 19:23:16.280687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 19:23:16.280713 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 19:23:16.280735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 19:23:16.280756 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 19:23:16.280777 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 19:23:16.280848 systemd-journald[251]: Collecting audit messages is disabled. Apr 13 19:23:16.280900 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 19:23:16.280923 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 19:23:16.280945 systemd-journald[251]: Journal started Apr 13 19:23:16.280989 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2f44c360668ee6f08fd34f8fb24040) is 8.0M, max 75.3M, 67.3M free. Apr 13 19:23:16.249659 systemd-modules-load[252]: Inserted module 'overlay' Apr 13 19:23:16.288489 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 19:23:16.290313 systemd-modules-load[252]: Inserted module 'br_netfilter' Apr 13 19:23:16.294830 kernel: Bridge firewalling registered Apr 13 19:23:16.294868 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 19:23:16.300453 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 13 19:23:16.313951 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 19:23:16.314373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 19:23:16.322614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:23:16.327766 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 19:23:16.335626 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 19:23:16.380414 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 19:23:16.389659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 19:23:16.399623 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:23:16.411649 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 19:23:16.430657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 19:23:16.443655 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 19:23:16.477590 dracut-cmdline[292]: dracut-dracut-053 Apr 13 19:23:16.484533 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06a955818c1cb85215c4fc3bbca340081bcaba3fb92fe20a32668615ff23854b Apr 13 19:23:16.522027 systemd-resolved[282]: Positive Trust Anchors: Apr 13 19:23:16.522062 systemd-resolved[282]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 19:23:16.522126 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 19:23:16.643384 kernel: SCSI subsystem initialized Apr 13 19:23:16.652367 kernel: Loading iSCSI transport class v2.0-870. Apr 13 19:23:16.664394 kernel: iscsi: registered transport (tcp) Apr 13 19:23:16.687655 kernel: iscsi: registered transport (qla4xxx) Apr 13 19:23:16.687731 kernel: QLogic iSCSI HBA Driver Apr 13 19:23:16.769429 kernel: random: crng init done Apr 13 19:23:16.769839 systemd-resolved[282]: Defaulting to hostname 'linux'. Apr 13 19:23:16.772044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 19:23:16.780410 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 19:23:16.806464 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 19:23:16.816675 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 19:23:16.854916 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 13 19:23:16.854994 kernel: device-mapper: uevent: version 1.0.3
Apr 13 19:23:16.857288 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 19:23:16.939401 kernel: raid6: neonx8 gen() 6692 MB/s
Apr 13 19:23:16.942377 kernel: raid6: neonx4 gen() 6544 MB/s
Apr 13 19:23:16.960381 kernel: raid6: neonx2 gen() 5463 MB/s
Apr 13 19:23:16.978378 kernel: raid6: neonx1 gen() 3953 MB/s
Apr 13 19:23:16.996379 kernel: raid6: int64x8 gen() 3821 MB/s
Apr 13 19:23:17.014378 kernel: raid6: int64x4 gen() 3720 MB/s
Apr 13 19:23:17.032380 kernel: raid6: int64x2 gen() 3607 MB/s
Apr 13 19:23:17.050492 kernel: raid6: int64x1 gen() 2756 MB/s
Apr 13 19:23:17.050540 kernel: raid6: using algorithm neonx8 gen() 6692 MB/s
Apr 13 19:23:17.069831 kernel: raid6: .... xor() 4808 MB/s, rmw enabled
Apr 13 19:23:17.069871 kernel: raid6: using neon recovery algorithm
Apr 13 19:23:17.079378 kernel: xor: measuring software checksum speed
Apr 13 19:23:17.079434 kernel: 8regs : 10959 MB/sec
Apr 13 19:23:17.080817 kernel: 32regs : 11947 MB/sec
Apr 13 19:23:17.082228 kernel: arm64_neon : 9498 MB/sec
Apr 13 19:23:17.082261 kernel: xor: using function: 32regs (11947 MB/sec)
Apr 13 19:23:17.168400 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 19:23:17.189421 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:23:17.198734 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:17.246315 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Apr 13 19:23:17.256769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:17.266696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 19:23:17.313384 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Apr 13 19:23:17.370557 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:23:17.387597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:23:17.502328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:17.516676 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 19:23:17.565602 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:23:17.577206 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:23:17.579812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:17.582091 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:23:17.598274 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 19:23:17.634456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:23:17.704436 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 13 19:23:17.704502 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 13 19:23:17.719296 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 19:23:17.719678 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 19:23:17.733400 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:60:76:9f:e6:0f
Apr 13 19:23:17.739920 (udev-worker)[534]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:17.741659 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:23:17.742615 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:17.757238 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:17.758165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:23:17.760536 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:17.786053 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 13 19:23:17.786096 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 19:23:17.760707 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:17.784933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:17.805397 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 19:23:17.814480 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 19:23:17.814550 kernel: GPT:9289727 != 33554431
Apr 13 19:23:17.814577 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 19:23:17.815490 kernel: GPT:9289727 != 33554431
Apr 13 19:23:17.816660 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 19:23:17.816702 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:17.819856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:17.840678 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 19:23:17.873420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:17.904402 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Apr 13 19:23:17.959793 kernel: BTRFS: device fsid ed38fcff-9752-482a-82dd-c0f0fcf94cdd devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Apr 13 19:23:17.990958 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 19:23:18.018926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 19:23:18.075702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:23:18.092463 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:18.095252 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 19:23:18.112721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 19:23:18.126881 disk-uuid[662]: Primary Header is updated.
Apr 13 19:23:18.126881 disk-uuid[662]: Secondary Entries is updated.
Apr 13 19:23:18.126881 disk-uuid[662]: Secondary Header is updated.
Apr 13 19:23:18.143543 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:18.151401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:18.159392 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:19.165407 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 19:23:19.167278 disk-uuid[663]: The operation has completed successfully.
Apr 13 19:23:19.343800 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 19:23:19.344026 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 19:23:19.406698 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 19:23:19.429204 sh[1006]: Success
Apr 13 19:23:19.455601 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 13 19:23:19.629439 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 19:23:19.641573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 19:23:19.650675 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 19:23:19.673166 kernel: BTRFS info (device dm-0): first mount of filesystem ed38fcff-9752-482a-82dd-c0f0fcf94cdd
Apr 13 19:23:19.673230 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:19.673269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 19:23:19.676999 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 19:23:19.677036 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 19:23:19.804380 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 19:23:19.806418 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 19:23:19.809493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 19:23:19.822765 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 19:23:19.832676 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 19:23:19.865415 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:19.865504 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:19.867424 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:19.876437 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:19.895653 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 19:23:19.900365 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:19.911483 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 19:23:19.928747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 19:23:20.019543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:23:20.032682 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:23:20.087509 systemd-networkd[1198]: lo: Link UP
Apr 13 19:23:20.087529 systemd-networkd[1198]: lo: Gained carrier
Apr 13 19:23:20.090094 systemd-networkd[1198]: Enumeration completed
Apr 13 19:23:20.091064 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:20.091071 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:23:20.093155 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:23:20.096409 systemd[1]: Reached target network.target - Network.
Apr 13 19:23:20.097094 systemd-networkd[1198]: eth0: Link UP
Apr 13 19:23:20.097102 systemd-networkd[1198]: eth0: Gained carrier
Apr 13 19:23:20.097119 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:20.139495 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.31.24/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:23:20.410999 ignition[1125]: Ignition 2.19.0
Apr 13 19:23:20.411027 ignition[1125]: Stage: fetch-offline
Apr 13 19:23:20.415022 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:20.415063 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:20.417369 ignition[1125]: Ignition finished successfully
Apr 13 19:23:20.422986 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:23:20.433681 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 19:23:20.469377 ignition[1207]: Ignition 2.19.0
Apr 13 19:23:20.469400 ignition[1207]: Stage: fetch
Apr 13 19:23:20.470075 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:20.470104 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:20.470278 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:20.484055 ignition[1207]: PUT result: OK
Apr 13 19:23:20.491498 ignition[1207]: parsed url from cmdline: ""
Apr 13 19:23:20.491564 ignition[1207]: no config URL provided
Apr 13 19:23:20.491580 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 19:23:20.491607 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Apr 13 19:23:20.491640 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:20.493629 ignition[1207]: PUT result: OK
Apr 13 19:23:20.493721 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 19:23:20.496295 ignition[1207]: GET result: OK
Apr 13 19:23:20.507098 unknown[1207]: fetched base config from "system"
Apr 13 19:23:20.496471 ignition[1207]: parsing config with SHA512: 91602179f11523d736a22400ed94bd7fd3e863ca25b4b63b39bf8a29eaedb48d7276681e0e6ba03196c96afba414252c030e1424ef99840e6e9b3a5795802525
Apr 13 19:23:20.507114 unknown[1207]: fetched base config from "system"
Apr 13 19:23:20.508180 ignition[1207]: fetch: fetch complete
Apr 13 19:23:20.507143 unknown[1207]: fetched user config from "aws"
Apr 13 19:23:20.508192 ignition[1207]: fetch: fetch passed
Apr 13 19:23:20.508283 ignition[1207]: Ignition finished successfully
Apr 13 19:23:20.521875 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 19:23:20.536620 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 19:23:20.566233 ignition[1213]: Ignition 2.19.0
Apr 13 19:23:20.567604 ignition[1213]: Stage: kargs
Apr 13 19:23:20.568707 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:20.568740 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:20.568894 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:20.572804 ignition[1213]: PUT result: OK
Apr 13 19:23:20.580732 ignition[1213]: kargs: kargs passed
Apr 13 19:23:20.580890 ignition[1213]: Ignition finished successfully
Apr 13 19:23:20.586191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 19:23:20.595700 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 19:23:20.623823 ignition[1219]: Ignition 2.19.0
Apr 13 19:23:20.623843 ignition[1219]: Stage: disks
Apr 13 19:23:20.625081 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:20.625107 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:20.625271 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:20.635384 ignition[1219]: PUT result: OK
Apr 13 19:23:20.640419 ignition[1219]: disks: disks passed
Apr 13 19:23:20.640754 ignition[1219]: Ignition finished successfully
Apr 13 19:23:20.645398 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 19:23:20.648145 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 19:23:20.650794 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 19:23:20.653540 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:23:20.655791 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:23:20.658394 systemd[1]: Reached target basic.target - Basic System.
Apr 13 19:23:20.673767 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 19:23:20.718522 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 19:23:20.724435 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 19:23:20.736654 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 19:23:20.825421 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 775210d8-8fbf-4f17-be2d-56007930061c r/w with ordered data mode. Quota mode: none.
Apr 13 19:23:20.825501 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 19:23:20.829719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:23:20.842565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:23:20.852527 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 19:23:20.856250 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 19:23:20.856329 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 19:23:20.877086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1246)
Apr 13 19:23:20.856399 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:23:20.884376 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:20.884443 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:20.884472 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:20.892394 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:20.896856 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:23:20.897208 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 19:23:20.916746 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 19:23:21.343176 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 19:23:21.353442 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Apr 13 19:23:21.363182 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 19:23:21.372063 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 19:23:21.402071 systemd-networkd[1198]: eth0: Gained IPv6LL
Apr 13 19:23:21.783710 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 19:23:21.793535 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 19:23:21.807647 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 19:23:21.829560 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 19:23:21.831776 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:21.867835 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 19:23:21.885214 ignition[1360]: INFO : Ignition 2.19.0
Apr 13 19:23:21.885214 ignition[1360]: INFO : Stage: mount
Apr 13 19:23:21.885214 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:21.885214 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:21.885214 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:21.897275 ignition[1360]: INFO : PUT result: OK
Apr 13 19:23:21.905459 ignition[1360]: INFO : mount: mount passed
Apr 13 19:23:21.905459 ignition[1360]: INFO : Ignition finished successfully
Apr 13 19:23:21.910032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 19:23:21.921553 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 19:23:21.948268 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 19:23:21.971212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1371)
Apr 13 19:23:21.971289 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 82e51161-2104-45f8-9ecc-3d62852b78d3
Apr 13 19:23:21.973302 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 13 19:23:21.974836 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 19:23:21.980387 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 19:23:21.984057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 19:23:22.020851 ignition[1388]: INFO : Ignition 2.19.0
Apr 13 19:23:22.020851 ignition[1388]: INFO : Stage: files
Apr 13 19:23:22.026055 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:22.026055 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:22.026055 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:22.033944 ignition[1388]: INFO : PUT result: OK
Apr 13 19:23:22.039191 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 19:23:22.046303 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 19:23:22.046303 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 19:23:22.111338 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 19:23:22.114651 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 19:23:22.118057 unknown[1388]: wrote ssh authorized keys file for user: core
Apr 13 19:23:22.120559 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 19:23:22.125438 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:23:22.129677 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 13 19:23:22.224619 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 19:23:22.387086 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 13 19:23:22.387086 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:23:22.387086 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 13 19:23:22.626112 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 19:23:22.761153 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 13 19:23:22.761153 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:22.772461 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 13 19:23:23.034590 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 19:23:23.404168 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 13 19:23:23.404168 ignition[1388]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 19:23:23.412317 ignition[1388]: INFO : files: files passed
Apr 13 19:23:23.412317 ignition[1388]: INFO : Ignition finished successfully
Apr 13 19:23:23.430468 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 19:23:23.449668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 19:23:23.462626 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 19:23:23.476153 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 19:23:23.479579 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 19:23:23.499227 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:23.499227 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:23.506780 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 19:23:23.514074 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:23:23.520792 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 19:23:23.532628 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 19:23:23.583914 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 19:23:23.584333 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 19:23:23.594497 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 19:23:23.596916 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 19:23:23.599437 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 19:23:23.611946 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 19:23:23.649509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:23:23.661648 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 19:23:23.686944 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:23.692277 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:23.695040 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 19:23:23.697267 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 19:23:23.697977 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 19:23:23.709529 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 19:23:23.714233 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 19:23:23.714707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 19:23:23.724519 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 19:23:23.729717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 19:23:23.734753 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 19:23:23.739803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 19:23:23.743619 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 19:23:23.746385 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 19:23:23.755653 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 19:23:23.757747 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 19:23:23.757995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 19:23:23.760827 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:23.763715 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:23.777663 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 19:23:23.780984 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:23.784317 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 19:23:23.784591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 19:23:23.794260 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 19:23:23.794724 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 19:23:23.802646 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 19:23:23.802867 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 19:23:23.814811 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 19:23:23.819494 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 19:23:23.820072 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:23.831859 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 19:23:23.845905 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 19:23:23.848514 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:23.854054 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 19:23:23.854327 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 19:23:23.876322 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 19:23:23.876560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 19:23:23.887105 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 19:23:23.898496 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 19:23:23.900729 ignition[1440]: INFO : Ignition 2.19.0
Apr 13 19:23:23.900729 ignition[1440]: INFO : Stage: umount
Apr 13 19:23:23.907699 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 19:23:23.907699 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 19:23:23.907699 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 19:23:23.902941 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 19:23:23.918187 ignition[1440]: INFO : PUT result: OK
Apr 13 19:23:23.921895 ignition[1440]: INFO : umount: umount passed
Apr 13 19:23:23.924524 ignition[1440]: INFO : Ignition finished successfully
Apr 13 19:23:23.924191 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 19:23:23.926401 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 19:23:23.930817 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 19:23:23.930988 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 19:23:23.934069 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 19:23:23.934167 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 19:23:23.936546 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 19:23:23.936630 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 19:23:23.939458 systemd[1]: Stopped target network.target - Network.
Apr 13 19:23:23.957278 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 19:23:23.957411 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 19:23:23.960830 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 19:23:23.962853 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 19:23:23.965146 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:23.967752 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 19:23:23.969664 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 19:23:23.972534 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 19:23:23.972613 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 19:23:23.974769 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 19:23:23.974843 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 19:23:23.977073 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 19:23:23.977156 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 19:23:23.979322 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 19:23:23.979432 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 19:23:23.981724 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 19:23:23.981816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 19:23:23.982203 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 19:23:23.982957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:23.998194 systemd-networkd[1198]: eth0: DHCPv6 lease lost
Apr 13 19:23:24.004841 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 19:23:24.005159 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 19:23:24.014129 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 19:23:24.014325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:24.031592 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 19:23:24.031714 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:24.055301 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 19:23:24.067099 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 19:23:24.067232 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 19:23:24.068244 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 19:23:24.068334 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:24.068552 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 19:23:24.068629 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:24.068893 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 19:23:24.068967 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:24.071974 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:24.112238 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 19:23:24.112753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:24.122300 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 19:23:24.122781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:24.130398 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 19:23:24.130483 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:24.132878 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 19:23:24.132967 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 19:23:24.135979 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 19:23:24.136065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 19:23:24.149994 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 19:23:24.150093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 19:23:24.168520 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 19:23:24.178313 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 19:23:24.178459 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:24.181328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 19:23:24.181446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:24.187070 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 19:23:24.187331 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 19:23:24.198703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 19:23:24.198962 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 19:23:24.214018 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 19:23:24.229668 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 19:23:24.248363 systemd[1]: Switching root.
Apr 13 19:23:24.305401 systemd-journald[251]: Journal stopped
Apr 13 19:23:27.277309 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 13 19:23:27.288792 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 19:23:27.288846 kernel: SELinux: policy capability open_perms=1
Apr 13 19:23:27.288878 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 19:23:27.288908 kernel: SELinux: policy capability always_check_network=0
Apr 13 19:23:27.288948 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 19:23:27.288979 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 19:23:27.289019 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 19:23:27.289063 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 19:23:27.289095 kernel: audit: type=1403 audit(1776108205.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 19:23:27.289129 systemd[1]: Successfully loaded SELinux policy in 82.024ms.
Apr 13 19:23:27.289178 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.369ms.
Apr 13 19:23:27.289213 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 19:23:27.289247 systemd[1]: Detected virtualization amazon.
Apr 13 19:23:27.289277 systemd[1]: Detected architecture arm64.
Apr 13 19:23:27.289312 systemd[1]: Detected first boot.
Apr 13 19:23:27.294392 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 19:23:27.294465 zram_generator::config[1482]: No configuration found.
Apr 13 19:23:27.294502 systemd[1]: Populated /etc with preset unit settings.
Apr 13 19:23:27.294535 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 19:23:27.294567 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 19:23:27.294599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:23:27.294633 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 19:23:27.294666 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 19:23:27.294699 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 19:23:27.294735 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 19:23:27.294768 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 19:23:27.294801 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 19:23:27.294834 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 19:23:27.294866 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 19:23:27.294898 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 19:23:27.294931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 19:23:27.294963 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 19:23:27.294994 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 19:23:27.295031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 19:23:27.295062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 19:23:27.295094 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 19:23:27.295128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 19:23:27.295158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 19:23:27.295190 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 19:23:27.295224 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 19:23:27.295259 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 19:23:27.295294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 19:23:27.295325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 19:23:27.295393 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 19:23:27.295427 systemd[1]: Reached target swap.target - Swaps.
Apr 13 19:23:27.295458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 19:23:27.295489 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 19:23:27.295519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 19:23:27.295549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 19:23:27.295587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 19:23:27.295617 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 19:23:27.295648 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 19:23:27.295678 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 19:23:27.295709 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 19:23:27.295741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 19:23:27.295773 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 19:23:27.295804 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 19:23:27.295835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 19:23:27.295871 systemd[1]: Reached target machines.target - Containers.
Apr 13 19:23:27.295903 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 19:23:27.295935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:23:27.295965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 19:23:27.295994 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 19:23:27.296027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:23:27.296061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:23:27.296090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:23:27.296122 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 19:23:27.296157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:23:27.296187 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 19:23:27.296217 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 19:23:27.296248 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 19:23:27.296281 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 19:23:27.296312 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 19:23:27.298418 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 19:23:27.298479 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 19:23:27.298519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 19:23:27.298554 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 19:23:27.298584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 19:23:27.298617 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 19:23:27.298648 systemd[1]: Stopped verity-setup.service.
Apr 13 19:23:27.298677 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 19:23:27.298707 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 19:23:27.298739 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 19:23:27.298769 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 19:23:27.298804 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 19:23:27.298834 kernel: ACPI: bus type drm_connector registered
Apr 13 19:23:27.298867 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 19:23:27.298896 kernel: loop: module loaded
Apr 13 19:23:27.298925 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 19:23:27.298957 kernel: fuse: init (API version 7.39)
Apr 13 19:23:27.298987 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 19:23:27.299017 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 19:23:27.299049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:23:27.299079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:23:27.299110 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:23:27.299140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:23:27.299171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:23:27.299204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:23:27.299239 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 19:23:27.299269 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 19:23:27.299298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:27.305450 systemd-journald[1560]: Collecting audit messages is disabled.
Apr 13 19:23:27.305560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:27.305594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 19:23:27.305628 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 19:23:27.305658 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 19:23:27.305691 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 19:23:27.305722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 19:23:27.305755 systemd-journald[1560]: Journal started
Apr 13 19:23:27.305830 systemd-journald[1560]: Runtime Journal (/run/log/journal/ec2f44c360668ee6f08fd34f8fb24040) is 8.0M, max 75.3M, 67.3M free.
Apr 13 19:23:26.499203 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 19:23:26.647725 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 13 19:23:26.648582 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 19:23:27.326384 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 19:23:27.326485 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 19:23:27.331374 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 19:23:27.345875 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 19:23:27.359631 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 19:23:27.374333 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 19:23:27.379378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:23:27.393865 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 19:23:27.399251 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:23:27.416413 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 19:23:27.420388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:23:27.438937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 19:23:27.456662 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 19:23:27.466214 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 19:23:27.479479 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 19:23:27.487050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 19:23:27.491152 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 19:23:27.495135 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 19:23:27.498425 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 19:23:27.502336 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 19:23:27.531655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 19:23:27.547954 kernel: loop0: detected capacity change from 0 to 114432
Apr 13 19:23:27.557479 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 19:23:27.568761 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 19:23:27.580820 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 19:23:27.588680 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 19:23:27.595886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 19:23:27.645493 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 19:23:27.653123 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 19:23:27.659595 systemd-journald[1560]: Time spent on flushing to /var/log/journal/ec2f44c360668ee6f08fd34f8fb24040 is 62.015ms for 912 entries.
Apr 13 19:23:27.659595 systemd-journald[1560]: System Journal (/var/log/journal/ec2f44c360668ee6f08fd34f8fb24040) is 8.0M, max 195.6M, 187.6M free.
Apr 13 19:23:27.730170 systemd-journald[1560]: Received client request to flush runtime journal.
Apr 13 19:23:27.730237 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 19:23:27.689079 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 19:23:27.709058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 19:23:27.715378 udevadm[1622]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 19:23:27.736457 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 19:23:27.756423 kernel: loop1: detected capacity change from 0 to 52536
Apr 13 19:23:27.813404 kernel: loop2: detected capacity change from 0 to 114328
Apr 13 19:23:27.817049 systemd-tmpfiles[1628]: ACLs are not supported, ignoring.
Apr 13 19:23:27.817091 systemd-tmpfiles[1628]: ACLs are not supported, ignoring.
Apr 13 19:23:27.831031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 19:23:27.918386 kernel: loop3: detected capacity change from 0 to 209336
Apr 13 19:23:28.280403 kernel: loop4: detected capacity change from 0 to 114432
Apr 13 19:23:28.300384 kernel: loop5: detected capacity change from 0 to 52536
Apr 13 19:23:28.316403 kernel: loop6: detected capacity change from 0 to 114328
Apr 13 19:23:28.338401 kernel: loop7: detected capacity change from 0 to 209336
Apr 13 19:23:28.369309 (sd-merge)[1637]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 19:23:28.370379 (sd-merge)[1637]: Merged extensions into '/usr'.
Apr 13 19:23:28.380542 systemd[1]: Reloading requested from client PID 1593 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 19:23:28.380574 systemd[1]: Reloading...
Apr 13 19:23:28.570945 zram_generator::config[1666]: No configuration found.
Apr 13 19:23:28.863423 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:23:28.975187 systemd[1]: Reloading finished in 593 ms.
Apr 13 19:23:29.014758 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 19:23:29.016942 ldconfig[1589]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 19:23:29.019730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 19:23:29.023647 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 19:23:29.041825 systemd[1]: Starting ensure-sysext.service...
Apr 13 19:23:29.052709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 19:23:29.067753 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 19:23:29.080486 systemd[1]: Reloading requested from client PID 1716 ('systemctl') (unit ensure-sysext.service)...
Apr 13 19:23:29.080516 systemd[1]: Reloading...
Apr 13 19:23:29.109304 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 19:23:29.110039 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 19:23:29.111941 systemd-tmpfiles[1717]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 19:23:29.112516 systemd-tmpfiles[1717]: ACLs are not supported, ignoring.
Apr 13 19:23:29.112652 systemd-tmpfiles[1717]: ACLs are not supported, ignoring.
Apr 13 19:23:29.118737 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:23:29.118762 systemd-tmpfiles[1717]: Skipping /boot
Apr 13 19:23:29.139932 systemd-tmpfiles[1717]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 19:23:29.139960 systemd-tmpfiles[1717]: Skipping /boot
Apr 13 19:23:29.199034 systemd-udevd[1718]: Using default interface naming scheme 'v255'.
Apr 13 19:23:29.304672 zram_generator::config[1747]: No configuration found.
Apr 13 19:23:29.424734 (udev-worker)[1764]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:29.701120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 19:23:29.737397 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1763)
Apr 13 19:23:29.861054 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 19:23:29.862123 systemd[1]: Reloading finished in 780 ms.
Apr 13 19:23:29.895768 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 19:23:29.912314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 19:23:29.965472 systemd[1]: Finished ensure-sysext.service.
Apr 13 19:23:29.986899 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 19:23:30.014393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 19:23:30.023655 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:30.033681 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 19:23:30.039614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 19:23:30.048698 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 19:23:30.055705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 19:23:30.062716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 19:23:30.071635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 19:23:30.081910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 19:23:30.084513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 19:23:30.089044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 19:23:30.096487 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 19:23:30.108376 lvm[1915]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:23:30.111120 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 19:23:30.140154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 19:23:30.141737 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 19:23:30.150698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 19:23:30.157186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 19:23:30.168215 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 19:23:30.189746 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 19:23:30.193128 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 19:23:30.207431 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 19:23:30.238293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 19:23:30.238653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 19:23:30.261031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 19:23:30.261949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 19:23:30.266332 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 19:23:30.289441 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 19:23:30.305266 lvm[1934]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 19:23:30.306836 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 19:23:30.318994 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 19:23:30.322094 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 19:23:30.324562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 19:23:30.327653 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 19:23:30.328122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 19:23:30.331834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 19:23:30.337255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 19:23:30.401099 augenrules[1954]: No rules
Apr 13 19:23:30.405991 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:30.411127 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 19:23:30.423498 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 19:23:30.427096 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 19:23:30.433219 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 19:23:30.453603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 19:23:30.464425 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 19:23:30.567974 systemd-networkd[1928]: lo: Link UP
Apr 13 19:23:30.567995 systemd-networkd[1928]: lo: Gained carrier
Apr 13 19:23:30.571003 systemd-networkd[1928]: Enumeration completed
Apr 13 19:23:30.571233 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 19:23:30.572207 systemd-networkd[1928]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:30.572216 systemd-networkd[1928]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 19:23:30.578811 systemd-networkd[1928]: eth0: Link UP
Apr 13 19:23:30.579199 systemd-networkd[1928]: eth0: Gained carrier
Apr 13 19:23:30.579234 systemd-networkd[1928]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 19:23:30.584134 systemd-resolved[1929]: Positive Trust Anchors:
Apr 13 19:23:30.584170 systemd-resolved[1929]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 19:23:30.584236 systemd-resolved[1929]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 19:23:30.587077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 19:23:30.596462 systemd-networkd[1928]: eth0: DHCPv4 address 172.31.31.24/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 19:23:30.613486 systemd-resolved[1929]: Defaulting to hostname 'linux'.
Apr 13 19:23:30.616891 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 19:23:30.619460 systemd[1]: Reached target network.target - Network.
Apr 13 19:23:30.621443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 19:23:30.625374 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 19:23:30.627810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 19:23:30.630590 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 19:23:30.633689 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 19:23:30.636220 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 19:23:30.639305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 19:23:30.642118 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 19:23:30.642295 systemd[1]: Reached target paths.target - Path Units. Apr 13 19:23:30.644381 systemd[1]: Reached target timers.target - Timer Units. Apr 13 19:23:30.647791 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 19:23:30.652866 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 19:23:30.661647 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 19:23:30.665050 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 19:23:30.667653 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 19:23:30.669882 systemd[1]: Reached target basic.target - Basic System. Apr 13 19:23:30.671990 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:23:30.672043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 19:23:30.678756 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 19:23:30.687721 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 19:23:30.697678 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 19:23:30.706925 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 19:23:30.714707 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 19:23:30.717715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 19:23:30.729658 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 19:23:30.745806 systemd[1]: Started ntpd.service - Network Time Service. 
Apr 13 19:23:30.755207 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 19:23:30.762017 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 19:23:30.770436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 19:23:30.780669 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 19:23:30.798735 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 19:23:30.803627 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 19:23:30.805045 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 19:23:30.815153 jq[1979]: false Apr 13 19:23:30.815847 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 19:23:30.831623 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 19:23:30.840932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 19:23:30.843479 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 13 19:23:30.903437 coreos-metadata[1977]: Apr 13 19:23:30.902 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.905 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.906 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.906 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.906 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.906 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.908 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.909 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.909 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.911 INFO Fetch failed with 404: resource not found Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.911 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.912 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.913 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.913 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 13 
19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.914 INFO Fetch successful Apr 13 19:23:30.915120 coreos-metadata[1977]: Apr 13 19:23:30.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 13 19:23:30.930736 coreos-metadata[1977]: Apr 13 19:23:30.919 INFO Fetch successful Apr 13 19:23:30.930736 coreos-metadata[1977]: Apr 13 19:23:30.919 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 13 19:23:30.930736 coreos-metadata[1977]: Apr 13 19:23:30.920 INFO Fetch successful Apr 13 19:23:30.942444 jq[1991]: true Apr 13 19:23:30.981269 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 19:23:30.983442 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 19:23:30.988921 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 19:23:30.991452 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 19:23:31.004665 dbus-daemon[1978]: [system] SELinux support is enabled Apr 13 19:23:31.022481 extend-filesystems[1980]: Found loop4 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found loop5 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found loop6 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found loop7 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p1 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p2 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p3 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found usr Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p4 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p6 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p7 Apr 13 19:23:31.022481 extend-filesystems[1980]: Found nvme0n1p9 Apr 13 19:23:31.022481 extend-filesystems[1980]: Checking size of /dev/nvme0n1p9 Apr 13 19:23:31.007764 systemd[1]: 
Started dbus.service - D-Bus System Message Bus. Apr 13 19:23:31.055954 dbus-daemon[1978]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1928 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:31.013651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 19:23:31.068419 dbus-daemon[1978]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 13 19:23:31.013698 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 19:23:31.016645 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 19:23:31.016680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 19:23:31.075674 (ntainerd)[2013]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 19:23:31.096634 jq[2010]: true Apr 13 19:23:31.075868 ntpd[1982]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 17:37:19 UTC 2026 (1): Starting Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: ---------------------------------------------------- Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: corporation. Support and training for ntp-4 are Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: available at https://www.nwtime.org/support Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: ---------------------------------------------------- Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: proto: precision = 0.096 usec (-23) Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: basedate set to 2026-04-01 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: gps base set to 2026-04-05 (week 2413) Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listen normally on 3 eth0 172.31.31.24:123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listen normally on 4 lo [::1]:123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: bind(21) AF_INET6 fe80::460:76ff:fe9f:e60f%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: unable to create socket on eth0 (5) for fe80::460:76ff:fe9f:e60f%2#123 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: failed to init interface for address fe80::460:76ff:fe9f:e60f%2 Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: Listening on routing socket on fd #21 for interface updates Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:31.097420 ntpd[1982]: 13 Apr 19:23:31 ntpd[1982]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:31.075914 
ntpd[1982]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 19:23:31.098674 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 13 19:23:31.075935 ntpd[1982]: ---------------------------------------------------- Apr 13 19:23:31.108066 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 13 19:23:31.075955 ntpd[1982]: ntp-4 is maintained by Network Time Foundation, Apr 13 19:23:31.075974 ntpd[1982]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 19:23:31.075992 ntpd[1982]: corporation. Support and training for ntp-4 are Apr 13 19:23:31.076014 ntpd[1982]: available at https://www.nwtime.org/support Apr 13 19:23:31.076032 ntpd[1982]: ---------------------------------------------------- Apr 13 19:23:31.081041 ntpd[1982]: proto: precision = 0.096 usec (-23) Apr 13 19:23:31.082461 ntpd[1982]: basedate set to 2026-04-01 Apr 13 19:23:31.082495 ntpd[1982]: gps base set to 2026-04-05 (week 2413) Apr 13 19:23:31.086099 ntpd[1982]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 19:23:31.086175 ntpd[1982]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 19:23:31.086528 ntpd[1982]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 19:23:31.086594 ntpd[1982]: Listen normally on 3 eth0 172.31.31.24:123 Apr 13 19:23:31.086660 ntpd[1982]: Listen normally on 4 lo [::1]:123 Apr 13 19:23:31.086733 ntpd[1982]: bind(21) AF_INET6 fe80::460:76ff:fe9f:e60f%2#123 flags 0x11 failed: Cannot assign requested address Apr 13 19:23:31.086772 ntpd[1982]: unable to create socket on eth0 (5) for fe80::460:76ff:fe9f:e60f%2#123 Apr 13 19:23:31.086801 ntpd[1982]: failed to init interface for address fe80::460:76ff:fe9f:e60f%2 Apr 13 19:23:31.086852 ntpd[1982]: Listening on routing socket on fd #21 for interface updates Apr 13 19:23:31.096935 ntpd[1982]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:31.096983 ntpd[1982]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 19:23:31.135440 tar[2006]: linux-arm64/LICENSE Apr 13 
19:23:31.135440 tar[2006]: linux-arm64/helm Apr 13 19:23:31.139099 extend-filesystems[1980]: Resized partition /dev/nvme0n1p9 Apr 13 19:23:31.152696 extend-filesystems[2035]: resize2fs 1.47.1 (20-May-2024) Apr 13 19:23:31.184617 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 13 19:23:31.196685 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 19:23:31.199512 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 19:23:31.256111 update_engine[1990]: I20260413 19:23:31.255693 1990 main.cc:92] Flatcar Update Engine starting Apr 13 19:23:31.263147 systemd-logind[1989]: Watching system buttons on /dev/input/event0 (Power Button) Apr 13 19:23:31.271747 systemd[1]: Started update-engine.service - Update Engine. Apr 13 19:23:31.274103 systemd-logind[1989]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 13 19:23:31.278550 systemd-logind[1989]: New seat seat0. Apr 13 19:23:31.288123 update_engine[1990]: I20260413 19:23:31.277779 1990 update_check_scheduler.cc:74] Next update check in 2m55s Apr 13 19:23:31.295718 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 19:23:31.298693 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 19:23:31.325444 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 13 19:23:31.346173 extend-filesystems[2035]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 13 19:23:31.346173 extend-filesystems[2035]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 13 19:23:31.346173 extend-filesystems[2035]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 13 19:23:31.355720 extend-filesystems[1980]: Resized filesystem in /dev/nvme0n1p9 Apr 13 19:23:31.373945 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 13 19:23:31.375431 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 19:23:31.393689 bash[2057]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:31.403161 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 19:23:31.417861 systemd[1]: Starting sshkeys.service... Apr 13 19:23:31.454718 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 19:23:31.488214 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 19:23:31.552386 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1763) Apr 13 19:23:31.657985 dbus-daemon[1978]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 19:23:31.658708 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 19:23:31.669807 dbus-daemon[1978]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2028 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 19:23:31.701441 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 13 19:23:31.784917 polkitd[2108]: Started polkitd version 121 Apr 13 19:23:31.798410 coreos-metadata[2066]: Apr 13 19:23:31.798 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 13 19:23:31.810104 coreos-metadata[2066]: Apr 13 19:23:31.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 13 19:23:31.810104 coreos-metadata[2066]: Apr 13 19:23:31.808 INFO Fetch successful Apr 13 19:23:31.810104 coreos-metadata[2066]: Apr 13 19:23:31.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 13 19:23:31.810458 coreos-metadata[2066]: Apr 13 19:23:31.810 INFO Fetch successful Apr 13 19:23:31.813016 containerd[2013]: time="2026-04-13T19:23:31.812857441Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 19:23:31.824884 unknown[2066]: wrote ssh authorized keys file for user: core Apr 13 19:23:31.834503 locksmithd[2046]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 19:23:31.879456 polkitd[2108]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 19:23:31.879584 polkitd[2108]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 19:23:31.890398 polkitd[2108]: Finished loading, compiling and executing 2 rules Apr 13 19:23:31.895650 dbus-daemon[1978]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 19:23:31.895955 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 19:23:31.901260 polkitd[2108]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 19:23:31.937716 update-ssh-keys[2128]: Updated "/home/core/.ssh/authorized_keys" Apr 13 19:23:31.939672 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 19:23:31.956652 systemd[1]: Finished sshkeys.service. Apr 13 19:23:31.997958 systemd-resolved[1929]: System hostname changed to 'ip-172-31-31-24'. 
Apr 13 19:23:31.998029 systemd-hostnamed[2028]: Hostname set to (transient) Apr 13 19:23:32.020385 containerd[2013]: time="2026-04-13T19:23:32.018887110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.028743610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.028809802Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.028844878Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029134582Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029169058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029297086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029380594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029701762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029736934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029795074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030374 containerd[2013]: time="2026-04-13T19:23:32.029823490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.030900 containerd[2013]: time="2026-04-13T19:23:32.029991922Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.035137 containerd[2013]: time="2026-04-13T19:23:32.034452982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 19:23:32.035137 containerd[2013]: time="2026-04-13T19:23:32.034741630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 19:23:32.035137 containerd[2013]: time="2026-04-13T19:23:32.034773046Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 19:23:32.035137 containerd[2013]: time="2026-04-13T19:23:32.034974766Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 13 19:23:32.035137 containerd[2013]: time="2026-04-13T19:23:32.035069662Z" level=info msg="metadata content store policy set" policy=shared Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043141642Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043265974Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043301362Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043478182Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043512058Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 19:23:32.044071 containerd[2013]: time="2026-04-13T19:23:32.043800538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.045771598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046027918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046061818Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046091374Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046124578Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046167358Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046199926Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046231378Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046263202Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.047377 containerd[2013]: time="2026-04-13T19:23:32.046293214Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.046327582Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049427218Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049495630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049529554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049560346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049594582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049625818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049659118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049701550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049734322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049784230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049824058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049864270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050305 containerd[2013]: time="2026-04-13T19:23:32.049897030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050981 containerd[2013]: time="2026-04-13T19:23:32.049930102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Apr 13 19:23:32.050981 containerd[2013]: time="2026-04-13T19:23:32.049965334Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 19:23:32.050981 containerd[2013]: time="2026-04-13T19:23:32.050011270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050981 containerd[2013]: time="2026-04-13T19:23:32.050046778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.050981 containerd[2013]: time="2026-04-13T19:23:32.050074198Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.051853174Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.051930886Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.051959686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.051988174Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.052011670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.052043590Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1
Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.052068334Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 19:23:32.055364 containerd[2013]: time="2026-04-13T19:23:32.052093018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 19:23:32.057392 containerd[2013]: time="2026-04-13T19:23:32.056229598Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 19:23:32.057392 containerd[2013]: time="2026-04-13T19:23:32.056381218Z" level=info msg="Connect containerd service"
Apr 13 19:23:32.057392 containerd[2013]: time="2026-04-13T19:23:32.056458318Z" level=info msg="using legacy CRI server"
Apr 13 19:23:32.057392 containerd[2013]: time="2026-04-13T19:23:32.056476630Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 19:23:32.057392 containerd[2013]: time="2026-04-13T19:23:32.056635546Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 19:23:32.063238 containerd[2013]: time="2026-04-13T19:23:32.063076810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 19:23:32.064971 containerd[2013]: time="2026-04-13T19:23:32.063579634Z" level=info msg="Start subscribing containerd event"
Apr 13 19:23:32.071141 containerd[2013]: time="2026-04-13T19:23:32.071075422Z" level=info msg="Start recovering state"
Apr 13 19:23:32.076987 ntpd[1982]: bind(24) AF_INET6 fe80::460:76ff:fe9f:e60f%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:23:32.077704 ntpd[1982]: 13 Apr 19:23:32 ntpd[1982]: bind(24) AF_INET6 fe80::460:76ff:fe9f:e60f%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 19:23:32.077704 ntpd[1982]: 13 Apr 19:23:32 ntpd[1982]: unable to create socket on eth0 (6) for fe80::460:76ff:fe9f:e60f%2#123
Apr 13 19:23:32.077704 ntpd[1982]: 13 Apr 19:23:32 ntpd[1982]: failed to init interface for address fe80::460:76ff:fe9f:e60f%2
Apr 13 19:23:32.077051 ntpd[1982]: unable to create socket on eth0 (6) for fe80::460:76ff:fe9f:e60f%2#123
Apr 13 19:23:32.077081 ntpd[1982]: failed to init interface for address fe80::460:76ff:fe9f:e60f%2
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.078197050Z" level=info msg="Start event monitor"
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.078247414Z" level=info msg="Start snapshots syncer"
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.078271054Z" level=info msg="Start cni network conf syncer for default"
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.078290578Z" level=info msg="Start streaming server"
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.073579294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.078750946Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 19:23:32.083057 containerd[2013]: time="2026-04-13T19:23:32.080424406Z" level=info msg="containerd successfully booted in 0.271320s"
Apr 13 19:23:32.080553 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 19:23:32.108865 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 19:23:32.536533 systemd-networkd[1928]: eth0: Gained IPv6LL
Apr 13 19:23:32.544764 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 19:23:32.548260 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 19:23:32.562477 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 13 19:23:32.574009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:32.584859 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 19:23:32.682168 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 19:23:32.714476 tar[2006]: linux-arm64/README.md
Apr 13 19:23:32.742955 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 19:23:32.759427 amazon-ssm-agent[2184]: Initializing new seelog logger
Apr 13 19:23:32.759427 amazon-ssm-agent[2184]: New Seelog Logger Creation Complete
Apr 13 19:23:32.759427 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.759427 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.759427 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 processing appconfig overrides
Apr 13 19:23:32.760417 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.760512 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.760728 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 processing appconfig overrides
Apr 13 19:23:32.761050 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.761131 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.761357 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 processing appconfig overrides
Apr 13 19:23:32.762646 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO Proxy environment variables:
Apr 13 19:23:32.762750 sshd_keygen[2031]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 19:23:32.766627 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.766763 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 19:23:32.767017 amazon-ssm-agent[2184]: 2026/04/13 19:23:32 processing appconfig overrides
Apr 13 19:23:32.826463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 19:23:32.844203 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 19:23:32.857896 systemd[1]: Started sshd@0-172.31.31.24:22-4.175.71.9:41012.service - OpenSSH per-connection server daemon (4.175.71.9:41012).
Apr 13 19:23:32.866696 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO https_proxy:
Apr 13 19:23:32.871638 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 19:23:32.877614 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 19:23:32.885848 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 19:23:32.935453 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 19:23:32.950933 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 19:23:32.968399 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO http_proxy:
Apr 13 19:23:32.964910 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 19:23:32.974066 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 19:23:33.061679 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO no_proxy:
Apr 13 19:23:33.160463 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO Checking if agent identity type OnPrem can be assumed
Apr 13 19:23:33.260362 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO Checking if agent identity type EC2 can be assumed
Apr 13 19:23:33.357121 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO Agent will take identity from EC2
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] Starting Core Agent
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [Registrar] Starting registrar module
Apr 13 19:23:33.429716 amazon-ssm-agent[2184]: 2026-04-13 19:23:32 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 13 19:23:33.430301 amazon-ssm-agent[2184]: 2026-04-13 19:23:33 INFO [EC2Identity] EC2 registration was successful.
Apr 13 19:23:33.430301 amazon-ssm-agent[2184]: 2026-04-13 19:23:33 INFO [CredentialRefresher] credentialRefresher has started
Apr 13 19:23:33.430301 amazon-ssm-agent[2184]: 2026-04-13 19:23:33 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 13 19:23:33.430301 amazon-ssm-agent[2184]: 2026-04-13 19:23:33 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 13 19:23:33.456113 amazon-ssm-agent[2184]: 2026-04-13 19:23:33 INFO [CredentialRefresher] Next credential rotation will be in 30.716647231266666 minutes
Apr 13 19:23:33.932369 sshd[2212]: Accepted publickey for core from 4.175.71.9 port 41012 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:33.935515 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:33.957093 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 19:23:33.967852 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 19:23:33.975980 systemd-logind[1989]: New session 1 of user core.
Apr 13 19:23:34.010439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 19:23:34.023978 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 19:23:34.039302 (systemd)[2226]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 19:23:34.279619 systemd[2226]: Queued start job for default target default.target.
Apr 13 19:23:34.289856 systemd[2226]: Created slice app.slice - User Application Slice.
Apr 13 19:23:34.290122 systemd[2226]: Reached target paths.target - Paths.
Apr 13 19:23:34.290160 systemd[2226]: Reached target timers.target - Timers.
Apr 13 19:23:34.294620 systemd[2226]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 19:23:34.334975 systemd[2226]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 19:23:34.335445 systemd[2226]: Reached target sockets.target - Sockets.
Apr 13 19:23:34.335659 systemd[2226]: Reached target basic.target - Basic System.
Apr 13 19:23:34.335894 systemd[2226]: Reached target default.target - Main User Target.
Apr 13 19:23:34.336046 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 19:23:34.336103 systemd[2226]: Startup finished in 284ms.
Apr 13 19:23:34.348998 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 19:23:34.461555 amazon-ssm-agent[2184]: 2026-04-13 19:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 13 19:23:34.563688 amazon-ssm-agent[2184]: 2026-04-13 19:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2236) started
Apr 13 19:23:34.597674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:34.602589 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 19:23:34.606819 systemd[1]: Startup finished in 1.182s (kernel) + 9.170s (initrd) + 9.668s (userspace) = 20.021s.
Apr 13 19:23:34.615998 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:34.667860 amazon-ssm-agent[2184]: 2026-04-13 19:23:34 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 13 19:23:35.068732 systemd[1]: Started sshd@1-172.31.31.24:22-4.175.71.9:41016.service - OpenSSH per-connection server daemon (4.175.71.9:41016).
Apr 13 19:23:35.077397 ntpd[1982]: Listen normally on 7 eth0 [fe80::460:76ff:fe9f:e60f%2]:123
Apr 13 19:23:35.077994 ntpd[1982]: 13 Apr 19:23:35 ntpd[1982]: Listen normally on 7 eth0 [fe80::460:76ff:fe9f:e60f%2]:123
Apr 13 19:23:35.578134 kubelet[2247]: E0413 19:23:35.578041 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:35.582891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:35.583222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:35.584179 systemd[1]: kubelet.service: Consumed 1.354s CPU time.
Apr 13 19:23:36.041873 sshd[2261]: Accepted publickey for core from 4.175.71.9 port 41016 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:36.044680 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:36.053519 systemd-logind[1989]: New session 2 of user core.
Apr 13 19:23:36.058595 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 19:23:36.713508 sshd[2261]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:36.718984 systemd[1]: sshd@1-172.31.31.24:22-4.175.71.9:41016.service: Deactivated successfully.
Apr 13 19:23:36.722282 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 19:23:36.726122 systemd-logind[1989]: Session 2 logged out. Waiting for processes to exit.
Apr 13 19:23:36.728194 systemd-logind[1989]: Removed session 2.
Apr 13 19:23:36.886909 systemd[1]: Started sshd@2-172.31.31.24:22-4.175.71.9:36288.service - OpenSSH per-connection server daemon (4.175.71.9:36288).
Apr 13 19:23:37.882809 sshd[2271]: Accepted publickey for core from 4.175.71.9 port 36288 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:37.884524 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:37.891861 systemd-logind[1989]: New session 3 of user core.
Apr 13 19:23:37.902599 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 19:23:37.734697 systemd-resolved[1929]: Clock change detected. Flushing caches.
Apr 13 19:23:37.741582 systemd-journald[1560]: Time jumped backwards, rotating.
Apr 13 19:23:38.215471 sshd[2271]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:38.221670 systemd[1]: sshd@2-172.31.31.24:22-4.175.71.9:36288.service: Deactivated successfully.
Apr 13 19:23:38.224698 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 19:23:38.228351 systemd-logind[1989]: Session 3 logged out. Waiting for processes to exit.
Apr 13 19:23:38.230242 systemd-logind[1989]: Removed session 3.
Apr 13 19:23:38.392725 systemd[1]: Started sshd@3-172.31.31.24:22-4.175.71.9:36292.service - OpenSSH per-connection server daemon (4.175.71.9:36292).
Apr 13 19:23:39.381907 sshd[2279]: Accepted publickey for core from 4.175.71.9 port 36292 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:39.383594 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:39.392038 systemd-logind[1989]: New session 4 of user core.
Apr 13 19:23:39.398326 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 19:23:40.065883 sshd[2279]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:40.072262 systemd[1]: sshd@3-172.31.31.24:22-4.175.71.9:36292.service: Deactivated successfully.
Apr 13 19:23:40.075785 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 19:23:40.078594 systemd-logind[1989]: Session 4 logged out. Waiting for processes to exit.
Apr 13 19:23:40.080604 systemd-logind[1989]: Removed session 4.
Apr 13 19:23:40.256569 systemd[1]: Started sshd@4-172.31.31.24:22-4.175.71.9:36296.service - OpenSSH per-connection server daemon (4.175.71.9:36296).
Apr 13 19:23:41.288898 sshd[2286]: Accepted publickey for core from 4.175.71.9 port 36296 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:41.291516 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:41.299643 systemd-logind[1989]: New session 5 of user core.
Apr 13 19:23:41.307323 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 19:23:41.852425 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 19:23:41.853584 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:41.872510 sudo[2289]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:42.040427 sshd[2286]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:42.047797 systemd[1]: sshd@4-172.31.31.24:22-4.175.71.9:36296.service: Deactivated successfully.
Apr 13 19:23:42.051743 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 19:23:42.053077 systemd-logind[1989]: Session 5 logged out. Waiting for processes to exit.
Apr 13 19:23:42.055872 systemd-logind[1989]: Removed session 5.
Apr 13 19:23:42.213723 systemd[1]: Started sshd@5-172.31.31.24:22-4.175.71.9:36302.service - OpenSSH per-connection server daemon (4.175.71.9:36302).
Apr 13 19:23:43.217522 sshd[2294]: Accepted publickey for core from 4.175.71.9 port 36302 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:43.219271 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:43.226621 systemd-logind[1989]: New session 6 of user core.
Apr 13 19:23:43.238304 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 19:23:43.742777 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 19:23:43.743935 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:43.750372 sudo[2298]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:43.760659 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 19:23:43.761763 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:43.782679 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:43.797765 auditctl[2301]: No rules
Apr 13 19:23:43.798574 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 19:23:43.798927 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:43.806896 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 19:23:43.860168 augenrules[2319]: No rules
Apr 13 19:23:43.862828 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 19:23:43.865403 sudo[2297]: pam_unix(sudo:session): session closed for user root
Apr 13 19:23:44.025441 sshd[2294]: pam_unix(sshd:session): session closed for user core
Apr 13 19:23:44.030780 systemd-logind[1989]: Session 6 logged out. Waiting for processes to exit.
Apr 13 19:23:44.031202 systemd[1]: sshd@5-172.31.31.24:22-4.175.71.9:36302.service: Deactivated successfully.
Apr 13 19:23:44.034021 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 19:23:44.039458 systemd-logind[1989]: Removed session 6.
Apr 13 19:23:44.211155 systemd[1]: Started sshd@6-172.31.31.24:22-4.175.71.9:36316.service - OpenSSH per-connection server daemon (4.175.71.9:36316).
Apr 13 19:23:45.254200 sshd[2327]: Accepted publickey for core from 4.175.71.9 port 36316 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo
Apr 13 19:23:45.256773 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 19:23:45.258029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 19:23:45.267456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:45.273178 systemd-logind[1989]: New session 7 of user core.
Apr 13 19:23:45.277181 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 19:23:45.610021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:45.626532 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:45.699678 kubelet[2338]: E0413 19:23:45.699584 2338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:45.707336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:45.707708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:45.802184 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 19:23:45.802838 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 19:23:46.632559 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 19:23:46.645548 (dockerd)[2362]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 19:23:47.173175 dockerd[2362]: time="2026-04-13T19:23:47.173073370Z" level=info msg="Starting up"
Apr 13 19:23:47.403628 dockerd[2362]: time="2026-04-13T19:23:47.403286484Z" level=info msg="Loading containers: start."
Apr 13 19:23:47.602104 kernel: Initializing XFRM netlink socket
Apr 13 19:23:47.679173 (udev-worker)[2383]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 19:23:47.775711 systemd-networkd[1928]: docker0: Link UP
Apr 13 19:23:47.803526 dockerd[2362]: time="2026-04-13T19:23:47.803476526Z" level=info msg="Loading containers: done."
Apr 13 19:23:47.832153 dockerd[2362]: time="2026-04-13T19:23:47.832090850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 19:23:47.832615 dockerd[2362]: time="2026-04-13T19:23:47.832572038Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 19:23:47.832957 dockerd[2362]: time="2026-04-13T19:23:47.832917350Z" level=info msg="Daemon has completed initialization"
Apr 13 19:23:47.907800 dockerd[2362]: time="2026-04-13T19:23:47.907421594Z" level=info msg="API listen on /run/docker.sock"
Apr 13 19:23:47.909177 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 19:23:48.704802 containerd[2013]: time="2026-04-13T19:23:48.704736434Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 19:23:49.395499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822015344.mount: Deactivated successfully.
Apr 13 19:23:50.855249 containerd[2013]: time="2026-04-13T19:23:50.855190805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:50.856347 containerd[2013]: time="2026-04-13T19:23:50.856297685Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=27283683"
Apr 13 19:23:50.858094 containerd[2013]: time="2026-04-13T19:23:50.858019205Z" level=info msg="ImageCreate event name:\"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:50.863998 containerd[2013]: time="2026-04-13T19:23:50.863932421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:50.866669 containerd[2013]: time="2026-04-13T19:23:50.866618525Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"27280282\" in 2.161821695s"
Apr 13 19:23:50.867222 containerd[2013]: time="2026-04-13T19:23:50.866814413Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:1edd049f11c0655b7dbb2b22afe15b8f3118f2780a0997762857ad3baee29c03\""
Apr 13 19:23:50.871327 containerd[2013]: time="2026-04-13T19:23:50.871248581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 13 19:23:52.286796 containerd[2013]: time="2026-04-13T19:23:52.286731796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:52.288468 containerd[2013]: time="2026-04-13T19:23:52.288363028Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=23551902"
Apr 13 19:23:52.290403 containerd[2013]: time="2026-04-13T19:23:52.289479124Z" level=info msg="ImageCreate event name:\"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:52.295504 containerd[2013]: time="2026-04-13T19:23:52.295438684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:52.298130 containerd[2013]: time="2026-04-13T19:23:52.298040200Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"25029924\" in 1.426709095s"
Apr 13 19:23:52.298238 containerd[2013]: time="2026-04-13T19:23:52.298129036Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:f331204a7439939f31f8e98461868cd4acd177a47c806dfc1dfe17f7725b18c2\""
Apr 13 19:23:52.298919 containerd[2013]: time="2026-04-13T19:23:52.298863064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 13 19:23:53.464542 containerd[2013]: time="2026-04-13T19:23:53.464458506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:53.467287 containerd[2013]: time="2026-04-13T19:23:53.467224446Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=18301233"
Apr 13 19:23:53.469725 containerd[2013]: time="2026-04-13T19:23:53.469658430Z" level=info msg="ImageCreate event name:\"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:53.477166 containerd[2013]: time="2026-04-13T19:23:53.477098910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:53.481122 containerd[2013]: time="2026-04-13T19:23:53.480529050Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"19779273\" in 1.181604162s"
Apr 13 19:23:53.481122 containerd[2013]: time="2026-04-13T19:23:53.480586002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:1dd8e26d7fcd4140e29ed9d408e8237c60ec560237440a99d64ccca50a7b10de\""
Apr 13 19:23:53.481609 containerd[2013]: time="2026-04-13T19:23:53.481507314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 13 19:23:54.946228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320213688.mount: Deactivated successfully.
Apr 13 19:23:55.546362 containerd[2013]: time="2026-04-13T19:23:55.546298052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:55.547885 containerd[2013]: time="2026-04-13T19:23:55.547829648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=28148953"
Apr 13 19:23:55.549977 containerd[2013]: time="2026-04-13T19:23:55.548912396Z" level=info msg="ImageCreate event name:\"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:55.554084 containerd[2013]: time="2026-04-13T19:23:55.552514652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:55.554084 containerd[2013]: time="2026-04-13T19:23:55.553946792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"28147972\" in 2.072223706s"
Apr 13 19:23:55.554084 containerd[2013]: time="2026-04-13T19:23:55.553991156Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:b1cf8dea216dd607b54b086906dc4c9d7b7272b82a517da6eab7e474a5286963\""
Apr 13 19:23:55.557318 containerd[2013]: time="2026-04-13T19:23:55.557262248Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 13 19:23:55.868489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 19:23:55.881392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:23:56.172039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054393614.mount: Deactivated successfully.
Apr 13 19:23:56.245937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:23:56.262560 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:23:56.366634 kubelet[2586]: E0413 19:23:56.366574 2586 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:23:56.372288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:23:56.372590 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:23:57.487108 containerd[2013]: time="2026-04-13T19:23:57.486996694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:57.491753 containerd[2013]: time="2026-04-13T19:23:57.491685526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 13 19:23:57.493995 containerd[2013]: time="2026-04-13T19:23:57.493931854Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:57.502867 containerd[2013]: time="2026-04-13T19:23:57.502780366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:57.505280 containerd[2013]: time="2026-04-13T19:23:57.505232542Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.947656578s"
Apr 13 19:23:57.505580 containerd[2013]: time="2026-04-13T19:23:57.505426486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 13 19:23:57.506164 containerd[2013]: time="2026-04-13T19:23:57.506120410Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 13 19:23:58.014464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689609562.mount: Deactivated successfully.
Apr 13 19:23:58.030302 containerd[2013]: time="2026-04-13T19:23:58.030230828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:58.034267 containerd[2013]: time="2026-04-13T19:23:58.034213532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 13 19:23:58.036444 containerd[2013]: time="2026-04-13T19:23:58.036375452Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:58.041494 containerd[2013]: time="2026-04-13T19:23:58.041427980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:23:58.043508 containerd[2013]: time="2026-04-13T19:23:58.043253780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 537.072626ms"
Apr 13 19:23:58.043508 containerd[2013]: time="2026-04-13T19:23:58.043313756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 13 19:23:58.044061 containerd[2013]: time="2026-04-13T19:23:58.043985276Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 13 19:23:58.639695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684246726.mount: Deactivated successfully.
Apr 13 19:24:00.084663 containerd[2013]: time="2026-04-13T19:24:00.084580847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:24:00.086896 containerd[2013]: time="2026-04-13T19:24:00.086822087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Apr 13 19:24:00.089974 containerd[2013]: time="2026-04-13T19:24:00.088959779Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:24:00.096413 containerd[2013]: time="2026-04-13T19:24:00.096349271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 19:24:00.099676 containerd[2013]: time="2026-04-13T19:24:00.099604655Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.055558035s"
Apr 13 19:24:00.099676 containerd[2013]: time="2026-04-13T19:24:00.099664919Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 13 19:24:01.662933 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 19:24:06.618549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 19:24:06.628567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:24:06.996520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:24:07.006874 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 19:24:07.078002 kubelet[2741]: E0413 19:24:07.077942 2741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 19:24:07.082714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 19:24:07.083290 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 19:24:09.926743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 19:24:09.942011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 19:24:09.994437 systemd[1]: Reloading requested from client PID 2755 ('systemctl') (unit session-7.scope)...
Apr 13 19:24:09.994477 systemd[1]: Reloading...
Apr 13 19:24:10.231527 zram_generator::config[2798]: No configuration found.
Apr 13 19:24:10.473567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:10.646879 systemd[1]: Reloading finished in 651 ms. Apr 13 19:24:10.745700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:10.753603 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:24:10.754032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:10.760596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:11.114315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:11.117668 (kubelet)[2860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:11.189131 kubelet[2860]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:11.189629 kubelet[2860]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:11.189718 kubelet[2860]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 19:24:11.189940 kubelet[2860]: I0413 19:24:11.189890 2860 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:12.884344 kubelet[2860]: I0413 19:24:12.884266 2860 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:12.884344 kubelet[2860]: I0413 19:24:12.884319 2860 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:12.884951 kubelet[2860]: I0413 19:24:12.884705 2860 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:12.933095 kubelet[2860]: E0413 19:24:12.931922 2860 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 19:24:12.934773 kubelet[2860]: I0413 19:24:12.934728 2860 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:12.946660 kubelet[2860]: E0413 19:24:12.946610 2860 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:12.946860 kubelet[2860]: I0413 19:24:12.946835 2860 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:12.953178 kubelet[2860]: I0413 19:24:12.953138 2860 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:12.953916 kubelet[2860]: I0413 19:24:12.953875 2860 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:12.954309 kubelet[2860]: I0413 19:24:12.954006 2860 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:12.954531 kubelet[2860]: I0413 19:24:12.954509 2860 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:24:12.954639 kubelet[2860]: I0413 19:24:12.954620 2860 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:12.955094 kubelet[2860]: I0413 19:24:12.955075 2860 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:12.961259 kubelet[2860]: I0413 19:24:12.961222 2860 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:12.961447 kubelet[2860]: I0413 19:24:12.961427 2860 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:12.961567 kubelet[2860]: I0413 19:24:12.961549 2860 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:12.961681 kubelet[2860]: I0413 19:24:12.961663 2860 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:12.968319 kubelet[2860]: I0413 19:24:12.968265 2860 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:12.969619 kubelet[2860]: I0413 19:24:12.969548 2860 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:12.969881 kubelet[2860]: W0413 19:24:12.969838 2860 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 19:24:12.976692 kubelet[2860]: I0413 19:24:12.975766 2860 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:12.976692 kubelet[2860]: I0413 19:24:12.975841 2860 server.go:1289] "Started kubelet" Apr 13 19:24:12.976692 kubelet[2860]: E0413 19:24:12.976238 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-24&limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:12.979302 kubelet[2860]: E0413 19:24:12.979253 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:12.979768 kubelet[2860]: I0413 19:24:12.979722 2860 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:12.982209 kubelet[2860]: I0413 19:24:12.982111 2860 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:12.982771 kubelet[2860]: I0413 19:24:12.982723 2860 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 19:24:12.983662 kubelet[2860]: I0413 19:24:12.983626 2860 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:12.986171 kubelet[2860]: I0413 19:24:12.986126 2860 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:12.993697 kubelet[2860]: E0413 19:24:12.990828 2860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.24:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.24:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ip-172-31-31-24.18a6010bbd431687 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-24,UID:ip-172-31-31-24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-24,},FirstTimestamp:2026-04-13 19:24:12.975797895 +0000 UTC m=+1.850529839,LastTimestamp:2026-04-13 19:24:12.975797895 +0000 UTC m=+1.850529839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-24,}" Apr 13 19:24:12.996090 kubelet[2860]: I0413 19:24:12.994864 2860 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:13.000232 kubelet[2860]: I0413 19:24:13.000189 2860 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:13.001020 kubelet[2860]: E0413 19:24:13.000979 2860 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-24\" not found" Apr 13 19:24:13.002766 kubelet[2860]: I0413 19:24:13.002718 2860 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:13.003127 kubelet[2860]: I0413 19:24:13.003103 2860 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:13.004390 kubelet[2860]: E0413 19:24:13.004309 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-24?timeout=10s\": dial tcp 172.31.31.24:6443: connect: connection refused" interval="200ms" Apr 13 19:24:13.004511 kubelet[2860]: I0413 19:24:13.004452 2860 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 13 19:24:13.005110 kubelet[2860]: E0413 19:24:13.005036 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:13.007475 kubelet[2860]: I0413 19:24:13.007396 2860 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:13.007660 kubelet[2860]: I0413 19:24:13.007608 2860 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:13.015935 kubelet[2860]: I0413 19:24:13.015875 2860 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:13.017108 kubelet[2860]: E0413 19:24:13.016642 2860 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:13.051758 kubelet[2860]: I0413 19:24:13.050904 2860 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:13.051758 kubelet[2860]: I0413 19:24:13.050935 2860 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:13.051758 kubelet[2860]: I0413 19:24:13.050983 2860 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:13.060272 kubelet[2860]: I0413 19:24:13.059809 2860 policy_none.go:49] "None policy: Start" Apr 13 19:24:13.060272 kubelet[2860]: I0413 19:24:13.059852 2860 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:13.060272 kubelet[2860]: I0413 19:24:13.059876 2860 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:13.060483 kubelet[2860]: I0413 19:24:13.060369 2860 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 13 19:24:13.060483 kubelet[2860]: I0413 19:24:13.060411 2860 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:13.060483 kubelet[2860]: I0413 19:24:13.060446 2860 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:13.060483 kubelet[2860]: I0413 19:24:13.060461 2860 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:13.060971 kubelet[2860]: E0413 19:24:13.060522 2860 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:13.064157 kubelet[2860]: E0413 19:24:13.063987 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:13.071863 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 19:24:13.089323 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 19:24:13.101320 kubelet[2860]: E0413 19:24:13.101263 2860 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-24\" not found" Apr 13 19:24:13.102630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 13 19:24:13.107110 kubelet[2860]: E0413 19:24:13.106749 2860 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:13.107110 kubelet[2860]: I0413 19:24:13.107031 2860 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:13.107309 kubelet[2860]: I0413 19:24:13.107081 2860 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:13.108126 kubelet[2860]: I0413 19:24:13.108072 2860 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:13.110893 kubelet[2860]: E0413 19:24:13.110853 2860 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 19:24:13.111523 kubelet[2860]: E0413 19:24:13.111488 2860 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-24\" not found" Apr 13 19:24:13.194122 kubelet[2860]: E0413 19:24:13.193405 2860 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.24:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.24:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-24.18a6010bbd431687 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-24,UID:ip-172-31-31-24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-24,},FirstTimestamp:2026-04-13 19:24:12.975797895 +0000 UTC m=+1.850529839,LastTimestamp:2026-04-13 19:24:12.975797895 +0000 UTC m=+1.850529839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-24,}" Apr 13 19:24:13.193708 systemd[1]: Created 
slice kubepods-burstable-pode98d8f833c1b75250359aa9484404067.slice - libcontainer container kubepods-burstable-pode98d8f833c1b75250359aa9484404067.slice. Apr 13 19:24:13.205789 kubelet[2860]: E0413 19:24:13.205721 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:13.208094 kubelet[2860]: E0413 19:24:13.207908 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-24?timeout=10s\": dial tcp 172.31.31.24:6443: connect: connection refused" interval="400ms" Apr 13 19:24:13.210231 kubelet[2860]: I0413 19:24:13.209552 2860 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:13.210231 kubelet[2860]: E0413 19:24:13.210040 2860 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.24:6443/api/v1/nodes\": dial tcp 172.31.31.24:6443: connect: connection refused" node="ip-172-31-31-24" Apr 13 19:24:13.213445 systemd[1]: Created slice kubepods-burstable-podda0b03f8946cca958e4b68e75e47b8cd.slice - libcontainer container kubepods-burstable-podda0b03f8946cca958e4b68e75e47b8cd.slice. Apr 13 19:24:13.218333 kubelet[2860]: E0413 19:24:13.217945 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:13.226024 systemd[1]: Created slice kubepods-burstable-podb7fce242da5884bcef6e209c078f32e4.slice - libcontainer container kubepods-burstable-podb7fce242da5884bcef6e209c078f32e4.slice. 
Apr 13 19:24:13.229411 kubelet[2860]: E0413 19:24:13.229375 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:13.304615 kubelet[2860]: I0413 19:24:13.304578 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:13.304795 kubelet[2860]: I0413 19:24:13.304769 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:13.305271 kubelet[2860]: I0413 19:24:13.304912 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:13.305271 kubelet[2860]: I0413 19:24:13.304955 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:13.305271 kubelet[2860]: I0413 19:24:13.304992 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:13.305271 kubelet[2860]: I0413 19:24:13.305079 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:13.305271 kubelet[2860]: I0413 19:24:13.305138 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7fce242da5884bcef6e209c078f32e4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-24\" (UID: \"b7fce242da5884bcef6e209c078f32e4\") " pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:13.305542 kubelet[2860]: I0413 19:24:13.305176 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-ca-certs\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:13.305542 kubelet[2860]: I0413 19:24:13.305226 2860 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:13.413716 kubelet[2860]: I0413 19:24:13.413167 2860 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:13.413716 kubelet[2860]: E0413 19:24:13.413644 2860 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.24:6443/api/v1/nodes\": dial tcp 172.31.31.24:6443: connect: connection refused" node="ip-172-31-31-24" Apr 13 19:24:13.508252 containerd[2013]: time="2026-04-13T19:24:13.508090369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-24,Uid:e98d8f833c1b75250359aa9484404067,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.520982 containerd[2013]: time="2026-04-13T19:24:13.519954913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-24,Uid:da0b03f8946cca958e4b68e75e47b8cd,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.531142 containerd[2013]: time="2026-04-13T19:24:13.531036493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-24,Uid:b7fce242da5884bcef6e209c078f32e4,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:13.609129 kubelet[2860]: E0413 19:24:13.609028 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-24?timeout=10s\": dial tcp 172.31.31.24:6443: connect: connection refused" interval="800ms" Apr 13 19:24:13.793183 kubelet[2860]: E0413 19:24:13.792974 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-24&limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 19:24:13.815850 kubelet[2860]: I0413 19:24:13.815474 2860 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:13.816100 kubelet[2860]: 
E0413 19:24:13.816000 2860 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.24:6443/api/v1/nodes\": dial tcp 172.31.31.24:6443: connect: connection refused" node="ip-172-31-31-24" Apr 13 19:24:14.054481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078732007.mount: Deactivated successfully. Apr 13 19:24:14.070754 containerd[2013]: time="2026-04-13T19:24:14.070674924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:14.072898 containerd[2013]: time="2026-04-13T19:24:14.072816348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 19:24:14.074864 containerd[2013]: time="2026-04-13T19:24:14.074796012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:14.079083 containerd[2013]: time="2026-04-13T19:24:14.077684496Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:14.080369 containerd[2013]: time="2026-04-13T19:24:14.080320740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:14.082290 containerd[2013]: time="2026-04-13T19:24:14.082251000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 13 19:24:14.083540 containerd[2013]: time="2026-04-13T19:24:14.083501292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" 
Apr 13 19:24:14.088524 containerd[2013]: time="2026-04-13T19:24:14.088472424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 19:24:14.092861 containerd[2013]: time="2026-04-13T19:24:14.092812908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.582811ms" Apr 13 19:24:14.096456 containerd[2013]: time="2026-04-13T19:24:14.096378924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.178523ms" Apr 13 19:24:14.097366 containerd[2013]: time="2026-04-13T19:24:14.097307700Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.907131ms" Apr 13 19:24:14.209704 kubelet[2860]: E0413 19:24:14.209640 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 19:24:14.247968 
kubelet[2860]: E0413 19:24:14.247896 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 19:24:14.314695 containerd[2013]: time="2026-04-13T19:24:14.313332577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:14.314695 containerd[2013]: time="2026-04-13T19:24:14.313525465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:14.314695 containerd[2013]: time="2026-04-13T19:24:14.314655601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.315439 containerd[2013]: time="2026-04-13T19:24:14.315226357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.320461 containerd[2013]: time="2026-04-13T19:24:14.320273389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:14.320461 containerd[2013]: time="2026-04-13T19:24:14.320394229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:14.321789 containerd[2013]: time="2026-04-13T19:24:14.321153037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.321789 containerd[2013]: time="2026-04-13T19:24:14.321365425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.329804 containerd[2013]: time="2026-04-13T19:24:14.329626237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:14.333482 containerd[2013]: time="2026-04-13T19:24:14.333148357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:14.333482 containerd[2013]: time="2026-04-13T19:24:14.333207949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.335190 containerd[2013]: time="2026-04-13T19:24:14.333399409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:14.379431 systemd[1]: Started cri-containerd-62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0.scope - libcontainer container 62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0. Apr 13 19:24:14.382550 systemd[1]: Started cri-containerd-e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1.scope - libcontainer container e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1. Apr 13 19:24:14.388711 kubelet[2860]: E0413 19:24:14.388604 2860 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 19:24:14.396931 systemd[1]: Started cri-containerd-08cdf3e4ca3f8a6c7cbb846be7d7e3ab820886f9fa7278eff26bc5ac4e16ff71.scope - libcontainer container 08cdf3e4ca3f8a6c7cbb846be7d7e3ab820886f9fa7278eff26bc5ac4e16ff71. 
Apr 13 19:24:14.410717 kubelet[2860]: E0413 19:24:14.410611 2860 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-24?timeout=10s\": dial tcp 172.31.31.24:6443: connect: connection refused" interval="1.6s" Apr 13 19:24:14.506905 containerd[2013]: time="2026-04-13T19:24:14.506839010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-24,Uid:e98d8f833c1b75250359aa9484404067,Namespace:kube-system,Attempt:0,} returns sandbox id \"08cdf3e4ca3f8a6c7cbb846be7d7e3ab820886f9fa7278eff26bc5ac4e16ff71\"" Apr 13 19:24:14.522888 containerd[2013]: time="2026-04-13T19:24:14.521816174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-24,Uid:da0b03f8946cca958e4b68e75e47b8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1\"" Apr 13 19:24:14.531162 containerd[2013]: time="2026-04-13T19:24:14.530554850Z" level=info msg="CreateContainer within sandbox \"08cdf3e4ca3f8a6c7cbb846be7d7e3ab820886f9fa7278eff26bc5ac4e16ff71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 19:24:14.537122 containerd[2013]: time="2026-04-13T19:24:14.536706962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-24,Uid:b7fce242da5884bcef6e209c078f32e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0\"" Apr 13 19:24:14.550104 containerd[2013]: time="2026-04-13T19:24:14.549850034Z" level=info msg="CreateContainer within sandbox \"e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 19:24:14.557065 containerd[2013]: time="2026-04-13T19:24:14.556993502Z" level=info msg="CreateContainer within sandbox 
\"62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 19:24:14.590287 containerd[2013]: time="2026-04-13T19:24:14.590132679Z" level=info msg="CreateContainer within sandbox \"08cdf3e4ca3f8a6c7cbb846be7d7e3ab820886f9fa7278eff26bc5ac4e16ff71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a07e15e142bec0af8572e618af604f42363b0c7f335a875beaa96449e47cbf0\"" Apr 13 19:24:14.594130 containerd[2013]: time="2026-04-13T19:24:14.592463331Z" level=info msg="CreateContainer within sandbox \"e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44\"" Apr 13 19:24:14.594130 containerd[2013]: time="2026-04-13T19:24:14.592847391Z" level=info msg="StartContainer for \"4a07e15e142bec0af8572e618af604f42363b0c7f335a875beaa96449e47cbf0\"" Apr 13 19:24:14.601105 containerd[2013]: time="2026-04-13T19:24:14.600785643Z" level=info msg="CreateContainer within sandbox \"62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c\"" Apr 13 19:24:14.601253 containerd[2013]: time="2026-04-13T19:24:14.601137807Z" level=info msg="StartContainer for \"faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44\"" Apr 13 19:24:14.613083 containerd[2013]: time="2026-04-13T19:24:14.612569799Z" level=info msg="StartContainer for \"b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c\"" Apr 13 19:24:14.618987 kubelet[2860]: I0413 19:24:14.618473 2860 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:14.618987 kubelet[2860]: E0413 19:24:14.618931 2860 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://172.31.31.24:6443/api/v1/nodes\": dial tcp 172.31.31.24:6443: connect: connection refused" node="ip-172-31-31-24" Apr 13 19:24:14.656490 systemd[1]: Started cri-containerd-faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44.scope - libcontainer container faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44. Apr 13 19:24:14.672440 systemd[1]: Started cri-containerd-4a07e15e142bec0af8572e618af604f42363b0c7f335a875beaa96449e47cbf0.scope - libcontainer container 4a07e15e142bec0af8572e618af604f42363b0c7f335a875beaa96449e47cbf0. Apr 13 19:24:14.726327 systemd[1]: Started cri-containerd-b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c.scope - libcontainer container b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c. Apr 13 19:24:14.777215 containerd[2013]: time="2026-04-13T19:24:14.777155595Z" level=info msg="StartContainer for \"4a07e15e142bec0af8572e618af604f42363b0c7f335a875beaa96449e47cbf0\" returns successfully" Apr 13 19:24:14.830461 containerd[2013]: time="2026-04-13T19:24:14.830163352Z" level=info msg="StartContainer for \"faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44\" returns successfully" Apr 13 19:24:14.868068 containerd[2013]: time="2026-04-13T19:24:14.867998188Z" level=info msg="StartContainer for \"b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c\" returns successfully" Apr 13 19:24:15.089501 kubelet[2860]: E0413 19:24:15.087934 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:15.094797 kubelet[2860]: E0413 19:24:15.094529 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:15.098637 kubelet[2860]: E0413 19:24:15.098589 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:16.099672 kubelet[2860]: E0413 19:24:16.099620 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:16.101167 kubelet[2860]: E0413 19:24:16.101102 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:16.221795 kubelet[2860]: I0413 19:24:16.221749 2860 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:16.247083 update_engine[1990]: I20260413 19:24:16.245090 1990 update_attempter.cc:509] Updating boot flags... Apr 13 19:24:16.392214 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3152) Apr 13 19:24:16.400236 kubelet[2860]: E0413 19:24:16.398901 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:16.764218 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3152) Apr 13 19:24:17.107781 kubelet[2860]: E0413 19:24:17.107730 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:17.183087 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3152) Apr 13 19:24:19.346444 kubelet[2860]: E0413 19:24:19.346379 2860 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:19.985555 kubelet[2860]: I0413 19:24:19.985185 2860 apiserver.go:52] "Watching apiserver" Apr 
13 19:24:20.154844 kubelet[2860]: E0413 19:24:20.154788 2860 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-24\" not found" node="ip-172-31-31-24" Apr 13 19:24:20.187043 kubelet[2860]: I0413 19:24:20.186617 2860 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-24" Apr 13 19:24:20.187043 kubelet[2860]: E0413 19:24:20.186702 2860 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-24\": node \"ip-172-31-31-24\" not found" Apr 13 19:24:20.204722 kubelet[2860]: I0413 19:24:20.204339 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:20.205501 kubelet[2860]: I0413 19:24:20.205418 2860 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:20.291172 kubelet[2860]: E0413 19:24:20.289521 2860 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:20.291172 kubelet[2860]: I0413 19:24:20.289583 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:20.306719 kubelet[2860]: E0413 19:24:20.306658 2860 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-24\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:20.306719 kubelet[2860]: I0413 19:24:20.306709 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:20.319587 kubelet[2860]: E0413 19:24:20.319507 2860 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-24\" is forbidden: 
no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:25.353732 kubelet[2860]: I0413 19:24:25.352836 2860 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:25.400297 systemd[1]: Reloading requested from client PID 3413 ('systemctl') (unit session-7.scope)... Apr 13 19:24:25.400808 systemd[1]: Reloading... Apr 13 19:24:25.618105 zram_generator::config[3456]: No configuration found. Apr 13 19:24:25.888540 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 19:24:26.105284 systemd[1]: Reloading finished in 703 ms. Apr 13 19:24:26.198583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:26.216888 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 19:24:26.217487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:26.217585 systemd[1]: kubelet.service: Consumed 2.806s CPU time, 128.8M memory peak, 0B memory swap peak. Apr 13 19:24:26.231563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 19:24:26.960367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 19:24:26.971758 (kubelet)[3513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 19:24:27.090528 kubelet[3513]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:27.090528 kubelet[3513]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 13 19:24:27.090528 kubelet[3513]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 19:24:27.091776 kubelet[3513]: I0413 19:24:27.090683 3513 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 19:24:27.123942 kubelet[3513]: I0413 19:24:27.123554 3513 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 19:24:27.126101 kubelet[3513]: I0413 19:24:27.124188 3513 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 19:24:27.127102 kubelet[3513]: I0413 19:24:27.126644 3513 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 19:24:27.133144 kubelet[3513]: I0413 19:24:27.133105 3513 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 19:24:27.145688 kubelet[3513]: I0413 19:24:27.145623 3513 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 19:24:27.156251 kubelet[3513]: E0413 19:24:27.155237 3513 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 19:24:27.157633 kubelet[3513]: I0413 19:24:27.157169 3513 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 19:24:27.170191 kubelet[3513]: I0413 19:24:27.170142 3513 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 19:24:27.171014 kubelet[3513]: I0413 19:24:27.170947 3513 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 19:24:27.172110 kubelet[3513]: I0413 19:24:27.171238 3513 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 19:24:27.172620 kubelet[3513]: I0413 19:24:27.172392 3513 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
19:24:27.172620 kubelet[3513]: I0413 19:24:27.172433 3513 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 19:24:27.172620 kubelet[3513]: I0413 19:24:27.172540 3513 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:27.174880 kubelet[3513]: I0413 19:24:27.173770 3513 kubelet.go:480] "Attempting to sync node with API server" Apr 13 19:24:27.174880 kubelet[3513]: I0413 19:24:27.173810 3513 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 19:24:27.174880 kubelet[3513]: I0413 19:24:27.173874 3513 kubelet.go:386] "Adding apiserver pod source" Apr 13 19:24:27.174880 kubelet[3513]: I0413 19:24:27.173913 3513 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 19:24:27.183483 kubelet[3513]: I0413 19:24:27.182904 3513 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 19:24:27.187967 kubelet[3513]: I0413 19:24:27.185950 3513 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 19:24:27.208832 kubelet[3513]: I0413 19:24:27.208770 3513 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 19:24:27.208982 kubelet[3513]: I0413 19:24:27.208857 3513 server.go:1289] "Started kubelet" Apr 13 19:24:27.238460 kubelet[3513]: I0413 19:24:27.238248 3513 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 19:24:27.243617 kubelet[3513]: I0413 19:24:27.243550 3513 server.go:317] "Adding debug handlers to kubelet server" Apr 13 19:24:27.264027 kubelet[3513]: I0413 19:24:27.263911 3513 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 19:24:27.264596 kubelet[3513]: I0413 19:24:27.264527 3513 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 
19:24:27.266152 kubelet[3513]: I0413 19:24:27.265988 3513 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 19:24:27.284214 sudo[3529]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 19:24:27.285125 sudo[3529]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 19:24:27.291209 kubelet[3513]: I0413 19:24:27.286217 3513 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 19:24:27.293030 kubelet[3513]: I0413 19:24:27.292978 3513 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 19:24:27.296908 kubelet[3513]: E0413 19:24:27.296402 3513 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-24\" not found" Apr 13 19:24:27.298527 kubelet[3513]: I0413 19:24:27.297738 3513 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 19:24:27.298527 kubelet[3513]: I0413 19:24:27.298010 3513 reconciler.go:26] "Reconciler: start to sync state" Apr 13 19:24:27.316811 kubelet[3513]: I0413 19:24:27.316756 3513 factory.go:223] Registration of the systemd container factory successfully Apr 13 19:24:27.317313 kubelet[3513]: I0413 19:24:27.317243 3513 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 19:24:27.321478 kubelet[3513]: E0413 19:24:27.321401 3513 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 19:24:27.339354 kubelet[3513]: I0413 19:24:27.339310 3513 factory.go:223] Registration of the containerd container factory successfully Apr 13 19:24:27.381951 kubelet[3513]: I0413 19:24:27.381900 3513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 19:24:27.395039 kubelet[3513]: I0413 19:24:27.394974 3513 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 19:24:27.395803 kubelet[3513]: I0413 19:24:27.395765 3513 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 19:24:27.396602 kubelet[3513]: I0413 19:24:27.396109 3513 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 19:24:27.396602 kubelet[3513]: I0413 19:24:27.396136 3513 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 19:24:27.396602 kubelet[3513]: E0413 19:24:27.396217 3513 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 19:24:27.498629 kubelet[3513]: E0413 19:24:27.498287 3513 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586043 3513 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586158 3513 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586211 3513 state_mem.go:36] "Initialized new in-memory state store" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586442 3513 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586466 3513 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 19:24:27.586846 
kubelet[3513]: I0413 19:24:27.586501 3513 policy_none.go:49] "None policy: Start" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586520 3513 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586541 3513 state_mem.go:35] "Initializing new in-memory state store" Apr 13 19:24:27.586846 kubelet[3513]: I0413 19:24:27.586717 3513 state_mem.go:75] "Updated machine memory state" Apr 13 19:24:27.600409 kubelet[3513]: E0413 19:24:27.599631 3513 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 19:24:27.600409 kubelet[3513]: I0413 19:24:27.599941 3513 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 19:24:27.600409 kubelet[3513]: I0413 19:24:27.599963 3513 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 19:24:27.603316 kubelet[3513]: I0413 19:24:27.603226 3513 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 19:24:27.605480 containerd[2013]: time="2026-04-13T19:24:27.605263671Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 19:24:27.612241 kubelet[3513]: I0413 19:24:27.609465 3513 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 19:24:27.612241 kubelet[3513]: I0413 19:24:27.609764 3513 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 19:24:27.614011 kubelet[3513]: E0413 19:24:27.613974 3513 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 19:24:27.702510 kubelet[3513]: I0413 19:24:27.702469 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:27.703401 kubelet[3513]: I0413 19:24:27.703363 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.704080 kubelet[3513]: I0413 19:24:27.703584 3513 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:27.723087 kubelet[3513]: I0413 19:24:27.721894 3513 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-24" Apr 13 19:24:27.756779 kubelet[3513]: E0413 19:24:27.756572 3513 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-24\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:27.801120 kubelet[3513]: I0413 19:24:27.800925 3513 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-24" Apr 13 19:24:27.801357 kubelet[3513]: I0413 19:24:27.801042 3513 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-24" Apr 13 19:24:27.808728 kubelet[3513]: I0413 19:24:27.807316 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:27.808728 kubelet[3513]: I0413 19:24:27.807377 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.808728 kubelet[3513]: I0413 19:24:27.807426 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.808728 kubelet[3513]: I0413 19:24:27.807468 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.808728 kubelet[3513]: I0413 19:24:27.807504 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-ca-certs\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:27.809123 kubelet[3513]: I0413 19:24:27.807540 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e98d8f833c1b75250359aa9484404067-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-24\" (UID: \"e98d8f833c1b75250359aa9484404067\") " pod="kube-system/kube-apiserver-ip-172-31-31-24" Apr 13 19:24:27.809123 kubelet[3513]: I0413 19:24:27.807575 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.809123 kubelet[3513]: I0413 19:24:27.807611 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da0b03f8946cca958e4b68e75e47b8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-24\" (UID: \"da0b03f8946cca958e4b68e75e47b8cd\") " pod="kube-system/kube-controller-manager-ip-172-31-31-24" Apr 13 19:24:27.809123 kubelet[3513]: I0413 19:24:27.807651 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7fce242da5884bcef6e209c078f32e4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-24\" (UID: \"b7fce242da5884bcef6e209c078f32e4\") " pod="kube-system/kube-scheduler-ip-172-31-31-24" Apr 13 19:24:28.177677 kubelet[3513]: I0413 19:24:28.177633 3513 apiserver.go:52] "Watching apiserver" Apr 13 19:24:28.198246 kubelet[3513]: I0413 19:24:28.198166 3513 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 19:24:28.228017 systemd[1]: Created slice kubepods-besteffort-pod9b3a5dfe_ae13_459d_8715_fd9eda127209.slice - libcontainer container kubepods-besteffort-pod9b3a5dfe_ae13_459d_8715_fd9eda127209.slice. 
Apr 13 19:24:28.313097 kubelet[3513]: I0413 19:24:28.311689 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b3a5dfe-ae13-459d-8715-fd9eda127209-xtables-lock\") pod \"kube-proxy-h2zgx\" (UID: \"9b3a5dfe-ae13-459d-8715-fd9eda127209\") " pod="kube-system/kube-proxy-h2zgx" Apr 13 19:24:28.313097 kubelet[3513]: I0413 19:24:28.311764 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b3a5dfe-ae13-459d-8715-fd9eda127209-lib-modules\") pod \"kube-proxy-h2zgx\" (UID: \"9b3a5dfe-ae13-459d-8715-fd9eda127209\") " pod="kube-system/kube-proxy-h2zgx" Apr 13 19:24:28.313097 kubelet[3513]: I0413 19:24:28.311811 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b3a5dfe-ae13-459d-8715-fd9eda127209-kube-proxy\") pod \"kube-proxy-h2zgx\" (UID: \"9b3a5dfe-ae13-459d-8715-fd9eda127209\") " pod="kube-system/kube-proxy-h2zgx" Apr 13 19:24:28.313097 kubelet[3513]: I0413 19:24:28.311851 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prtms\" (UniqueName: \"kubernetes.io/projected/9b3a5dfe-ae13-459d-8715-fd9eda127209-kube-api-access-prtms\") pod \"kube-proxy-h2zgx\" (UID: \"9b3a5dfe-ae13-459d-8715-fd9eda127209\") " pod="kube-system/kube-proxy-h2zgx" Apr 13 19:24:28.359874 kubelet[3513]: I0413 19:24:28.359209 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-24" podStartSLOduration=1.3591860310000001 podStartE2EDuration="1.359186031s" podCreationTimestamp="2026-04-13 19:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:28.319732047 +0000 
UTC m=+1.334817860" watchObservedRunningTime="2026-04-13 19:24:28.359186031 +0000 UTC m=+1.374271820" Apr 13 19:24:28.390526 kubelet[3513]: I0413 19:24:28.390419 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-24" podStartSLOduration=1.3903932669999999 podStartE2EDuration="1.390393267s" podCreationTimestamp="2026-04-13 19:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:28.362380059 +0000 UTC m=+1.377465848" watchObservedRunningTime="2026-04-13 19:24:28.390393267 +0000 UTC m=+1.405479044" Apr 13 19:24:28.530846 sudo[3529]: pam_unix(sudo:session): session closed for user root Apr 13 19:24:28.540930 containerd[2013]: time="2026-04-13T19:24:28.540861544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2zgx,Uid:9b3a5dfe-ae13-459d-8715-fd9eda127209,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:28.640175 containerd[2013]: time="2026-04-13T19:24:28.638179108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:28.640175 containerd[2013]: time="2026-04-13T19:24:28.638325592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:28.640175 containerd[2013]: time="2026-04-13T19:24:28.638371744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.640175 containerd[2013]: time="2026-04-13T19:24:28.638606824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:28.716499 systemd[1]: Started cri-containerd-4abd186ddc9017db3781f47b5edf7d51a23066fe8af19d8fae2eeb92343279ee.scope - libcontainer container 4abd186ddc9017db3781f47b5edf7d51a23066fe8af19d8fae2eeb92343279ee. Apr 13 19:24:28.781662 containerd[2013]: time="2026-04-13T19:24:28.781426217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h2zgx,Uid:9b3a5dfe-ae13-459d-8715-fd9eda127209,Namespace:kube-system,Attempt:0,} returns sandbox id \"4abd186ddc9017db3781f47b5edf7d51a23066fe8af19d8fae2eeb92343279ee\"" Apr 13 19:24:28.793206 containerd[2013]: time="2026-04-13T19:24:28.793124057Z" level=info msg="CreateContainer within sandbox \"4abd186ddc9017db3781f47b5edf7d51a23066fe8af19d8fae2eeb92343279ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 19:24:28.831175 containerd[2013]: time="2026-04-13T19:24:28.831031085Z" level=info msg="CreateContainer within sandbox \"4abd186ddc9017db3781f47b5edf7d51a23066fe8af19d8fae2eeb92343279ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f5eb87b2b0c7c8c868c6f01a8eacac2528e60b88bbd151643dc78dc109f2641\"" Apr 13 19:24:28.833698 containerd[2013]: time="2026-04-13T19:24:28.833545769Z" level=info msg="StartContainer for \"5f5eb87b2b0c7c8c868c6f01a8eacac2528e60b88bbd151643dc78dc109f2641\"" Apr 13 19:24:28.912420 systemd[1]: Started cri-containerd-5f5eb87b2b0c7c8c868c6f01a8eacac2528e60b88bbd151643dc78dc109f2641.scope - libcontainer container 5f5eb87b2b0c7c8c868c6f01a8eacac2528e60b88bbd151643dc78dc109f2641. Apr 13 19:24:28.987890 containerd[2013]: time="2026-04-13T19:24:28.987806634Z" level=info msg="StartContainer for \"5f5eb87b2b0c7c8c868c6f01a8eacac2528e60b88bbd151643dc78dc109f2641\" returns successfully" Apr 13 19:24:29.517470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533935036.mount: Deactivated successfully. 
Apr 13 19:24:30.477088 kubelet[3513]: I0413 19:24:30.476972 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h2zgx" podStartSLOduration=2.4769492570000002 podStartE2EDuration="2.476949257s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:24:29.504706181 +0000 UTC m=+2.519792030" watchObservedRunningTime="2026-04-13 19:24:30.476949257 +0000 UTC m=+3.492035046" Apr 13 19:24:30.507412 systemd[1]: Created slice kubepods-burstable-poddc9ac902_6769_4eef_9928_68acd68be863.slice - libcontainer container kubepods-burstable-poddc9ac902_6769_4eef_9928_68acd68be863.slice. Apr 13 19:24:30.508471 kubelet[3513]: I0413 19:24:30.508133 3513 status_manager.go:895] "Failed to get status for pod" podUID="dc9ac902-6769-4eef-9928-68acd68be863" pod="kube-system/cilium-tghjh" err="pods \"cilium-tghjh\" is forbidden: User \"system:node:ip-172-31-31-24\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-24' and this object" Apr 13 19:24:30.508471 kubelet[3513]: E0413 19:24:30.508290 3513 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-31-24\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-24' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Apr 13 19:24:30.510106 kubelet[3513]: E0413 19:24:30.508292 3513 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-31-24\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 
'ip-172-31-31-24' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Apr 13 19:24:30.512097 kubelet[3513]: E0413 19:24:30.510486 3513 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-31-24\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-24' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Apr 13 19:24:30.568675 systemd[1]: Created slice kubepods-besteffort-pod3fe4413d_b356_401d_bd7e_0a43bf499d0d.slice - libcontainer container kubepods-besteffort-pod3fe4413d_b356_401d_bd7e_0a43bf499d0d.slice. Apr 13 19:24:30.641200 kubelet[3513]: I0413 19:24:30.641143 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-run\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.641432 kubelet[3513]: I0413 19:24:30.641403 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ms9\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-kube-api-access-d6ms9\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.641835 kubelet[3513]: I0413 19:24:30.641603 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9fg5f\" (UID: \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\") " pod="kube-system/cilium-operator-6c4d7847fc-9fg5f" Apr 13 
19:24:30.641835 kubelet[3513]: I0413 19:24:30.641670 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ac902-6769-4eef-9928-68acd68be863-clustermesh-secrets\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.641835 kubelet[3513]: I0413 19:24:30.641707 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-hubble-tls\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.641835 kubelet[3513]: I0413 19:24:30.641744 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-kernel\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.642707 kubelet[3513]: I0413 19:24:30.641792 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4w5n\" (UniqueName: \"kubernetes.io/projected/3fe4413d-b356-401d-bd7e-0a43bf499d0d-kube-api-access-z4w5n\") pod \"cilium-operator-6c4d7847fc-9fg5f\" (UID: \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\") " pod="kube-system/cilium-operator-6c4d7847fc-9fg5f" Apr 13 19:24:30.642707 kubelet[3513]: I0413 19:24:30.642660 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-xtables-lock\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.642923 kubelet[3513]: I0413 19:24:30.642736 3513 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-etc-cni-netd\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.642923 kubelet[3513]: I0413 19:24:30.642791 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-bpf-maps\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.642923 kubelet[3513]: I0413 19:24:30.642839 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-lib-modules\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.642923 kubelet[3513]: I0413 19:24:30.642890 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.643180 kubelet[3513]: I0413 19:24:30.642940 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-hostproc\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.643180 kubelet[3513]: I0413 19:24:30.643016 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-net\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.643180 kubelet[3513]: I0413 19:24:30.643102 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-cgroup\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:30.643180 kubelet[3513]: I0413 19:24:30.643159 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cni-path\") pod \"cilium-tghjh\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " pod="kube-system/cilium-tghjh" Apr 13 19:24:31.745588 kubelet[3513]: E0413 19:24:31.745519 3513 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 13 19:24:31.746297 kubelet[3513]: E0413 19:24:31.745675 3513 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path podName:dc9ac902-6769-4eef-9928-68acd68be863 nodeName:}" failed. No retries permitted until 2026-04-13 19:24:32.245639136 +0000 UTC m=+5.260724913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path") pod "cilium-tghjh" (UID: "dc9ac902-6769-4eef-9928-68acd68be863") : failed to sync configmap cache: timed out waiting for the condition Apr 13 19:24:31.746297 kubelet[3513]: E0413 19:24:31.745861 3513 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 13 19:24:31.746297 kubelet[3513]: E0413 19:24:31.745921 3513 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path podName:3fe4413d-b356-401d-bd7e-0a43bf499d0d nodeName:}" failed. No retries permitted until 2026-04-13 19:24:32.245903604 +0000 UTC m=+5.260989381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path") pod "cilium-operator-6c4d7847fc-9fg5f" (UID: "3fe4413d-b356-401d-bd7e-0a43bf499d0d") : failed to sync configmap cache: timed out waiting for the condition Apr 13 19:24:32.324438 containerd[2013]: time="2026-04-13T19:24:32.324188827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tghjh,Uid:dc9ac902-6769-4eef-9928-68acd68be863,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:32.364890 containerd[2013]: time="2026-04-13T19:24:32.364717375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:32.364890 containerd[2013]: time="2026-04-13T19:24:32.364838755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:32.364890 containerd[2013]: time="2026-04-13T19:24:32.364877359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:32.365490 containerd[2013]: time="2026-04-13T19:24:32.365112379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:32.377558 containerd[2013]: time="2026-04-13T19:24:32.377491483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9fg5f,Uid:3fe4413d-b356-401d-bd7e-0a43bf499d0d,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:32.427456 systemd[1]: Started cri-containerd-045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13.scope - libcontainer container 045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13. Apr 13 19:24:32.442382 containerd[2013]: time="2026-04-13T19:24:32.441736807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:24:32.442382 containerd[2013]: time="2026-04-13T19:24:32.441854515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:24:32.442382 containerd[2013]: time="2026-04-13T19:24:32.441909175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:32.442382 containerd[2013]: time="2026-04-13T19:24:32.442163119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:24:32.492381 systemd[1]: Started cri-containerd-515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81.scope - libcontainer container 515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81. 
Apr 13 19:24:32.513568 containerd[2013]: time="2026-04-13T19:24:32.513502700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tghjh,Uid:dc9ac902-6769-4eef-9928-68acd68be863,Namespace:kube-system,Attempt:0,} returns sandbox id \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\"" Apr 13 19:24:32.518109 containerd[2013]: time="2026-04-13T19:24:32.516878480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 13 19:24:32.584613 containerd[2013]: time="2026-04-13T19:24:32.584445464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9fg5f,Uid:3fe4413d-b356-401d-bd7e-0a43bf499d0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\"" Apr 13 19:24:40.877858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759279221.mount: Deactivated successfully. Apr 13 19:24:43.637480 containerd[2013]: time="2026-04-13T19:24:43.637205263Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:43.639242 containerd[2013]: time="2026-04-13T19:24:43.639168883Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 13 19:24:43.640097 containerd[2013]: time="2026-04-13T19:24:43.639987295Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:43.644094 containerd[2013]: time="2026-04-13T19:24:43.643696051Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.126752411s" Apr 13 19:24:43.644094 containerd[2013]: time="2026-04-13T19:24:43.643760983Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 13 19:24:43.647397 containerd[2013]: time="2026-04-13T19:24:43.646705687Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 13 19:24:43.658869 containerd[2013]: time="2026-04-13T19:24:43.657955639Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:24:43.682877 containerd[2013]: time="2026-04-13T19:24:43.682819135Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\"" Apr 13 19:24:43.684262 containerd[2013]: time="2026-04-13T19:24:43.684070183Z" level=info msg="StartContainer for \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\"" Apr 13 19:24:43.750354 systemd[1]: Started cri-containerd-d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d.scope - libcontainer container d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d. 
Apr 13 19:24:43.801847 containerd[2013]: time="2026-04-13T19:24:43.801781064Z" level=info msg="StartContainer for \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\" returns successfully" Apr 13 19:24:43.833076 systemd[1]: cri-containerd-d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d.scope: Deactivated successfully. Apr 13 19:24:43.872841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d-rootfs.mount: Deactivated successfully. Apr 13 19:24:45.140481 containerd[2013]: time="2026-04-13T19:24:45.140355642Z" level=info msg="shim disconnected" id=d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d namespace=k8s.io Apr 13 19:24:45.140481 containerd[2013]: time="2026-04-13T19:24:45.140432526Z" level=warning msg="cleaning up after shim disconnected" id=d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d namespace=k8s.io Apr 13 19:24:45.140481 containerd[2013]: time="2026-04-13T19:24:45.140453094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:45.566101 containerd[2013]: time="2026-04-13T19:24:45.564905744Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:24:45.617218 containerd[2013]: time="2026-04-13T19:24:45.617137389Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\"" Apr 13 19:24:45.619129 containerd[2013]: time="2026-04-13T19:24:45.618974625Z" level=info msg="StartContainer for \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\"" Apr 13 19:24:45.704418 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount784248023.mount: Deactivated successfully. Apr 13 19:24:45.736384 systemd[1]: Started cri-containerd-168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985.scope - libcontainer container 168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985. Apr 13 19:24:45.825709 containerd[2013]: time="2026-04-13T19:24:45.825408214Z" level=info msg="StartContainer for \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\" returns successfully" Apr 13 19:24:45.856670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 19:24:45.857287 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 19:24:45.857402 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:24:45.870250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 19:24:45.870729 systemd[1]: cri-containerd-168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985.scope: Deactivated successfully. Apr 13 19:24:45.907153 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 19:24:45.952637 containerd[2013]: time="2026-04-13T19:24:45.952276510Z" level=info msg="shim disconnected" id=168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985 namespace=k8s.io Apr 13 19:24:45.952637 containerd[2013]: time="2026-04-13T19:24:45.952379674Z" level=warning msg="cleaning up after shim disconnected" id=168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985 namespace=k8s.io Apr 13 19:24:45.952637 containerd[2013]: time="2026-04-13T19:24:45.952400710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:46.506329 containerd[2013]: time="2026-04-13T19:24:46.506270013Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:46.509185 containerd[2013]: time="2026-04-13T19:24:46.509021157Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 13 19:24:46.510445 containerd[2013]: time="2026-04-13T19:24:46.510305997Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 19:24:46.514677 containerd[2013]: time="2026-04-13T19:24:46.513509709Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.866735514s" Apr 13 19:24:46.514677 containerd[2013]: time="2026-04-13T19:24:46.513584241Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 13 19:24:46.519039 containerd[2013]: time="2026-04-13T19:24:46.518964453Z" level=info msg="CreateContainer within sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 13 19:24:46.548740 containerd[2013]: time="2026-04-13T19:24:46.548556501Z" level=info msg="CreateContainer within sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\"" Apr 13 19:24:46.552150 containerd[2013]: time="2026-04-13T19:24:46.551999685Z" level=info msg="StartContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\"" Apr 13 19:24:46.586945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985-rootfs.mount: Deactivated successfully. 
Apr 13 19:24:46.591245 containerd[2013]: time="2026-04-13T19:24:46.590305018Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:24:46.637467 containerd[2013]: time="2026-04-13T19:24:46.637385602Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\"" Apr 13 19:24:46.642899 containerd[2013]: time="2026-04-13T19:24:46.642817162Z" level=info msg="StartContainer for \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\"" Apr 13 19:24:46.673828 systemd[1]: Started cri-containerd-6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246.scope - libcontainer container 6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246. Apr 13 19:24:46.731517 systemd[1]: Started cri-containerd-57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3.scope - libcontainer container 57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3. Apr 13 19:24:46.765871 containerd[2013]: time="2026-04-13T19:24:46.765384586Z" level=info msg="StartContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" returns successfully" Apr 13 19:24:46.811160 containerd[2013]: time="2026-04-13T19:24:46.811004603Z" level=info msg="StartContainer for \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\" returns successfully" Apr 13 19:24:46.822869 systemd[1]: cri-containerd-57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3.scope: Deactivated successfully. 
Apr 13 19:24:46.956187 containerd[2013]: time="2026-04-13T19:24:46.956034899Z" level=info msg="shim disconnected" id=57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3 namespace=k8s.io Apr 13 19:24:46.956187 containerd[2013]: time="2026-04-13T19:24:46.956179187Z" level=warning msg="cleaning up after shim disconnected" id=57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3 namespace=k8s.io Apr 13 19:24:46.956537 containerd[2013]: time="2026-04-13T19:24:46.956205047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:47.587334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3-rootfs.mount: Deactivated successfully. Apr 13 19:24:47.592078 containerd[2013]: time="2026-04-13T19:24:47.592001386Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:24:47.619862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670360354.mount: Deactivated successfully. Apr 13 19:24:47.623688 containerd[2013]: time="2026-04-13T19:24:47.623613371Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\"" Apr 13 19:24:47.627563 containerd[2013]: time="2026-04-13T19:24:47.627483167Z" level=info msg="StartContainer for \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\"" Apr 13 19:24:47.705346 systemd[1]: Started cri-containerd-197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96.scope - libcontainer container 197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96. 
Apr 13 19:24:47.806779 containerd[2013]: time="2026-04-13T19:24:47.806676204Z" level=info msg="StartContainer for \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\" returns successfully" Apr 13 19:24:47.807832 systemd[1]: cri-containerd-197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96.scope: Deactivated successfully. Apr 13 19:24:47.899333 containerd[2013]: time="2026-04-13T19:24:47.899185728Z" level=info msg="shim disconnected" id=197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96 namespace=k8s.io Apr 13 19:24:47.899333 containerd[2013]: time="2026-04-13T19:24:47.899328132Z" level=warning msg="cleaning up after shim disconnected" id=197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96 namespace=k8s.io Apr 13 19:24:47.899333 containerd[2013]: time="2026-04-13T19:24:47.899350788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:24:48.038955 kubelet[3513]: I0413 19:24:48.038559 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9fg5f" podStartSLOduration=4.111596924 podStartE2EDuration="18.038534505s" podCreationTimestamp="2026-04-13 19:24:30 +0000 UTC" firstStartedPulling="2026-04-13 19:24:32.587853896 +0000 UTC m=+5.602939673" lastFinishedPulling="2026-04-13 19:24:46.514791477 +0000 UTC m=+19.529877254" observedRunningTime="2026-04-13 19:24:47.835655556 +0000 UTC m=+20.850741369" watchObservedRunningTime="2026-04-13 19:24:48.038534505 +0000 UTC m=+21.053620330" Apr 13 19:24:48.585471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96-rootfs.mount: Deactivated successfully. 
Apr 13 19:24:48.603705 containerd[2013]: time="2026-04-13T19:24:48.603645720Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:24:48.632253 containerd[2013]: time="2026-04-13T19:24:48.632035548Z" level=info msg="CreateContainer within sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\"" Apr 13 19:24:48.634485 containerd[2013]: time="2026-04-13T19:24:48.632829540Z" level=info msg="StartContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\"" Apr 13 19:24:48.633449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3350903374.mount: Deactivated successfully. Apr 13 19:24:48.683695 systemd[1]: Started cri-containerd-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97.scope - libcontainer container 651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97. Apr 13 19:24:48.744078 containerd[2013]: time="2026-04-13T19:24:48.743697000Z" level=info msg="StartContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" returns successfully" Apr 13 19:24:48.950447 kubelet[3513]: I0413 19:24:48.950408 3513 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 19:24:49.043757 systemd[1]: Created slice kubepods-burstable-pod955464e5_78ba_4060_8457_28daa59737d1.slice - libcontainer container kubepods-burstable-pod955464e5_78ba_4060_8457_28daa59737d1.slice. Apr 13 19:24:49.062554 systemd[1]: Created slice kubepods-burstable-pod9abdd903_08d1_469e_bc55_8f4f1e6cbab8.slice - libcontainer container kubepods-burstable-pod9abdd903_08d1_469e_bc55_8f4f1e6cbab8.slice. 
Apr 13 19:24:49.183515 kubelet[3513]: I0413 19:24:49.183449 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/955464e5-78ba-4060-8457-28daa59737d1-config-volume\") pod \"coredns-674b8bbfcf-cshp9\" (UID: \"955464e5-78ba-4060-8457-28daa59737d1\") " pod="kube-system/coredns-674b8bbfcf-cshp9" Apr 13 19:24:49.184310 kubelet[3513]: I0413 19:24:49.184271 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6mh\" (UniqueName: \"kubernetes.io/projected/9abdd903-08d1-469e-bc55-8f4f1e6cbab8-kube-api-access-jc6mh\") pod \"coredns-674b8bbfcf-h4kgz\" (UID: \"9abdd903-08d1-469e-bc55-8f4f1e6cbab8\") " pod="kube-system/coredns-674b8bbfcf-h4kgz" Apr 13 19:24:49.184453 kubelet[3513]: I0413 19:24:49.184428 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whb2t\" (UniqueName: \"kubernetes.io/projected/955464e5-78ba-4060-8457-28daa59737d1-kube-api-access-whb2t\") pod \"coredns-674b8bbfcf-cshp9\" (UID: \"955464e5-78ba-4060-8457-28daa59737d1\") " pod="kube-system/coredns-674b8bbfcf-cshp9" Apr 13 19:24:49.184702 kubelet[3513]: I0413 19:24:49.184674 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9abdd903-08d1-469e-bc55-8f4f1e6cbab8-config-volume\") pod \"coredns-674b8bbfcf-h4kgz\" (UID: \"9abdd903-08d1-469e-bc55-8f4f1e6cbab8\") " pod="kube-system/coredns-674b8bbfcf-h4kgz" Apr 13 19:24:49.355837 containerd[2013]: time="2026-04-13T19:24:49.354255587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cshp9,Uid:955464e5-78ba-4060-8457-28daa59737d1,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:49.377100 containerd[2013]: time="2026-04-13T19:24:49.375479411Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-h4kgz,Uid:9abdd903-08d1-469e-bc55-8f4f1e6cbab8,Namespace:kube-system,Attempt:0,}" Apr 13 19:24:52.040118 (udev-worker)[4289]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:52.047018 systemd-networkd[1928]: cilium_host: Link UP Apr 13 19:24:52.048696 systemd-networkd[1928]: cilium_net: Link UP Apr 13 19:24:52.049117 systemd-networkd[1928]: cilium_net: Gained carrier Apr 13 19:24:52.049477 systemd-networkd[1928]: cilium_host: Gained carrier Apr 13 19:24:52.050572 (udev-worker)[4323]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:52.146920 systemd-networkd[1928]: cilium_host: Gained IPv6LL Apr 13 19:24:52.259687 systemd[1]: run-containerd-runc-k8s.io-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97-runc.2QOkgg.mount: Deactivated successfully. Apr 13 19:24:52.322890 (udev-worker)[4339]: Network interface NamePolicy= disabled on kernel command line. Apr 13 19:24:52.341324 systemd-networkd[1928]: cilium_vxlan: Link UP Apr 13 19:24:52.341345 systemd-networkd[1928]: cilium_vxlan: Gained carrier Apr 13 19:24:52.538359 systemd-networkd[1928]: cilium_net: Gained IPv6LL Apr 13 19:24:52.915396 kernel: NET: Registered PF_ALG protocol family Apr 13 19:24:53.794374 systemd-networkd[1928]: cilium_vxlan: Gained IPv6LL Apr 13 19:24:54.542752 systemd-networkd[1928]: lxc_health: Link UP Apr 13 19:24:54.543499 systemd-networkd[1928]: lxc_health: Gained carrier Apr 13 19:24:54.968134 kernel: eth0: renamed from tmp025c2 Apr 13 19:24:54.970165 systemd-networkd[1928]: lxcb723f35df245: Link UP Apr 13 19:24:54.974767 systemd-networkd[1928]: lxcb723f35df245: Gained carrier Apr 13 19:24:55.024489 systemd-networkd[1928]: lxc81b03c7369e2: Link UP Apr 13 19:24:55.036102 kernel: eth0: renamed from tmp30f8c Apr 13 19:24:55.041881 (udev-worker)[4707]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:24:55.044307 systemd-networkd[1928]: lxc81b03c7369e2: Gained carrier Apr 13 19:24:55.907355 systemd-networkd[1928]: lxc_health: Gained IPv6LL Apr 13 19:24:56.226393 systemd-networkd[1928]: lxcb723f35df245: Gained IPv6LL Apr 13 19:24:56.372665 kubelet[3513]: I0413 19:24:56.371764 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tghjh" podStartSLOduration=15.242129215 podStartE2EDuration="26.371745486s" podCreationTimestamp="2026-04-13 19:24:30 +0000 UTC" firstStartedPulling="2026-04-13 19:24:32.516004232 +0000 UTC m=+5.531090021" lastFinishedPulling="2026-04-13 19:24:43.645620515 +0000 UTC m=+16.660706292" observedRunningTime="2026-04-13 19:24:49.706555117 +0000 UTC m=+22.721640954" watchObservedRunningTime="2026-04-13 19:24:56.371745486 +0000 UTC m=+29.386831263" Apr 13 19:24:56.994602 systemd-networkd[1928]: lxc81b03c7369e2: Gained IPv6LL Apr 13 19:24:59.234666 systemd[1]: run-containerd-runc-k8s.io-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97-runc.vyLB9r.mount: Deactivated successfully. 
Apr 13 19:24:59.734745 ntpd[1982]: Listen normally on 8 cilium_host 192.168.0.207:123 Apr 13 19:24:59.734879 ntpd[1982]: Listen normally on 9 cilium_net [fe80::f04f:3bff:fe53:4038%4]:123 Apr 13 19:24:59.734962 ntpd[1982]: Listen normally on 10 cilium_host [fe80::ac6e:53ff:fe18:526f%5]:123 Apr 13 19:24:59.735031 ntpd[1982]: Listen normally on 11 cilium_vxlan [fe80::5069:ecff:fea2:ceda%6]:123 Apr 13 19:24:59.735177 ntpd[1982]: Listen normally on 12 lxc_health [fe80::dc7b:c7ff:fe50:ac64%8]:123 Apr 13 19:24:59.735251 ntpd[1982]: Listen normally on 13 lxcb723f35df245 [fe80::f820:69ff:fe19:136%10]:123 Apr 13 19:24:59.735319 ntpd[1982]: Listen normally on 14 lxc81b03c7369e2 [fe80::2c21:65ff:fece:5ebb%12]:123 Apr 13 19:25:01.856625 systemd[1]: run-containerd-runc-k8s.io-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97-runc.2pMRvY.mount: Deactivated successfully.
Apr 13 19:25:02.861696 sudo[2346]: pam_unix(sudo:session): session closed for user root Apr 13 19:25:03.029779 sshd[2327]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:03.038813 systemd[1]: sshd@6-172.31.31.24:22-4.175.71.9:36316.service: Deactivated successfully. Apr 13 19:25:03.045583 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 19:25:03.046363 systemd[1]: session-7.scope: Consumed 14.590s CPU time, 155.6M memory peak, 0B memory swap peak. Apr 13 19:25:03.051005 systemd-logind[1989]: Session 7 logged out. Waiting for processes to exit. Apr 13 19:25:03.054328 systemd-logind[1989]: Removed session 7. Apr 13 19:25:04.926691 containerd[2013]: time="2026-04-13T19:25:04.923927945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:04.926691 containerd[2013]: time="2026-04-13T19:25:04.926258669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:04.926691 containerd[2013]: time="2026-04-13T19:25:04.926304869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:04.926691 containerd[2013]: time="2026-04-13T19:25:04.926483933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:04.938274 containerd[2013]: time="2026-04-13T19:25:04.937161353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:25:04.938274 containerd[2013]: time="2026-04-13T19:25:04.937246661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:25:04.938274 containerd[2013]: time="2026-04-13T19:25:04.937285253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:04.940162 containerd[2013]: time="2026-04-13T19:25:04.939253565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:25:05.031099 systemd[1]: Started cri-containerd-025c258f65d274f8cc00ea6471e73176ad4cc95c2692b8e49c5b98a28c9ad35b.scope - libcontainer container 025c258f65d274f8cc00ea6471e73176ad4cc95c2692b8e49c5b98a28c9ad35b. Apr 13 19:25:05.037360 systemd[1]: Started cri-containerd-30f8c164d27e9658285eb27230ba420950d56216eeab14d6f6eb3b386e269148.scope - libcontainer container 30f8c164d27e9658285eb27230ba420950d56216eeab14d6f6eb3b386e269148. Apr 13 19:25:05.173220 containerd[2013]: time="2026-04-13T19:25:05.173040470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cshp9,Uid:955464e5-78ba-4060-8457-28daa59737d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"025c258f65d274f8cc00ea6471e73176ad4cc95c2692b8e49c5b98a28c9ad35b\"" Apr 13 19:25:05.188681 containerd[2013]: time="2026-04-13T19:25:05.187599350Z" level=info msg="CreateContainer within sandbox \"025c258f65d274f8cc00ea6471e73176ad4cc95c2692b8e49c5b98a28c9ad35b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:05.195856 containerd[2013]: time="2026-04-13T19:25:05.195776042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h4kgz,Uid:9abdd903-08d1-469e-bc55-8f4f1e6cbab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"30f8c164d27e9658285eb27230ba420950d56216eeab14d6f6eb3b386e269148\"" Apr 13 19:25:05.216101 containerd[2013]: time="2026-04-13T19:25:05.215023850Z" level=info msg="CreateContainer within sandbox 
\"30f8c164d27e9658285eb27230ba420950d56216eeab14d6f6eb3b386e269148\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 19:25:05.260653 containerd[2013]: time="2026-04-13T19:25:05.258894170Z" level=info msg="CreateContainer within sandbox \"025c258f65d274f8cc00ea6471e73176ad4cc95c2692b8e49c5b98a28c9ad35b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"140a63213a6703812c95dd4b7b9516b3552741284f4d2053ddb288d585d69eab\"" Apr 13 19:25:05.261036 containerd[2013]: time="2026-04-13T19:25:05.260966630Z" level=info msg="StartContainer for \"140a63213a6703812c95dd4b7b9516b3552741284f4d2053ddb288d585d69eab\"" Apr 13 19:25:05.279904 containerd[2013]: time="2026-04-13T19:25:05.279114974Z" level=info msg="CreateContainer within sandbox \"30f8c164d27e9658285eb27230ba420950d56216eeab14d6f6eb3b386e269148\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aaf33a54fbb72627bae8b2a946c2f51f93e9acb67177121219f7012e36f431a6\"" Apr 13 19:25:05.280697 containerd[2013]: time="2026-04-13T19:25:05.280533074Z" level=info msg="StartContainer for \"aaf33a54fbb72627bae8b2a946c2f51f93e9acb67177121219f7012e36f431a6\"" Apr 13 19:25:05.344145 systemd[1]: Started cri-containerd-140a63213a6703812c95dd4b7b9516b3552741284f4d2053ddb288d585d69eab.scope - libcontainer container 140a63213a6703812c95dd4b7b9516b3552741284f4d2053ddb288d585d69eab. Apr 13 19:25:05.392429 systemd[1]: Started cri-containerd-aaf33a54fbb72627bae8b2a946c2f51f93e9acb67177121219f7012e36f431a6.scope - libcontainer container aaf33a54fbb72627bae8b2a946c2f51f93e9acb67177121219f7012e36f431a6. 
Apr 13 19:25:05.477971 containerd[2013]: time="2026-04-13T19:25:05.477244671Z" level=info msg="StartContainer for \"140a63213a6703812c95dd4b7b9516b3552741284f4d2053ddb288d585d69eab\" returns successfully" Apr 13 19:25:05.493315 containerd[2013]: time="2026-04-13T19:25:05.493216911Z" level=info msg="StartContainer for \"aaf33a54fbb72627bae8b2a946c2f51f93e9acb67177121219f7012e36f431a6\" returns successfully" Apr 13 19:25:05.698598 kubelet[3513]: I0413 19:25:05.698100 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h4kgz" podStartSLOduration=37.698077624 podStartE2EDuration="37.698077624s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:05.6948598 +0000 UTC m=+38.709945601" watchObservedRunningTime="2026-04-13 19:25:05.698077624 +0000 UTC m=+38.713163437" Apr 13 19:25:05.943593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112315257.mount: Deactivated successfully. Apr 13 19:25:47.337659 systemd[1]: Started sshd@7-172.31.31.24:22-4.175.71.9:37574.service - OpenSSH per-connection server daemon (4.175.71.9:37574). Apr 13 19:25:48.339263 sshd[5020]: Accepted publickey for core from 4.175.71.9 port 37574 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:48.342174 sshd[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:48.353130 systemd-logind[1989]: New session 8 of user core. Apr 13 19:25:48.361819 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 19:25:49.161958 sshd[5020]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:49.168763 systemd[1]: sshd@7-172.31.31.24:22-4.175.71.9:37574.service: Deactivated successfully. Apr 13 19:25:49.174372 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 13 19:25:49.176291 systemd-logind[1989]: Session 8 logged out. Waiting for processes to exit. Apr 13 19:25:49.179342 systemd-logind[1989]: Removed session 8. Apr 13 19:25:54.342714 systemd[1]: Started sshd@8-172.31.31.24:22-4.175.71.9:37578.service - OpenSSH per-connection server daemon (4.175.71.9:37578). Apr 13 19:25:55.355038 sshd[5034]: Accepted publickey for core from 4.175.71.9 port 37578 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:25:55.358948 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:25:55.372317 systemd-logind[1989]: New session 9 of user core. Apr 13 19:25:55.382461 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 19:25:56.178219 sshd[5034]: pam_unix(sshd:session): session closed for user core Apr 13 19:25:56.185843 systemd[1]: sshd@8-172.31.31.24:22-4.175.71.9:37578.service: Deactivated successfully. Apr 13 19:25:56.190795 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 19:25:56.195160 systemd-logind[1989]: Session 9 logged out. Waiting for processes to exit. Apr 13 19:25:56.197562 systemd-logind[1989]: Removed session 9. Apr 13 19:26:01.375637 systemd[1]: Started sshd@9-172.31.31.24:22-4.175.71.9:45878.service - OpenSSH per-connection server daemon (4.175.71.9:45878). Apr 13 19:26:02.383519 sshd[5050]: Accepted publickey for core from 4.175.71.9 port 45878 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:02.386291 sshd[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:02.394571 systemd-logind[1989]: New session 10 of user core. Apr 13 19:26:02.403359 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 19:26:03.199243 sshd[5050]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:03.206641 systemd-logind[1989]: Session 10 logged out. Waiting for processes to exit. 
Apr 13 19:26:03.208418 systemd[1]: sshd@9-172.31.31.24:22-4.175.71.9:45878.service: Deactivated successfully. Apr 13 19:26:03.213037 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 19:26:03.216983 systemd-logind[1989]: Removed session 10. Apr 13 19:26:03.363495 systemd[1]: Started sshd@10-172.31.31.24:22-4.175.71.9:45884.service - OpenSSH per-connection server daemon (4.175.71.9:45884). Apr 13 19:26:04.333927 sshd[5064]: Accepted publickey for core from 4.175.71.9 port 45884 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:04.336922 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:04.348114 systemd-logind[1989]: New session 11 of user core. Apr 13 19:26:04.355363 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 19:26:05.221493 sshd[5064]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:05.232189 systemd[1]: sshd@10-172.31.31.24:22-4.175.71.9:45884.service: Deactivated successfully. Apr 13 19:26:05.238602 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 19:26:05.240798 systemd-logind[1989]: Session 11 logged out. Waiting for processes to exit. Apr 13 19:26:05.244645 systemd-logind[1989]: Removed session 11. Apr 13 19:26:05.411584 systemd[1]: Started sshd@11-172.31.31.24:22-4.175.71.9:45892.service - OpenSSH per-connection server daemon (4.175.71.9:45892). Apr 13 19:26:06.413032 sshd[5075]: Accepted publickey for core from 4.175.71.9 port 45892 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:06.415990 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:06.425718 systemd-logind[1989]: New session 12 of user core. Apr 13 19:26:06.432398 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 13 19:26:07.244244 sshd[5075]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:07.251867 systemd[1]: sshd@11-172.31.31.24:22-4.175.71.9:45892.service: Deactivated successfully. Apr 13 19:26:07.258768 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 19:26:07.260952 systemd-logind[1989]: Session 12 logged out. Waiting for processes to exit. Apr 13 19:26:07.264307 systemd-logind[1989]: Removed session 12. Apr 13 19:26:12.423627 systemd[1]: Started sshd@12-172.31.31.24:22-4.175.71.9:46038.service - OpenSSH per-connection server daemon (4.175.71.9:46038). Apr 13 19:26:13.427970 sshd[5090]: Accepted publickey for core from 4.175.71.9 port 46038 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:13.430825 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:13.440984 systemd-logind[1989]: New session 13 of user core. Apr 13 19:26:13.450750 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 19:26:14.245788 sshd[5090]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:14.251778 systemd[1]: sshd@12-172.31.31.24:22-4.175.71.9:46038.service: Deactivated successfully. Apr 13 19:26:14.258393 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 19:26:14.262527 systemd-logind[1989]: Session 13 logged out. Waiting for processes to exit. Apr 13 19:26:14.265976 systemd-logind[1989]: Removed session 13. Apr 13 19:26:19.426625 systemd[1]: Started sshd@13-172.31.31.24:22-4.175.71.9:44284.service - OpenSSH per-connection server daemon (4.175.71.9:44284). Apr 13 19:26:20.428099 sshd[5104]: Accepted publickey for core from 4.175.71.9 port 44284 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:20.430376 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:20.437999 systemd-logind[1989]: New session 14 of user core. 
Apr 13 19:26:20.443372 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 19:26:21.241574 sshd[5104]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:21.248595 systemd[1]: sshd@13-172.31.31.24:22-4.175.71.9:44284.service: Deactivated successfully. Apr 13 19:26:21.253614 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 19:26:21.256146 systemd-logind[1989]: Session 14 logged out. Waiting for processes to exit. Apr 13 19:26:21.258855 systemd-logind[1989]: Removed session 14. Apr 13 19:26:21.410101 systemd[1]: Started sshd@14-172.31.31.24:22-4.175.71.9:44296.service - OpenSSH per-connection server daemon (4.175.71.9:44296). Apr 13 19:26:22.378646 sshd[5116]: Accepted publickey for core from 4.175.71.9 port 44296 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:22.380127 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:22.389211 systemd-logind[1989]: New session 15 of user core. Apr 13 19:26:22.398450 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 19:26:23.251973 sshd[5116]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:23.260636 systemd-logind[1989]: Session 15 logged out. Waiting for processes to exit. Apr 13 19:26:23.263030 systemd[1]: sshd@14-172.31.31.24:22-4.175.71.9:44296.service: Deactivated successfully. Apr 13 19:26:23.267035 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 19:26:23.270695 systemd-logind[1989]: Removed session 15. Apr 13 19:26:23.430574 systemd[1]: Started sshd@15-172.31.31.24:22-4.175.71.9:44306.service - OpenSSH per-connection server daemon (4.175.71.9:44306). 
Apr 13 19:26:24.410132 sshd[5127]: Accepted publickey for core from 4.175.71.9 port 44306 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:24.414419 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:24.422716 systemd-logind[1989]: New session 16 of user core. Apr 13 19:26:24.436396 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 19:26:26.035866 sshd[5127]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:26.043216 systemd[1]: sshd@15-172.31.31.24:22-4.175.71.9:44306.service: Deactivated successfully. Apr 13 19:26:26.048790 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 19:26:26.051978 systemd-logind[1989]: Session 16 logged out. Waiting for processes to exit. Apr 13 19:26:26.055125 systemd-logind[1989]: Removed session 16. Apr 13 19:26:26.216591 systemd[1]: Started sshd@16-172.31.31.24:22-4.175.71.9:35468.service - OpenSSH per-connection server daemon (4.175.71.9:35468). Apr 13 19:26:26.243570 update_engine[1990]: I20260413 19:26:26.243485 1990 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 13 19:26:26.243570 update_engine[1990]: I20260413 19:26:26.243562 1990 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 13 19:26:26.244189 update_engine[1990]: I20260413 19:26:26.243964 1990 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246212 1990 omaha_request_params.cc:62] Current group set to lts Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246390 1990 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246413 1990 update_attempter.cc:643] Scheduling an action processor start. 
Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246446 1990 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246513 1990 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246644 1990 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246665 1990 omaha_request_action.cc:272] Request: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: Apr 13 19:26:26.247144 update_engine[1990]: I20260413 19:26:26.246683 1990 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:26.249017 locksmithd[2046]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 13 19:26:26.249916 update_engine[1990]: I20260413 19:26:26.249837 1990 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:26.250583 update_engine[1990]: I20260413 19:26:26.250490 1990 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:26:26.258892 update_engine[1990]: E20260413 19:26:26.258780 1990 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:26.259083 update_engine[1990]: I20260413 19:26:26.258942 1990 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 13 19:26:27.206271 sshd[5145]: Accepted publickey for core from 4.175.71.9 port 35468 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:27.209346 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:27.218927 systemd-logind[1989]: New session 17 of user core. Apr 13 19:26:27.229562 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 19:26:28.299813 sshd[5145]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:28.310584 systemd[1]: sshd@16-172.31.31.24:22-4.175.71.9:35468.service: Deactivated successfully. Apr 13 19:26:28.317700 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 19:26:28.323794 systemd-logind[1989]: Session 17 logged out. Waiting for processes to exit. Apr 13 19:26:28.330039 systemd-logind[1989]: Removed session 17. Apr 13 19:26:28.489651 systemd[1]: Started sshd@17-172.31.31.24:22-4.175.71.9:35482.service - OpenSSH per-connection server daemon (4.175.71.9:35482). Apr 13 19:26:29.524970 sshd[5158]: Accepted publickey for core from 4.175.71.9 port 35482 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:29.528336 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:29.539111 systemd-logind[1989]: New session 18 of user core. Apr 13 19:26:29.547444 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 19:26:30.352413 sshd[5158]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:30.362137 systemd[1]: sshd@17-172.31.31.24:22-4.175.71.9:35482.service: Deactivated successfully. 
Apr 13 19:26:30.367342 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 19:26:30.369041 systemd-logind[1989]: Session 18 logged out. Waiting for processes to exit. Apr 13 19:26:30.371792 systemd-logind[1989]: Removed session 18. Apr 13 19:26:35.536579 systemd[1]: Started sshd@18-172.31.31.24:22-4.175.71.9:56814.service - OpenSSH per-connection server daemon (4.175.71.9:56814). Apr 13 19:26:36.249904 update_engine[1990]: I20260413 19:26:36.249212 1990 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:36.249904 update_engine[1990]: I20260413 19:26:36.249549 1990 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:36.249904 update_engine[1990]: I20260413 19:26:36.249823 1990 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:26:36.250976 update_engine[1990]: E20260413 19:26:36.250928 1990 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:36.251280 update_engine[1990]: I20260413 19:26:36.251245 1990 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 13 19:26:36.581218 sshd[5175]: Accepted publickey for core from 4.175.71.9 port 56814 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:36.583671 sshd[5175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:36.591422 systemd-logind[1989]: New session 19 of user core. Apr 13 19:26:36.599384 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 19:26:37.428169 sshd[5175]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:37.435209 systemd[1]: sshd@18-172.31.31.24:22-4.175.71.9:56814.service: Deactivated successfully. Apr 13 19:26:37.439645 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 19:26:37.441489 systemd-logind[1989]: Session 19 logged out. Waiting for processes to exit. Apr 13 19:26:37.444562 systemd-logind[1989]: Removed session 19. 
Apr 13 19:26:42.591617 systemd[1]: Started sshd@19-172.31.31.24:22-4.175.71.9:56816.service - OpenSSH per-connection server daemon (4.175.71.9:56816). Apr 13 19:26:43.560645 sshd[5189]: Accepted publickey for core from 4.175.71.9 port 56816 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:43.564122 sshd[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:43.573178 systemd-logind[1989]: New session 20 of user core. Apr 13 19:26:43.578403 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 19:26:44.352650 sshd[5189]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:44.364790 systemd[1]: sshd@19-172.31.31.24:22-4.175.71.9:56816.service: Deactivated successfully. Apr 13 19:26:44.369154 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 19:26:44.372533 systemd-logind[1989]: Session 20 logged out. Waiting for processes to exit. Apr 13 19:26:44.374769 systemd-logind[1989]: Removed session 20. Apr 13 19:26:44.543614 systemd[1]: Started sshd@20-172.31.31.24:22-4.175.71.9:56832.service - OpenSSH per-connection server daemon (4.175.71.9:56832). Apr 13 19:26:45.583462 sshd[5202]: Accepted publickey for core from 4.175.71.9 port 56832 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:45.586433 sshd[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:45.596388 systemd-logind[1989]: New session 21 of user core. Apr 13 19:26:45.604377 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 13 19:26:46.244200 update_engine[1990]: I20260413 19:26:46.244093 1990 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:46.244783 update_engine[1990]: I20260413 19:26:46.244462 1990 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:46.244783 update_engine[1990]: I20260413 19:26:46.244753 1990 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:26:46.247202 update_engine[1990]: E20260413 19:26:46.247114 1990 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:46.247344 update_engine[1990]: I20260413 19:26:46.247239 1990 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 13 19:26:49.630754 kubelet[3513]: I0413 19:26:49.628884 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cshp9" podStartSLOduration=141.628863177 podStartE2EDuration="2m21.628863177s" podCreationTimestamp="2026-04-13 19:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:25:05.756447581 +0000 UTC m=+38.771533394" watchObservedRunningTime="2026-04-13 19:26:49.628863177 +0000 UTC m=+142.643948966" Apr 13 19:26:49.662264 containerd[2013]: time="2026-04-13T19:26:49.661218693Z" level=info msg="StopContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" with timeout 30 (s)" Apr 13 19:26:49.665882 containerd[2013]: time="2026-04-13T19:26:49.664754037Z" level=info msg="Stop container \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" with signal terminated" Apr 13 19:26:49.689284 systemd[1]: run-containerd-runc-k8s.io-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97-runc.hCWjCL.mount: Deactivated successfully. 
Apr 13 19:26:49.709379 containerd[2013]: time="2026-04-13T19:26:49.709304661Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 19:26:49.712657 systemd[1]: cri-containerd-6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246.scope: Deactivated successfully. Apr 13 19:26:49.731995 containerd[2013]: time="2026-04-13T19:26:49.731632029Z" level=info msg="StopContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" with timeout 2 (s)" Apr 13 19:26:49.733494 containerd[2013]: time="2026-04-13T19:26:49.733445673Z" level=info msg="Stop container \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" with signal terminated" Apr 13 19:26:49.750541 systemd-networkd[1928]: lxc_health: Link DOWN Apr 13 19:26:49.750564 systemd-networkd[1928]: lxc_health: Lost carrier Apr 13 19:26:49.787664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246-rootfs.mount: Deactivated successfully. Apr 13 19:26:49.795417 systemd[1]: cri-containerd-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97.scope: Deactivated successfully. Apr 13 19:26:49.795900 systemd[1]: cri-containerd-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97.scope: Consumed 16.654s CPU time. 
Apr 13 19:26:49.810985 containerd[2013]: time="2026-04-13T19:26:49.810756106Z" level=info msg="shim disconnected" id=6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246 namespace=k8s.io Apr 13 19:26:49.810985 containerd[2013]: time="2026-04-13T19:26:49.810862510Z" level=warning msg="cleaning up after shim disconnected" id=6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246 namespace=k8s.io Apr 13 19:26:49.810985 containerd[2013]: time="2026-04-13T19:26:49.810910834Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:49.844759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97-rootfs.mount: Deactivated successfully. Apr 13 19:26:49.845628 containerd[2013]: time="2026-04-13T19:26:49.845485774Z" level=info msg="shim disconnected" id=651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97 namespace=k8s.io Apr 13 19:26:49.847255 containerd[2013]: time="2026-04-13T19:26:49.846909826Z" level=warning msg="cleaning up after shim disconnected" id=651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97 namespace=k8s.io Apr 13 19:26:49.847255 containerd[2013]: time="2026-04-13T19:26:49.846954226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:49.861390 containerd[2013]: time="2026-04-13T19:26:49.861321178Z" level=info msg="StopContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" returns successfully" Apr 13 19:26:49.863342 containerd[2013]: time="2026-04-13T19:26:49.863255266Z" level=info msg="StopPodSandbox for \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\"" Apr 13 19:26:49.863497 containerd[2013]: time="2026-04-13T19:26:49.863349010Z" level=info msg="Container to stop \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.871039 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81-shm.mount: Deactivated successfully. Apr 13 19:26:49.887082 containerd[2013]: time="2026-04-13T19:26:49.885432874Z" level=info msg="StopContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" returns successfully" Apr 13 19:26:49.887843 systemd[1]: cri-containerd-515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81.scope: Deactivated successfully. Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888079126Z" level=info msg="StopPodSandbox for \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\"" Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888144994Z" level=info msg="Container to stop \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888170110Z" level=info msg="Container to stop \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888193906Z" level=info msg="Container to stop \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888235426Z" level=info msg="Container to stop \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.888585 containerd[2013]: time="2026-04-13T19:26:49.888260494Z" level=info msg="Container to stop \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 13 19:26:49.904811 systemd[1]: 
cri-containerd-045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13.scope: Deactivated successfully. Apr 13 19:26:49.953277 containerd[2013]: time="2026-04-13T19:26:49.952737538Z" level=info msg="shim disconnected" id=515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81 namespace=k8s.io Apr 13 19:26:49.953277 containerd[2013]: time="2026-04-13T19:26:49.952926166Z" level=warning msg="cleaning up after shim disconnected" id=515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81 namespace=k8s.io Apr 13 19:26:49.953277 containerd[2013]: time="2026-04-13T19:26:49.952971514Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:49.977276 containerd[2013]: time="2026-04-13T19:26:49.977168290Z" level=info msg="shim disconnected" id=045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13 namespace=k8s.io Apr 13 19:26:49.977514 containerd[2013]: time="2026-04-13T19:26:49.977274010Z" level=warning msg="cleaning up after shim disconnected" id=045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13 namespace=k8s.io Apr 13 19:26:49.977514 containerd[2013]: time="2026-04-13T19:26:49.977298238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:50.001342 containerd[2013]: time="2026-04-13T19:26:50.001124538Z" level=info msg="TearDown network for sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" successfully" Apr 13 19:26:50.001342 containerd[2013]: time="2026-04-13T19:26:50.001177878Z" level=info msg="StopPodSandbox for \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" returns successfully" Apr 13 19:26:50.011028 containerd[2013]: time="2026-04-13T19:26:50.010962607Z" level=info msg="TearDown network for sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" successfully" Apr 13 19:26:50.011028 containerd[2013]: time="2026-04-13T19:26:50.011014147Z" level=info msg="StopPodSandbox for 
\"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" returns successfully" Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083458 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-cgroup\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083525 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-hostproc\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083559 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-net\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083604 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4w5n\" (UniqueName: \"kubernetes.io/projected/3fe4413d-b356-401d-bd7e-0a43bf499d0d-kube-api-access-z4w5n\") pod \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\" (UID: \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\") " Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083646 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-hubble-tls\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084117 kubelet[3513]: I0413 19:26:50.083682 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-run\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 19:26:50.083716 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ac902-6769-4eef-9928-68acd68be863-clustermesh-secrets\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 19:26:50.083749 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-etc-cni-netd\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 19:26:50.083785 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-xtables-lock\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 19:26:50.083815 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-bpf-maps\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 19:26:50.083850 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.084572 kubelet[3513]: I0413 
19:26:50.083882 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cni-path\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.087187 kubelet[3513]: I0413 19:26:50.083915 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-lib-modules\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.087187 kubelet[3513]: I0413 19:26:50.083953 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6ms9\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-kube-api-access-d6ms9\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.087187 kubelet[3513]: I0413 19:26:50.084009 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path\") pod \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\" (UID: \"3fe4413d-b356-401d-bd7e-0a43bf499d0d\") " Apr 13 19:26:50.087187 kubelet[3513]: I0413 19:26:50.084079 3513 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-kernel\") pod \"dc9ac902-6769-4eef-9928-68acd68be863\" (UID: \"dc9ac902-6769-4eef-9928-68acd68be863\") " Apr 13 19:26:50.087187 kubelet[3513]: I0413 19:26:50.084264 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod 
"dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.087528 kubelet[3513]: I0413 19:26:50.084333 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.087528 kubelet[3513]: I0413 19:26:50.084372 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.087528 kubelet[3513]: I0413 19:26:50.084408 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.087528 kubelet[3513]: I0413 19:26:50.084923 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.087528 kubelet[3513]: I0413 19:26:50.085562 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.088302 kubelet[3513]: I0413 19:26:50.088247 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.088564 kubelet[3513]: I0413 19:26:50.088520 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.093947 kubelet[3513]: I0413 19:26:50.093895 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.095476 kubelet[3513]: I0413 19:26:50.094651 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 13 19:26:50.099420 kubelet[3513]: I0413 19:26:50.099343 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe4413d-b356-401d-bd7e-0a43bf499d0d-kube-api-access-z4w5n" (OuterVolumeSpecName: "kube-api-access-z4w5n") pod "3fe4413d-b356-401d-bd7e-0a43bf499d0d" (UID: "3fe4413d-b356-401d-bd7e-0a43bf499d0d"). InnerVolumeSpecName "kube-api-access-z4w5n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:50.104295 kubelet[3513]: I0413 19:26:50.104213 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9ac902-6769-4eef-9928-68acd68be863-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 19:26:50.108388 kubelet[3513]: I0413 19:26:50.108329 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:50.112603 kubelet[3513]: I0413 19:26:50.112526 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-kube-api-access-d6ms9" (OuterVolumeSpecName: "kube-api-access-d6ms9") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "kube-api-access-d6ms9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:50.112879 kubelet[3513]: I0413 19:26:50.112758 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc9ac902-6769-4eef-9928-68acd68be863" (UID: "dc9ac902-6769-4eef-9928-68acd68be863"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 19:26:50.115960 kubelet[3513]: I0413 19:26:50.115887 3513 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fe4413d-b356-401d-bd7e-0a43bf499d0d" (UID: "3fe4413d-b356-401d-bd7e-0a43bf499d0d"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185122 3513 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-xtables-lock\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185175 3513 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-bpf-maps\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185198 3513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc9ac902-6769-4eef-9928-68acd68be863-cilium-config-path\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185229 3513 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cni-path\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185250 3513 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-lib-modules\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185267 kubelet[3513]: I0413 19:26:50.185270 3513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6ms9\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-kube-api-access-d6ms9\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185292 3513 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fe4413d-b356-401d-bd7e-0a43bf499d0d-cilium-config-path\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 
19:26:50.185656 kubelet[3513]: I0413 19:26:50.185313 3513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-kernel\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185337 3513 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-cgroup\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185358 3513 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-hostproc\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185378 3513 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-host-proc-sys-net\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185411 3513 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4w5n\" (UniqueName: \"kubernetes.io/projected/3fe4413d-b356-401d-bd7e-0a43bf499d0d-kube-api-access-z4w5n\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185433 3513 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc9ac902-6769-4eef-9928-68acd68be863-hubble-tls\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.185656 kubelet[3513]: I0413 19:26:50.185454 3513 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-cilium-run\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.186034 kubelet[3513]: I0413 19:26:50.185477 3513 
reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc9ac902-6769-4eef-9928-68acd68be863-clustermesh-secrets\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.186034 kubelet[3513]: I0413 19:26:50.185498 3513 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc9ac902-6769-4eef-9928-68acd68be863-etc-cni-netd\") on node \"ip-172-31-31-24\" DevicePath \"\"" Apr 13 19:26:50.668815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81-rootfs.mount: Deactivated successfully. Apr 13 19:26:50.670340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13-rootfs.mount: Deactivated successfully. Apr 13 19:26:50.670495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13-shm.mount: Deactivated successfully. Apr 13 19:26:50.670670 systemd[1]: var-lib-kubelet-pods-dc9ac902\x2d6769\x2d4eef\x2d9928\x2d68acd68be863-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 13 19:26:50.670819 systemd[1]: var-lib-kubelet-pods-dc9ac902\x2d6769\x2d4eef\x2d9928\x2d68acd68be863-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 13 19:26:50.670970 systemd[1]: var-lib-kubelet-pods-dc9ac902\x2d6769\x2d4eef\x2d9928\x2d68acd68be863-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd6ms9.mount: Deactivated successfully. Apr 13 19:26:50.671197 systemd[1]: var-lib-kubelet-pods-3fe4413d\x2db356\x2d401d\x2dbd7e\x2d0a43bf499d0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4w5n.mount: Deactivated successfully. 
Apr 13 19:26:50.982008 kubelet[3513]: I0413 19:26:50.981826 3513 scope.go:117] "RemoveContainer" containerID="6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246" Apr 13 19:26:50.989766 containerd[2013]: time="2026-04-13T19:26:50.989604227Z" level=info msg="RemoveContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\"" Apr 13 19:26:51.004853 systemd[1]: Removed slice kubepods-besteffort-pod3fe4413d_b356_401d_bd7e_0a43bf499d0d.slice - libcontainer container kubepods-besteffort-pod3fe4413d_b356_401d_bd7e_0a43bf499d0d.slice. Apr 13 19:26:51.014869 containerd[2013]: time="2026-04-13T19:26:51.013999268Z" level=info msg="RemoveContainer for \"6b8c864d5c56e8439a72f0d5e17e536a2bcd2fb76b8dd6f9092a3f682f1ff246\" returns successfully" Apr 13 19:26:51.018693 kubelet[3513]: I0413 19:26:51.017229 3513 scope.go:117] "RemoveContainer" containerID="651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97" Apr 13 19:26:51.034568 systemd[1]: Removed slice kubepods-burstable-poddc9ac902_6769_4eef_9928_68acd68be863.slice - libcontainer container kubepods-burstable-poddc9ac902_6769_4eef_9928_68acd68be863.slice. Apr 13 19:26:51.034846 systemd[1]: kubepods-burstable-poddc9ac902_6769_4eef_9928_68acd68be863.slice: Consumed 16.826s CPU time. 
Apr 13 19:26:51.041871 containerd[2013]: time="2026-04-13T19:26:51.041789444Z" level=info msg="RemoveContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\"" Apr 13 19:26:51.054784 containerd[2013]: time="2026-04-13T19:26:51.054721892Z" level=info msg="RemoveContainer for \"651156e691d5b36b8975345309b3f45106bbd5e5d775e6759fd9191114f92d97\" returns successfully" Apr 13 19:26:51.055565 kubelet[3513]: I0413 19:26:51.055522 3513 scope.go:117] "RemoveContainer" containerID="197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96" Apr 13 19:26:51.060651 containerd[2013]: time="2026-04-13T19:26:51.060588464Z" level=info msg="RemoveContainer for \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\"" Apr 13 19:26:51.067624 containerd[2013]: time="2026-04-13T19:26:51.067553528Z" level=info msg="RemoveContainer for \"197a9f63099dee4e0488b5d050456cef7a83f177c7910e89abf72c34c4594c96\" returns successfully" Apr 13 19:26:51.069735 kubelet[3513]: I0413 19:26:51.068637 3513 scope.go:117] "RemoveContainer" containerID="57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3" Apr 13 19:26:51.073499 containerd[2013]: time="2026-04-13T19:26:51.072137768Z" level=info msg="RemoveContainer for \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\"" Apr 13 19:26:51.081447 containerd[2013]: time="2026-04-13T19:26:51.081377276Z" level=info msg="RemoveContainer for \"57b84f41ca7b72828d3b2e310b5bdbcc4d875278f2a01f1aab6b15848a4ce1b3\" returns successfully" Apr 13 19:26:51.081984 kubelet[3513]: I0413 19:26:51.081742 3513 scope.go:117] "RemoveContainer" containerID="168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985" Apr 13 19:26:51.084284 containerd[2013]: time="2026-04-13T19:26:51.084232280Z" level=info msg="RemoveContainer for \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\"" Apr 13 19:26:51.090394 containerd[2013]: time="2026-04-13T19:26:51.090254924Z" level=info msg="RemoveContainer 
for \"168725ffa3b806f1813299c3d71a57cfb116c1e18aa4f897160ff1904414f985\" returns successfully" Apr 13 19:26:51.090623 kubelet[3513]: I0413 19:26:51.090577 3513 scope.go:117] "RemoveContainer" containerID="d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d" Apr 13 19:26:51.093231 containerd[2013]: time="2026-04-13T19:26:51.093178340Z" level=info msg="RemoveContainer for \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\"" Apr 13 19:26:51.099290 containerd[2013]: time="2026-04-13T19:26:51.099191768Z" level=info msg="RemoveContainer for \"d0038765cf1e72f057e4ac34529f3009de587a9f35690f504b252211e2f2bb5d\" returns successfully" Apr 13 19:26:51.403964 kubelet[3513]: I0413 19:26:51.403907 3513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe4413d-b356-401d-bd7e-0a43bf499d0d" path="/var/lib/kubelet/pods/3fe4413d-b356-401d-bd7e-0a43bf499d0d/volumes" Apr 13 19:26:51.405298 kubelet[3513]: I0413 19:26:51.404949 3513 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc9ac902-6769-4eef-9928-68acd68be863" path="/var/lib/kubelet/pods/dc9ac902-6769-4eef-9928-68acd68be863/volumes" Apr 13 19:26:51.724500 sshd[5202]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:51.732066 systemd[1]: sshd@20-172.31.31.24:22-4.175.71.9:56832.service: Deactivated successfully. Apr 13 19:26:51.738936 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 19:26:51.741430 systemd[1]: session-21.scope: Consumed 2.794s CPU time. Apr 13 19:26:51.743961 systemd-logind[1989]: Session 21 logged out. Waiting for processes to exit. Apr 13 19:26:51.748501 systemd-logind[1989]: Removed session 21. Apr 13 19:26:51.903600 systemd[1]: Started sshd@21-172.31.31.24:22-4.175.71.9:38202.service - OpenSSH per-connection server daemon (4.175.71.9:38202). 
Apr 13 19:26:52.651895 kubelet[3513]: E0413 19:26:52.651807 3513 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:26:52.734705 ntpd[1982]: Deleting interface #12 lxc_health, fe80::dc7b:c7ff:fe50:ac64%8#123, interface stats: received=0, sent=0, dropped=0, active_time=113 secs Apr 13 19:26:52.735211 ntpd[1982]: 13 Apr 19:26:52 ntpd[1982]: Deleting interface #12 lxc_health, fe80::dc7b:c7ff:fe50:ac64%8#123, interface stats: received=0, sent=0, dropped=0, active_time=113 secs Apr 13 19:26:52.900830 sshd[5364]: Accepted publickey for core from 4.175.71.9 port 38202 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:52.903761 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:52.912532 systemd-logind[1989]: New session 22 of user core. Apr 13 19:26:52.918364 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 19:26:54.450151 systemd[1]: Created slice kubepods-burstable-pod51cfa784_4527_4abe_988e_ade6e8e97bca.slice - libcontainer container kubepods-burstable-pod51cfa784_4527_4abe_988e_ade6e8e97bca.slice. 
Apr 13 19:26:54.514264 kubelet[3513]: I0413 19:26:54.514209 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-cilium-cgroup\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.516699 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51cfa784-4527-4abe-988e-ade6e8e97bca-cilium-config-path\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.516884 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-lib-modules\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.516981 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51cfa784-4527-4abe-988e-ade6e8e97bca-clustermesh-secrets\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.517098 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/51cfa784-4527-4abe-988e-ade6e8e97bca-cilium-ipsec-secrets\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.517196 3513 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51cfa784-4527-4abe-988e-ade6e8e97bca-hubble-tls\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517422 kubelet[3513]: I0413 19:26:54.517287 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-bpf-maps\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.517844 kubelet[3513]: I0413 19:26:54.517359 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-etc-cni-netd\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518115 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89r8l\" (UniqueName: \"kubernetes.io/projected/51cfa784-4527-4abe-988e-ade6e8e97bca-kube-api-access-89r8l\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518333 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-cilium-run\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518427 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-host-proc-sys-net\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518571 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-hostproc\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518672 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-host-proc-sys-kernel\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.518819 kubelet[3513]: I0413 19:26:54.518776 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-cni-path\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.519548 kubelet[3513]: I0413 19:26:54.519407 3513 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51cfa784-4527-4abe-988e-ade6e8e97bca-xtables-lock\") pod \"cilium-wdcxc\" (UID: \"51cfa784-4527-4abe-988e-ade6e8e97bca\") " pod="kube-system/cilium-wdcxc" Apr 13 19:26:54.538556 sshd[5364]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:54.545639 systemd[1]: sshd@21-172.31.31.24:22-4.175.71.9:38202.service: Deactivated successfully. Apr 13 19:26:54.551608 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 13 19:26:54.555263 systemd-logind[1989]: Session 22 logged out. Waiting for processes to exit. Apr 13 19:26:54.558449 systemd-logind[1989]: Removed session 22. Apr 13 19:26:54.705623 systemd[1]: Started sshd@22-172.31.31.24:22-4.175.71.9:38214.service - OpenSSH per-connection server daemon (4.175.71.9:38214). Apr 13 19:26:54.759161 containerd[2013]: time="2026-04-13T19:26:54.759016562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdcxc,Uid:51cfa784-4527-4abe-988e-ade6e8e97bca,Namespace:kube-system,Attempt:0,}" Apr 13 19:26:54.801457 containerd[2013]: time="2026-04-13T19:26:54.800723450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 19:26:54.801457 containerd[2013]: time="2026-04-13T19:26:54.800929466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 19:26:54.801457 containerd[2013]: time="2026-04-13T19:26:54.800979014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:26:54.801457 containerd[2013]: time="2026-04-13T19:26:54.801184298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 19:26:54.840382 systemd[1]: Started cri-containerd-f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1.scope - libcontainer container f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1. 
Apr 13 19:26:54.890869 containerd[2013]: time="2026-04-13T19:26:54.890771655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdcxc,Uid:51cfa784-4527-4abe-988e-ade6e8e97bca,Namespace:kube-system,Attempt:0,} returns sandbox id \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\"" Apr 13 19:26:54.908636 containerd[2013]: time="2026-04-13T19:26:54.908566107Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 13 19:26:54.930355 containerd[2013]: time="2026-04-13T19:26:54.930276195Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1\"" Apr 13 19:26:54.934010 containerd[2013]: time="2026-04-13T19:26:54.932367663Z" level=info msg="StartContainer for \"6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1\"" Apr 13 19:26:54.977405 systemd[1]: Started cri-containerd-6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1.scope - libcontainer container 6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1. Apr 13 19:26:55.037618 containerd[2013]: time="2026-04-13T19:26:55.037531668Z" level=info msg="StartContainer for \"6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1\" returns successfully" Apr 13 19:26:55.063784 systemd[1]: cri-containerd-6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1.scope: Deactivated successfully. 
Apr 13 19:26:55.120569 containerd[2013]: time="2026-04-13T19:26:55.120268416Z" level=info msg="shim disconnected" id=6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1 namespace=k8s.io Apr 13 19:26:55.120569 containerd[2013]: time="2026-04-13T19:26:55.120344844Z" level=warning msg="cleaning up after shim disconnected" id=6e06710082d11e02c06c6e8847877156ee50ef4a509009d644b36f11d2a550a1 namespace=k8s.io Apr 13 19:26:55.120569 containerd[2013]: time="2026-04-13T19:26:55.120456060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:55.678446 sshd[5380]: Accepted publickey for core from 4.175.71.9 port 38214 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:55.681784 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:55.690848 systemd-logind[1989]: New session 23 of user core. Apr 13 19:26:55.699401 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 13 19:26:56.039560 containerd[2013]: time="2026-04-13T19:26:56.039139692Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 13 19:26:56.072095 containerd[2013]: time="2026-04-13T19:26:56.071970577Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28\"" Apr 13 19:26:56.073914 containerd[2013]: time="2026-04-13T19:26:56.073816141Z" level=info msg="StartContainer for \"14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28\"" Apr 13 19:26:56.152479 systemd[1]: Started cri-containerd-14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28.scope - libcontainer container 
14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28. Apr 13 19:26:56.244156 update_engine[1990]: I20260413 19:26:56.244037 1990 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:56.247532 update_engine[1990]: I20260413 19:26:56.245569 1990 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:56.247532 update_engine[1990]: I20260413 19:26:56.245926 1990 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 13 19:26:56.250100 update_engine[1990]: E20260413 19:26:56.248850 1990 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:56.250100 update_engine[1990]: I20260413 19:26:56.248975 1990 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:26:56.250100 update_engine[1990]: I20260413 19:26:56.249001 1990 omaha_request_action.cc:617] Omaha request response: Apr 13 19:26:56.251591 update_engine[1990]: E20260413 19:26:56.250680 1990 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250759 1990 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250778 1990 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250794 1990 update_attempter.cc:306] Processing Done. Apr 13 19:26:56.251591 update_engine[1990]: E20260413 19:26:56.250826 1990 update_attempter.cc:619] Update failed. 
Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250844 1990 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250859 1990 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 13 19:26:56.251591 update_engine[1990]: I20260413 19:26:56.250875 1990 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 13 19:26:56.256853 locksmithd[2046]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253039 1990 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253175 1990 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253196 1990 omaha_request_action.cc:272] Request: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253214 1990 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253549 1990 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 13 19:26:56.257553 update_engine[1990]: I20260413 19:26:56.253916 1990 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 13 19:26:56.259392 update_engine[1990]: E20260413 19:26:56.258764 1990 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258870 1990 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258894 1990 omaha_request_action.cc:617] Omaha request response: Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258914 1990 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258931 1990 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258950 1990 update_attempter.cc:306] Processing Done. Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258970 1990 update_attempter.cc:310] Error event sent. Apr 13 19:26:56.259392 update_engine[1990]: I20260413 19:26:56.258996 1990 update_check_scheduler.cc:74] Next update check in 45m35s Apr 13 19:26:56.269328 locksmithd[2046]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 13 19:26:56.292743 containerd[2013]: time="2026-04-13T19:26:56.292383290Z" level=info msg="StartContainer for \"14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28\" returns successfully" Apr 13 19:26:56.316604 systemd[1]: cri-containerd-14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28.scope: Deactivated successfully. Apr 13 19:26:56.347981 sshd[5380]: pam_unix(sshd:session): session closed for user core Apr 13 19:26:56.363916 systemd[1]: sshd@22-172.31.31.24:22-4.175.71.9:38214.service: Deactivated successfully. Apr 13 19:26:56.371759 systemd[1]: session-23.scope: Deactivated successfully. 
Apr 13 19:26:56.374075 systemd-logind[1989]: Session 23 logged out. Waiting for processes to exit. Apr 13 19:26:56.380221 systemd-logind[1989]: Removed session 23. Apr 13 19:26:56.392673 containerd[2013]: time="2026-04-13T19:26:56.392290778Z" level=info msg="shim disconnected" id=14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28 namespace=k8s.io Apr 13 19:26:56.392673 containerd[2013]: time="2026-04-13T19:26:56.392381462Z" level=warning msg="cleaning up after shim disconnected" id=14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28 namespace=k8s.io Apr 13 19:26:56.392673 containerd[2013]: time="2026-04-13T19:26:56.392403650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:56.537644 systemd[1]: Started sshd@23-172.31.31.24:22-4.175.71.9:56854.service - OpenSSH per-connection server daemon (4.175.71.9:56854). Apr 13 19:26:56.633121 systemd[1]: run-containerd-runc-k8s.io-14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28-runc.nc3gh2.mount: Deactivated successfully. Apr 13 19:26:56.633305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b17c9445c1619535037f9dda92186f96fcfeb777b3d194ff9e12c76b048f28-rootfs.mount: Deactivated successfully. 
Apr 13 19:26:57.048376 containerd[2013]: time="2026-04-13T19:26:57.048097418Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 13 19:26:57.086180 containerd[2013]: time="2026-04-13T19:26:57.086042474Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7\"" Apr 13 19:26:57.086903 containerd[2013]: time="2026-04-13T19:26:57.086855930Z" level=info msg="StartContainer for \"5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7\"" Apr 13 19:26:57.181937 systemd[1]: Started cri-containerd-5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7.scope - libcontainer container 5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7. Apr 13 19:26:57.255358 containerd[2013]: time="2026-04-13T19:26:57.255224319Z" level=info msg="StartContainer for \"5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7\" returns successfully" Apr 13 19:26:57.263831 systemd[1]: cri-containerd-5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7.scope: Deactivated successfully. 
Apr 13 19:26:57.315183 containerd[2013]: time="2026-04-13T19:26:57.314685543Z" level=info msg="shim disconnected" id=5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7 namespace=k8s.io Apr 13 19:26:57.315183 containerd[2013]: time="2026-04-13T19:26:57.314773083Z" level=warning msg="cleaning up after shim disconnected" id=5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7 namespace=k8s.io Apr 13 19:26:57.315183 containerd[2013]: time="2026-04-13T19:26:57.314794695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:57.579190 sshd[5551]: Accepted publickey for core from 4.175.71.9 port 56854 ssh2: RSA SHA256:XRjnN6xbGuPFo5x1ktmnQqeOBg+Z6w3BeaL3tCcJXCo Apr 13 19:26:57.582088 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 19:26:57.591321 systemd-logind[1989]: New session 24 of user core. Apr 13 19:26:57.601349 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 13 19:26:57.634311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5813fa9f61c2fd754eaf06d7a4d2e3e8043c0564ec4ff3bc14a18650bbabb6f7-rootfs.mount: Deactivated successfully. 
Apr 13 19:26:57.653638 kubelet[3513]: E0413 19:26:57.653546 3513 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 13 19:26:58.054975 containerd[2013]: time="2026-04-13T19:26:58.054767415Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 13 19:26:58.087014 containerd[2013]: time="2026-04-13T19:26:58.086684919Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62\"" Apr 13 19:26:58.089100 containerd[2013]: time="2026-04-13T19:26:58.088824699Z" level=info msg="StartContainer for \"49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62\"" Apr 13 19:26:58.179360 systemd[1]: Started cri-containerd-49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62.scope - libcontainer container 49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62. Apr 13 19:26:58.246270 systemd[1]: cri-containerd-49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62.scope: Deactivated successfully. Apr 13 19:26:58.251778 containerd[2013]: time="2026-04-13T19:26:58.251600475Z" level=info msg="StartContainer for \"49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62\" returns successfully" Apr 13 19:26:58.304827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62-rootfs.mount: Deactivated successfully. 
Apr 13 19:26:58.317127 containerd[2013]: time="2026-04-13T19:26:58.316927168Z" level=info msg="shim disconnected" id=49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62 namespace=k8s.io Apr 13 19:26:58.317127 containerd[2013]: time="2026-04-13T19:26:58.317009968Z" level=warning msg="cleaning up after shim disconnected" id=49e8076f37f7faab07219e4c96665c823af0ed4d5843a6d9f9c2595e4e91ce62 namespace=k8s.io Apr 13 19:26:58.317127 containerd[2013]: time="2026-04-13T19:26:58.317031976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:26:59.073557 containerd[2013]: time="2026-04-13T19:26:59.073432408Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 13 19:26:59.111189 containerd[2013]: time="2026-04-13T19:26:59.111090388Z" level=info msg="CreateContainer within sandbox \"f825cf55c63e60545562ae06d7a71a45f20ffe6de47e39053849c6e3131746e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12\"" Apr 13 19:26:59.112737 containerd[2013]: time="2026-04-13T19:26:59.112619440Z" level=info msg="StartContainer for \"2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12\"" Apr 13 19:26:59.172379 systemd[1]: Started cri-containerd-2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12.scope - libcontainer container 2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12. 
Apr 13 19:26:59.233569 containerd[2013]: time="2026-04-13T19:26:59.233494420Z" level=info msg="StartContainer for \"2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12\" returns successfully" Apr 13 19:27:00.099209 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Apr 13 19:27:00.113077 kubelet[3513]: I0413 19:27:00.112118 3513 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdcxc" podStartSLOduration=6.112092893 podStartE2EDuration="6.112092893s" podCreationTimestamp="2026-04-13 19:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 19:27:00.108454829 +0000 UTC m=+153.123540642" watchObservedRunningTime="2026-04-13 19:27:00.112092893 +0000 UTC m=+153.127178706" Apr 13 19:27:00.628631 kubelet[3513]: I0413 19:27:00.628560 3513 setters.go:618] "Node became not ready" node="ip-172-31-31-24" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-13T19:27:00Z","lastTransitionTime":"2026-04-13T19:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 13 19:27:04.499007 systemd-networkd[1928]: lxc_health: Link UP Apr 13 19:27:04.510706 (udev-worker)[6221]: Network interface NamePolicy= disabled on kernel command line. 
Apr 13 19:27:04.539235 systemd-networkd[1928]: lxc_health: Gained carrier Apr 13 19:27:06.338355 systemd-networkd[1928]: lxc_health: Gained IPv6LL Apr 13 19:27:08.734858 ntpd[1982]: Listen normally on 15 lxc_health [fe80::540f:14ff:fe3f:e5cb%14]:123 Apr 13 19:27:08.735427 ntpd[1982]: 13 Apr 19:27:08 ntpd[1982]: Listen normally on 15 lxc_health [fe80::540f:14ff:fe3f:e5cb%14]:123 Apr 13 19:27:09.777248 systemd[1]: run-containerd-runc-k8s.io-2d8e6e0a168e848e6a9e4743b938d9b7c07ee87f50000bcff8b8bf6f2152be12-runc.oOGiYY.mount: Deactivated successfully. Apr 13 19:27:10.036446 sshd[5551]: pam_unix(sshd:session): session closed for user core Apr 13 19:27:10.044971 systemd-logind[1989]: Session 24 logged out. Waiting for processes to exit. Apr 13 19:27:10.048704 systemd[1]: sshd@23-172.31.31.24:22-4.175.71.9:56854.service: Deactivated successfully. Apr 13 19:27:10.058242 systemd[1]: session-24.scope: Deactivated successfully. Apr 13 19:27:10.062587 systemd-logind[1989]: Removed session 24. Apr 13 19:27:25.137726 systemd[1]: cri-containerd-faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44.scope: Deactivated successfully. Apr 13 19:27:25.139617 systemd[1]: cri-containerd-faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44.scope: Consumed 4.991s CPU time, 28.2M memory peak, 0B memory swap peak. Apr 13 19:27:25.189756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44-rootfs.mount: Deactivated successfully. 
Apr 13 19:27:25.198586 containerd[2013]: time="2026-04-13T19:27:25.198073169Z" level=info msg="shim disconnected" id=faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44 namespace=k8s.io Apr 13 19:27:25.198586 containerd[2013]: time="2026-04-13T19:27:25.198179381Z" level=warning msg="cleaning up after shim disconnected" id=faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44 namespace=k8s.io Apr 13 19:27:25.198586 containerd[2013]: time="2026-04-13T19:27:25.198203249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 19:27:26.160109 kubelet[3513]: I0413 19:27:26.160014 3513 scope.go:117] "RemoveContainer" containerID="faa2233bb91da554a86ea10b81469bfc9e2b36bb4ecb437c9a67fe121c231a44" Apr 13 19:27:26.165261 containerd[2013]: time="2026-04-13T19:27:26.165189786Z" level=info msg="CreateContainer within sandbox \"e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 13 19:27:26.201034 containerd[2013]: time="2026-04-13T19:27:26.200822694Z" level=info msg="CreateContainer within sandbox \"e0adcaeaaac751a9c5bb405f6c98556d039005e36dd0be1e12d132b48ad667a1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412\"" Apr 13 19:27:26.201807 containerd[2013]: time="2026-04-13T19:27:26.201732858Z" level=info msg="StartContainer for \"c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412\"" Apr 13 19:27:26.270980 systemd[1]: run-containerd-runc-k8s.io-c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412-runc.0j60ni.mount: Deactivated successfully. Apr 13 19:27:26.281455 systemd[1]: Started cri-containerd-c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412.scope - libcontainer container c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412. 
Apr 13 19:27:26.364351 containerd[2013]: time="2026-04-13T19:27:26.364283083Z" level=info msg="StartContainer for \"c4a2baad4bcaa09df16c6c8f96070f92b3644ddcfd77082a3bc50051c5f87412\" returns successfully" Apr 13 19:27:27.330811 containerd[2013]: time="2026-04-13T19:27:27.329895440Z" level=info msg="StopPodSandbox for \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\"" Apr 13 19:27:27.330811 containerd[2013]: time="2026-04-13T19:27:27.330139688Z" level=info msg="TearDown network for sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" successfully" Apr 13 19:27:27.330811 containerd[2013]: time="2026-04-13T19:27:27.330176660Z" level=info msg="StopPodSandbox for \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" returns successfully" Apr 13 19:27:27.332309 containerd[2013]: time="2026-04-13T19:27:27.331883996Z" level=info msg="RemovePodSandbox for \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\"" Apr 13 19:27:27.332309 containerd[2013]: time="2026-04-13T19:27:27.331955708Z" level=info msg="Forcibly stopping sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\"" Apr 13 19:27:27.332825 containerd[2013]: time="2026-04-13T19:27:27.332437520Z" level=info msg="TearDown network for sandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" successfully" Apr 13 19:27:27.340954 containerd[2013]: time="2026-04-13T19:27:27.340282196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 19:27:27.340954 containerd[2013]: time="2026-04-13T19:27:27.340397480Z" level=info msg="RemovePodSandbox \"515fd745cc3a1da203860b9f328137c3928444e6dccadaa052fd6b2b77f58c81\" returns successfully"
Apr 13 19:27:27.341797 containerd[2013]: time="2026-04-13T19:27:27.341518784Z" level=info msg="StopPodSandbox for \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\""
Apr 13 19:27:27.341797 containerd[2013]: time="2026-04-13T19:27:27.341678624Z" level=info msg="TearDown network for sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" successfully"
Apr 13 19:27:27.341797 containerd[2013]: time="2026-04-13T19:27:27.341707880Z" level=info msg="StopPodSandbox for \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" returns successfully"
Apr 13 19:27:27.343969 containerd[2013]: time="2026-04-13T19:27:27.342924380Z" level=info msg="RemovePodSandbox for \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\""
Apr 13 19:27:27.343969 containerd[2013]: time="2026-04-13T19:27:27.342985436Z" level=info msg="Forcibly stopping sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\""
Apr 13 19:27:27.343969 containerd[2013]: time="2026-04-13T19:27:27.343121000Z" level=info msg="TearDown network for sandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" successfully"
Apr 13 19:27:27.350918 containerd[2013]: time="2026-04-13T19:27:27.350799488Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 19:27:27.351237 containerd[2013]: time="2026-04-13T19:27:27.351194420Z" level=info msg="RemovePodSandbox \"045484859c1baf75170e4fe0f61a2c1eaacd0fbfc517db5f16a90377fdb2fc13\" returns successfully"
Apr 13 19:27:30.837912 systemd[1]: cri-containerd-b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c.scope: Deactivated successfully.
Apr 13 19:27:30.838921 systemd[1]: cri-containerd-b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c.scope: Consumed 8.890s CPU time, 15.6M memory peak, 0B memory swap peak.
Apr 13 19:27:30.891003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c-rootfs.mount: Deactivated successfully.
Apr 13 19:27:30.905320 containerd[2013]: time="2026-04-13T19:27:30.905098742Z" level=info msg="shim disconnected" id=b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c namespace=k8s.io
Apr 13 19:27:30.905320 containerd[2013]: time="2026-04-13T19:27:30.905172170Z" level=warning msg="cleaning up after shim disconnected" id=b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c namespace=k8s.io
Apr 13 19:27:30.905320 containerd[2013]: time="2026-04-13T19:27:30.905191898Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 19:27:31.114405 kubelet[3513]: E0413 19:27:31.112860 3513 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-24?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 19:27:31.182947 kubelet[3513]: I0413 19:27:31.180885 3513 scope.go:117] "RemoveContainer" containerID="b9e66895be468309d2fcb2bddd752bd6aa8c39b0d94c4cb1e4ba40096f1fde7c"
Apr 13 19:27:31.184545 containerd[2013]: time="2026-04-13T19:27:31.184493459Z" level=info msg="CreateContainer within sandbox \"62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 19:27:31.215212 containerd[2013]: time="2026-04-13T19:27:31.215149115Z" level=info msg="CreateContainer within sandbox \"62b94d07ab2f7a5329fca01b14d6ad9ce699d7bb35b0ed68790de5d2ac4756d0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8cb79cb275d85d3f9f07b4dc95a1300d417ebc3c6ba24759fa326113e4f0d2a0\""
Apr 13 19:27:31.216457 containerd[2013]: time="2026-04-13T19:27:31.216405203Z" level=info msg="StartContainer for \"8cb79cb275d85d3f9f07b4dc95a1300d417ebc3c6ba24759fa326113e4f0d2a0\""
Apr 13 19:27:31.297896 systemd[1]: Started cri-containerd-8cb79cb275d85d3f9f07b4dc95a1300d417ebc3c6ba24759fa326113e4f0d2a0.scope - libcontainer container 8cb79cb275d85d3f9f07b4dc95a1300d417ebc3c6ba24759fa326113e4f0d2a0.
Apr 13 19:27:31.369879 containerd[2013]: time="2026-04-13T19:27:31.369328932Z" level=info msg="StartContainer for \"8cb79cb275d85d3f9f07b4dc95a1300d417ebc3c6ba24759fa326113e4f0d2a0\" returns successfully"