Jul 7 05:52:49.222829 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 7 05:52:49.222874 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 05:52:49.222899 kernel: KASLR disabled due to lack of seed
Jul 7 05:52:49.222916 kernel: efi: EFI v2.7 by EDK II
Jul 7 05:52:49.222932 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Jul 7 05:52:49.222947 kernel: ACPI: Early table checksum verification disabled
Jul 7 05:52:49.222965 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 7 05:52:49.222980 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 7 05:52:49.222996 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 7 05:52:49.223012 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 7 05:52:49.223032 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 7 05:52:49.223048 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 7 05:52:49.223063 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 7 05:52:49.223079 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 7 05:52:49.223098 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 7 05:52:49.223118 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 7 05:52:49.223136 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 7 05:52:49.223152 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 7 05:52:49.223168 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 7 05:52:49.223185 kernel: printk: bootconsole [uart0] enabled
Jul 7 05:52:49.223201 kernel: NUMA: Failed to initialise from firmware
Jul 7 05:52:49.223218 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 7 05:52:49.223234 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 7 05:52:49.223251 kernel: Zone ranges:
Jul 7 05:52:49.223267 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 7 05:52:49.223284 kernel: DMA32 empty
Jul 7 05:52:49.223304 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 7 05:52:49.223321 kernel: Movable zone start for each node
Jul 7 05:52:49.223337 kernel: Early memory node ranges
Jul 7 05:52:49.223353 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 7 05:52:49.223370 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 7 05:52:49.223386 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 7 05:52:49.223402 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 7 05:52:49.223418 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 7 05:52:49.223434 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 7 05:52:49.223450 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 7 05:52:49.223467 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 7 05:52:49.223483 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 7 05:52:49.223504 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 7 05:52:49.223521 kernel: psci: probing for conduit method from ACPI.
Jul 7 05:52:49.223545 kernel: psci: PSCIv1.0 detected in firmware.
Jul 7 05:52:49.223562 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 05:52:49.223580 kernel: psci: Trusted OS migration not required
Jul 7 05:52:49.223602 kernel: psci: SMC Calling Convention v1.1
Jul 7 05:52:49.223620 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 7 05:52:49.223637 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 05:52:49.223655 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 05:52:49.223672 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 7 05:52:49.225786 kernel: Detected PIPT I-cache on CPU0
Jul 7 05:52:49.225809 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 05:52:49.225827 kernel: CPU features: detected: Spectre-v2
Jul 7 05:52:49.225845 kernel: CPU features: detected: Spectre-v3a
Jul 7 05:52:49.225863 kernel: CPU features: detected: Spectre-BHB
Jul 7 05:52:49.225880 kernel: CPU features: detected: ARM erratum 1742098
Jul 7 05:52:49.225910 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 7 05:52:49.225928 kernel: alternatives: applying boot alternatives
Jul 7 05:52:49.225948 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:49.225968 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 05:52:49.225988 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 05:52:49.226007 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 05:52:49.226025 kernel: Fallback order for Node 0: 0
Jul 7 05:52:49.226043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 7 05:52:49.226061 kernel: Policy zone: Normal
Jul 7 05:52:49.226080 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 05:52:49.226098 kernel: software IO TLB: area num 2.
Jul 7 05:52:49.226123 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 7 05:52:49.226142 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Jul 7 05:52:49.226161 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 05:52:49.226180 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 05:52:49.226199 kernel: rcu: RCU event tracing is enabled.
Jul 7 05:52:49.226217 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 05:52:49.226236 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 05:52:49.226256 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 05:52:49.226276 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 05:52:49.226295 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 05:52:49.226313 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 05:52:49.226336 kernel: GICv3: 96 SPIs implemented
Jul 7 05:52:49.226355 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 05:52:49.226372 kernel: Root IRQ handler: gic_handle_irq
Jul 7 05:52:49.226389 kernel: GICv3: GICv3 features: 16 PPIs
Jul 7 05:52:49.226407 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 7 05:52:49.226424 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 7 05:52:49.226442 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 05:52:49.226460 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 05:52:49.226477 kernel: GICv3: using LPI property table @0x00000004000d0000
Jul 7 05:52:49.226495 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 7 05:52:49.226513 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jul 7 05:52:49.226530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 05:52:49.226552 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 7 05:52:49.226570 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 7 05:52:49.226588 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 7 05:52:49.226606 kernel: Console: colour dummy device 80x25
Jul 7 05:52:49.226624 kernel: printk: console [tty1] enabled
Jul 7 05:52:49.226642 kernel: ACPI: Core revision 20230628
Jul 7 05:52:49.226660 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 7 05:52:49.226705 kernel: pid_max: default: 32768 minimum: 301
Jul 7 05:52:49.226731 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 05:52:49.226755 kernel: landlock: Up and running.
Jul 7 05:52:49.226774 kernel: SELinux: Initializing.
Jul 7 05:52:49.226792 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:49.226811 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:52:49.226829 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:49.226847 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 05:52:49.226865 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 05:52:49.226883 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 05:52:49.226901 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 7 05:52:49.226924 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 7 05:52:49.226942 kernel: Remapping and enabling EFI services.
Jul 7 05:52:49.226959 kernel: smp: Bringing up secondary CPUs ...
Jul 7 05:52:49.226977 kernel: Detected PIPT I-cache on CPU1
Jul 7 05:52:49.226995 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 7 05:52:49.227013 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jul 7 05:52:49.227031 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 7 05:52:49.227048 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 05:52:49.227066 kernel: SMP: Total of 2 processors activated.
Jul 7 05:52:49.227084 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 05:52:49.227116 kernel: CPU features: detected: 32-bit EL1 Support
Jul 7 05:52:49.227134 kernel: CPU features: detected: CRC32 instructions
Jul 7 05:52:49.227163 kernel: CPU: All CPU(s) started at EL1
Jul 7 05:52:49.227187 kernel: alternatives: applying system-wide alternatives
Jul 7 05:52:49.227205 kernel: devtmpfs: initialized
Jul 7 05:52:49.227225 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 05:52:49.227244 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 05:52:49.227262 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 05:52:49.227282 kernel: SMBIOS 3.0.0 present.
Jul 7 05:52:49.227304 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 7 05:52:49.227323 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 05:52:49.227342 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 05:52:49.227361 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 05:52:49.227379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 05:52:49.227398 kernel: audit: initializing netlink subsys (disabled)
Jul 7 05:52:49.227417 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jul 7 05:52:49.227440 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 05:52:49.227459 kernel: cpuidle: using governor menu
Jul 7 05:52:49.227478 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 05:52:49.227497 kernel: ASID allocator initialised with 65536 entries
Jul 7 05:52:49.227515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 05:52:49.227534 kernel: Serial: AMBA PL011 UART driver
Jul 7 05:52:49.227552 kernel: Modules: 17488 pages in range for non-PLT usage
Jul 7 05:52:49.227571 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 05:52:49.227590 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 05:52:49.227613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 05:52:49.227632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 05:52:49.227651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 05:52:49.227669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 05:52:49.229781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 05:52:49.229810 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 05:52:49.229829 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 05:52:49.229848 kernel: ACPI: Added _OSI(Module Device)
Jul 7 05:52:49.229867 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 05:52:49.229895 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 05:52:49.229914 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 05:52:49.229933 kernel: ACPI: Interpreter enabled
Jul 7 05:52:49.229951 kernel: ACPI: Using GIC for interrupt routing
Jul 7 05:52:49.229970 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 05:52:49.229989 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 7 05:52:49.230288 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 05:52:49.230498 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 05:52:49.230727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 05:52:49.230938 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 7 05:52:49.231139 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 7 05:52:49.231164 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 7 05:52:49.231184 kernel: acpiphp: Slot [1] registered
Jul 7 05:52:49.231203 kernel: acpiphp: Slot [2] registered
Jul 7 05:52:49.231222 kernel: acpiphp: Slot [3] registered
Jul 7 05:52:49.231241 kernel: acpiphp: Slot [4] registered
Jul 7 05:52:49.231266 kernel: acpiphp: Slot [5] registered
Jul 7 05:52:49.231286 kernel: acpiphp: Slot [6] registered
Jul 7 05:52:49.231305 kernel: acpiphp: Slot [7] registered
Jul 7 05:52:49.231324 kernel: acpiphp: Slot [8] registered
Jul 7 05:52:49.231343 kernel: acpiphp: Slot [9] registered
Jul 7 05:52:49.231362 kernel: acpiphp: Slot [10] registered
Jul 7 05:52:49.231380 kernel: acpiphp: Slot [11] registered
Jul 7 05:52:49.231400 kernel: acpiphp: Slot [12] registered
Jul 7 05:52:49.231418 kernel: acpiphp: Slot [13] registered
Jul 7 05:52:49.231437 kernel: acpiphp: Slot [14] registered
Jul 7 05:52:49.231460 kernel: acpiphp: Slot [15] registered
Jul 7 05:52:49.231479 kernel: acpiphp: Slot [16] registered
Jul 7 05:52:49.231497 kernel: acpiphp: Slot [17] registered
Jul 7 05:52:49.231515 kernel: acpiphp: Slot [18] registered
Jul 7 05:52:49.231534 kernel: acpiphp: Slot [19] registered
Jul 7 05:52:49.231552 kernel: acpiphp: Slot [20] registered
Jul 7 05:52:49.231571 kernel: acpiphp: Slot [21] registered
Jul 7 05:52:49.231589 kernel: acpiphp: Slot [22] registered
Jul 7 05:52:49.231608 kernel: acpiphp: Slot [23] registered
Jul 7 05:52:49.231630 kernel: acpiphp: Slot [24] registered
Jul 7 05:52:49.231649 kernel: acpiphp: Slot [25] registered
Jul 7 05:52:49.231667 kernel: acpiphp: Slot [26] registered
Jul 7 05:52:49.236446 kernel: acpiphp: Slot [27] registered
Jul 7 05:52:49.236477 kernel: acpiphp: Slot [28] registered
Jul 7 05:52:49.236497 kernel: acpiphp: Slot [29] registered
Jul 7 05:52:49.236516 kernel: acpiphp: Slot [30] registered
Jul 7 05:52:49.236535 kernel: acpiphp: Slot [31] registered
Jul 7 05:52:49.236554 kernel: PCI host bridge to bus 0000:00
Jul 7 05:52:49.236997 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 7 05:52:49.237199 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 05:52:49.237381 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 7 05:52:49.237560 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 7 05:52:49.237838 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 7 05:52:49.238069 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 7 05:52:49.238276 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 7 05:52:49.238507 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 7 05:52:49.240755 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 7 05:52:49.240982 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 7 05:52:49.241216 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 7 05:52:49.241426 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 7 05:52:49.241636 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 7 05:52:49.241879 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 7 05:52:49.242090 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 7 05:52:49.242299 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 7 05:52:49.242512 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 7 05:52:49.243882 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 7 05:52:49.244165 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 7 05:52:49.244385 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 7 05:52:49.244570 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 7 05:52:49.245859 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 05:52:49.246066 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 7 05:52:49.246092 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 05:52:49.246113 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 05:52:49.246133 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 05:52:49.246152 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 05:52:49.246172 kernel: iommu: Default domain type: Translated
Jul 7 05:52:49.246191 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 05:52:49.246219 kernel: efivars: Registered efivars operations
Jul 7 05:52:49.246238 kernel: vgaarb: loaded
Jul 7 05:52:49.246257 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 05:52:49.246275 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 05:52:49.246294 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 05:52:49.246313 kernel: pnp: PnP ACPI init
Jul 7 05:52:49.246525 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 7 05:52:49.246555 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 05:52:49.246580 kernel: NET: Registered PF_INET protocol family
Jul 7 05:52:49.246599 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 05:52:49.246620 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 05:52:49.246639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 05:52:49.246658 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 05:52:49.247574 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 05:52:49.247605 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 05:52:49.247624 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:49.247643 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:52:49.247670 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 05:52:49.247732 kernel: PCI: CLS 0 bytes, default 64
Jul 7 05:52:49.247752 kernel: kvm [1]: HYP mode not available
Jul 7 05:52:49.247771 kernel: Initialise system trusted keyrings
Jul 7 05:52:49.247790 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 05:52:49.247808 kernel: Key type asymmetric registered
Jul 7 05:52:49.247826 kernel: Asymmetric key parser 'x509' registered
Jul 7 05:52:49.247845 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 05:52:49.247863 kernel: io scheduler mq-deadline registered
Jul 7 05:52:49.247888 kernel: io scheduler kyber registered
Jul 7 05:52:49.247907 kernel: io scheduler bfq registered
Jul 7 05:52:49.248172 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 7 05:52:49.248206 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 05:52:49.248247 kernel: ACPI: button: Power Button [PWRB]
Jul 7 05:52:49.248294 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 7 05:52:49.248329 kernel: ACPI: button: Sleep Button [SLPB]
Jul 7 05:52:49.248348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 05:52:49.248375 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 7 05:52:49.248593 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 7 05:52:49.248620 kernel: printk: console [ttyS0] disabled
Jul 7 05:52:49.248639 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 7 05:52:49.248658 kernel: printk: console [ttyS0] enabled
Jul 7 05:52:49.248704 kernel: printk: bootconsole [uart0] disabled
Jul 7 05:52:49.248729 kernel: thunder_xcv, ver 1.0
Jul 7 05:52:49.248748 kernel: thunder_bgx, ver 1.0
Jul 7 05:52:49.248766 kernel: nicpf, ver 1.0
Jul 7 05:52:49.248791 kernel: nicvf, ver 1.0
Jul 7 05:52:49.249001 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 05:52:49.249190 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:52:48 UTC (1751867568)
Jul 7 05:52:49.249216 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 05:52:49.249235 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 7 05:52:49.249254 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 05:52:49.249272 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 05:52:49.249291 kernel: NET: Registered PF_INET6 protocol family
Jul 7 05:52:49.249316 kernel: Segment Routing with IPv6
Jul 7 05:52:49.249335 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 05:52:49.249354 kernel: NET: Registered PF_PACKET protocol family
Jul 7 05:52:49.249374 kernel: Key type dns_resolver registered
Jul 7 05:52:49.249392 kernel: registered taskstats version 1
Jul 7 05:52:49.249411 kernel: Loading compiled-in X.509 certificates
Jul 7 05:52:49.249430 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 05:52:49.249449 kernel: Key type .fscrypt registered
Jul 7 05:52:49.249467 kernel: Key type fscrypt-provisioning registered
Jul 7 05:52:49.249486 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 05:52:49.249510 kernel: ima: Allocated hash algorithm: sha1
Jul 7 05:52:49.249528 kernel: ima: No architecture policies found
Jul 7 05:52:49.249547 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 05:52:49.249566 kernel: clk: Disabling unused clocks
Jul 7 05:52:49.249585 kernel: Freeing unused kernel memory: 39424K
Jul 7 05:52:49.249603 kernel: Run /init as init process
Jul 7 05:52:49.249622 kernel: with arguments:
Jul 7 05:52:49.249640 kernel: /init
Jul 7 05:52:49.249659 kernel: with environment:
Jul 7 05:52:49.250885 kernel: HOME=/
Jul 7 05:52:49.250917 kernel: TERM=linux
Jul 7 05:52:49.250935 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 05:52:49.250958 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:52:49.250982 systemd[1]: Detected virtualization amazon.
Jul 7 05:52:49.251003 systemd[1]: Detected architecture arm64.
Jul 7 05:52:49.251023 systemd[1]: Running in initrd.
Jul 7 05:52:49.251051 systemd[1]: No hostname configured, using default hostname.
Jul 7 05:52:49.251071 systemd[1]: Hostname set to .
Jul 7 05:52:49.251092 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 05:52:49.251112 systemd[1]: Queued start job for default target initrd.target.
Jul 7 05:52:49.251133 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:49.251153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:49.251175 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 05:52:49.251196 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:52:49.251221 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 05:52:49.251243 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 05:52:49.251266 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 05:52:49.251287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 05:52:49.251308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:49.251328 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:49.251349 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:52:49.251374 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:52:49.251396 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:52:49.251416 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:52:49.251436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:52:49.251457 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:52:49.251478 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:52:49.251499 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:52:49.251519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:49.251540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:49.251566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:49.251586 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:52:49.251606 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 05:52:49.251628 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:52:49.251648 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 05:52:49.251669 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 05:52:49.251725 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:52:49.251751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:52:49.251782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:49.251803 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 05:52:49.251824 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:49.251845 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 05:52:49.251913 systemd-journald[250]: Collecting audit messages is disabled.
Jul 7 05:52:49.251979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:52:49.252004 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 05:52:49.252025 kernel: Bridge firewalling registered
Jul 7 05:52:49.252046 systemd-journald[250]: Journal started
Jul 7 05:52:49.252089 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2456ad375d3ca03bc0ddcb0a3c1564) is 8.0M, max 75.3M, 67.3M free.
Jul 7 05:52:49.253039 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:52:49.209986 systemd-modules-load[252]: Inserted module 'overlay'
Jul 7 05:52:49.251794 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jul 7 05:52:49.260335 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:49.263434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:49.270922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:52:49.286209 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:49.295944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:52:49.297962 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:52:49.315783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:52:49.356149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:49.363807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:49.367926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:49.375733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:49.390981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:52:49.406952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:49.424356 dracut-cmdline[287]: dracut-dracut-053
Jul 7 05:52:49.432136 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:52:49.507728 systemd-resolved[288]: Positive Trust Anchors:
Jul 7 05:52:49.507760 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:52:49.507826 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:52:49.615723 kernel: SCSI subsystem initialized
Jul 7 05:52:49.623721 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:52:49.637710 kernel: iscsi: registered transport (tcp)
Jul 7 05:52:49.659046 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:52:49.659117 kernel: QLogic iSCSI HBA Driver
Jul 7 05:52:49.744737 kernel: random: crng init done
Jul 7 05:52:49.743243 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 7 05:52:49.747696 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:49.752414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:49.776502 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:49.787989 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:52:49.832801 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:52:49.832877 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:52:49.834649 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:52:49.901739 kernel: raid6: neonx8 gen() 6719 MB/s
Jul 7 05:52:49.918713 kernel: raid6: neonx4 gen() 6544 MB/s
Jul 7 05:52:49.935725 kernel: raid6: neonx2 gen() 5428 MB/s
Jul 7 05:52:49.952714 kernel: raid6: neonx1 gen() 3950 MB/s
Jul 7 05:52:49.969714 kernel: raid6: int64x8 gen() 3798 MB/s
Jul 7 05:52:49.986714 kernel: raid6: int64x4 gen() 3725 MB/s
Jul 7 05:52:50.003726 kernel: raid6: int64x2 gen() 3571 MB/s
Jul 7 05:52:50.021682 kernel: raid6: int64x1 gen() 2775 MB/s
Jul 7 05:52:50.021718 kernel: raid6: using algorithm neonx8 gen() 6719 MB/s
Jul 7 05:52:50.039654 kernel: raid6: .... xor() 4928 MB/s, rmw enabled
Jul 7 05:52:50.039737 kernel: raid6: using neon recovery algorithm
Jul 7 05:52:50.047719 kernel: xor: measuring software checksum speed
Jul 7 05:52:50.047782 kernel: 8regs : 10259 MB/sec
Jul 7 05:52:50.051087 kernel: 32regs : 11008 MB/sec
Jul 7 05:52:50.051122 kernel: arm64_neon : 9571 MB/sec
Jul 7 05:52:50.051147 kernel: xor: using function: 32regs (11008 MB/sec)
Jul 7 05:52:50.136741 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:52:50.155536 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:50.165008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:50.208779 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jul 7 05:52:50.217306 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:50.236260 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:52:50.278734 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jul 7 05:52:50.334425 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:50.345221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:52:50.460773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:50.477979 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:52:50.516422 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:50.523872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:50.529349 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:50.538669 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:52:50.554112 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:52:50.600396 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:50.677961 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 05:52:50.678040 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 7 05:52:50.682828 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 05:52:50.683156 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 05:52:50.690084 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:50.692390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:50.702740 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:50.711073 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:70:36:67:9a:1f
Jul 7 05:52:50.705356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:50.705653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:50.721155 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 7 05:52:50.721194 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 05:52:50.708447 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:50.709324 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:52:50.733716 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 05:52:50.739125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:50.751538 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 05:52:50.751584 kernel: GPT:9289727 != 16777215
Jul 7 05:52:50.751610 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 05:52:50.751635 kernel: GPT:9289727 != 16777215
Jul 7 05:52:50.751659 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 05:52:50.751722 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:50.780941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:50.794762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:50.845801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:50.869771 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (529)
Jul 7 05:52:50.883755 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Jul 7 05:52:50.980557 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 05:52:50.998874 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 05:52:51.016850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 05:52:51.041608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 05:52:51.047199 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 05:52:51.073030 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:52:51.088015 disk-uuid[661]: Primary Header is updated.
Jul 7 05:52:51.088015 disk-uuid[661]: Secondary Entries is updated.
Jul 7 05:52:51.088015 disk-uuid[661]: Secondary Header is updated.
Jul 7 05:52:51.101713 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:51.110779 kernel: GPT:disk_guids don't match.
Jul 7 05:52:51.110838 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 05:52:51.110864 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:51.119725 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:52.123729 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:52.126542 disk-uuid[662]: The operation has completed successfully.
Jul 7 05:52:52.300444 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:52:52.302577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:52:52.362980 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:52:52.370648 sh[1005]: Success
Jul 7 05:52:52.396117 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:52:52.505612 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:52:52.514901 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:52:52.519763 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:52:52.562818 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:52:52.562892 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:52.564791 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:52:52.566184 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:52:52.567382 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:52:52.696706 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 05:52:52.717886 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:52:52.722457 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:52:52.740987 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:52:52.747984 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:52:52.783023 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:52.783097 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:52.784859 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:52.801751 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:52.819961 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:52:52.822739 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:52.833533 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:52:52.846997 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:52:52.960449 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:52.977172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:52:53.046379 systemd-networkd[1197]: lo: Link UP
Jul 7 05:52:53.046400 systemd-networkd[1197]: lo: Gained carrier
Jul 7 05:52:53.051738 systemd-networkd[1197]: Enumeration completed
Jul 7 05:52:53.052557 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:52:53.052781 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:53.052787 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:52:53.060638 systemd-networkd[1197]: eth0: Link UP
Jul 7 05:52:53.060645 systemd-networkd[1197]: eth0: Gained carrier
Jul 7 05:52:53.060663 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:53.075266 systemd[1]: Reached target network.target - Network.
Jul 7 05:52:53.098768 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.23.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 05:52:53.254472 ignition[1122]: Ignition 2.19.0
Jul 7 05:52:53.254494 ignition[1122]: Stage: fetch-offline
Jul 7 05:52:53.258653 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:53.258711 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:53.263397 ignition[1122]: Ignition finished successfully
Jul 7 05:52:53.267290 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:53.279987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:52:53.306390 ignition[1206]: Ignition 2.19.0
Jul 7 05:52:53.306412 ignition[1206]: Stage: fetch
Jul 7 05:52:53.307631 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:53.307657 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:53.307838 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:53.336528 ignition[1206]: PUT result: OK
Jul 7 05:52:53.340524 ignition[1206]: parsed url from cmdline: ""
Jul 7 05:52:53.340540 ignition[1206]: no config URL provided
Jul 7 05:52:53.340555 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:53.340581 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:53.340615 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:53.343898 ignition[1206]: PUT result: OK
Jul 7 05:52:53.343989 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 05:52:53.349875 ignition[1206]: GET result: OK
Jul 7 05:52:53.350015 ignition[1206]: parsing config with SHA512: 77e6823fe226ad3bf109cfa2f0bbe6fd2f1f66e20559e19f522750f7b0ba2cd0df5eaafb64cb742b524c0d6481fcc70c5257dda047a8adf2d933150865d47bdc
Jul 7 05:52:53.360985 unknown[1206]: fetched base config from "system"
Jul 7 05:52:53.361028 unknown[1206]: fetched base config from "system"
Jul 7 05:52:53.361044 unknown[1206]: fetched user config from "aws"
Jul 7 05:52:53.366954 ignition[1206]: fetch: fetch complete
Jul 7 05:52:53.366968 ignition[1206]: fetch: fetch passed
Jul 7 05:52:53.367090 ignition[1206]: Ignition finished successfully
Jul 7 05:52:53.374773 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:53.387099 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:52:53.414322 ignition[1213]: Ignition 2.19.0
Jul 7 05:52:53.414851 ignition[1213]: Stage: kargs
Jul 7 05:52:53.415516 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:53.415541 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:53.415750 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:53.428151 ignition[1213]: PUT result: OK
Jul 7 05:52:53.433024 ignition[1213]: kargs: kargs passed
Jul 7 05:52:53.433125 ignition[1213]: Ignition finished successfully
Jul 7 05:52:53.438740 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:53.452141 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:52:53.477894 ignition[1219]: Ignition 2.19.0
Jul 7 05:52:53.478637 ignition[1219]: Stage: disks
Jul 7 05:52:53.479362 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:53.479388 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:53.479881 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:53.492642 ignition[1219]: PUT result: OK
Jul 7 05:52:53.497353 ignition[1219]: disks: disks passed
Jul 7 05:52:53.497655 ignition[1219]: Ignition finished successfully
Jul 7 05:52:53.506306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:52:53.513015 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:53.515562 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:52:53.520383 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:52:53.522610 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:52:53.524930 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:52:53.541380 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:52:53.580093 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 05:52:53.588253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:52:53.605929 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:52:53.695733 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:52:53.697242 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:52:53.702198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:52:53.721982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:53.725927 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:52:53.733447 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 05:52:53.733548 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:52:53.733641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:53.760808 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1246)
Jul 7 05:52:53.765071 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:53.765130 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:53.765158 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:53.765045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:52:53.781953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:52:53.789744 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:53.792226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:54.219918 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:52:54.231063 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:52:54.240637 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:52:54.250056 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:52:54.555807 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:54.569858 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:52:54.581299 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:54.598153 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:54.592269 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:52:54.636834 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:54.641985 ignition[1359]: INFO : Ignition 2.19.0
Jul 7 05:52:54.641985 ignition[1359]: INFO : Stage: mount
Jul 7 05:52:54.645858 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:54.645858 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:54.645858 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:54.645858 ignition[1359]: INFO : PUT result: OK
Jul 7 05:52:54.659832 ignition[1359]: INFO : mount: mount passed
Jul 7 05:52:54.662354 ignition[1359]: INFO : Ignition finished successfully
Jul 7 05:52:54.665002 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:52:54.675929 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:52:54.708373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:54.733733 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1370)
Jul 7 05:52:54.733808 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:54.737368 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:54.737404 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:54.743715 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:54.748120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:54.787455 ignition[1387]: INFO : Ignition 2.19.0
Jul 7 05:52:54.787455 ignition[1387]: INFO : Stage: files
Jul 7 05:52:54.796215 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:54.796215 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:54.796215 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:54.796215 ignition[1387]: INFO : PUT result: OK
Jul 7 05:52:54.788977 systemd-networkd[1197]: eth0: Gained IPv6LL
Jul 7 05:52:54.809399 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:52:54.809399 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:52:54.809399 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:52:54.828115 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:52:54.832533 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:52:54.837775 unknown[1387]: wrote ssh authorized keys file for user: core
Jul 7 05:52:54.840418 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:52:54.850662 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:54.855898 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:55.891846 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 05:52:56.496634 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:56.496634 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:52:56.505583 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:56.837973 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 05:52:56.963381 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:56.968301 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:52:57.638519 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 05:52:57.959207 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:57.959207 ignition[1387]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:52:57.967847 ignition[1387]: INFO : files: files passed
Jul 7 05:52:57.967847 ignition[1387]: INFO : Ignition finished successfully
Jul 7 05:52:57.972541 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:52:58.009055 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:52:58.013978 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:52:58.030518 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:52:58.032931 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:52:58.050427 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:58.050427 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:58.058068 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:52:58.064822 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:58.070582 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:52:58.082107 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:52:58.131352 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:52:58.133881 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:52:58.136486 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:52:58.136660 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:52:58.137446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:52:58.152158 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:52:58.182206 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:58.193271 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:52:58.220362 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:52:58.228291 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:58.233786 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:52:58.234800 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:52:58.235073 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:52:58.236358 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:52:58.237532 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:52:58.238650 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:52:58.239417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:58.249219 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:58.250121 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:52:58.251506 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:58.252639 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:52:58.253339 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:52:58.254466 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:52:58.255536 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:52:58.255848 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:58.265074 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:52:58.268607 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:52:58.273577 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:52:58.282983 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:52:58.286941 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:52:58.287228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:58.311647 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:52:58.312060 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:52:58.317650 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:52:58.318358 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:52:58.352606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:52:58.358224 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 05:52:58.358532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:52:58.368438 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:58.372706 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:52:58.373536 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:58.384562 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:52:58.385457 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:58.402742 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:52:58.405806 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:52:58.424756 ignition[1439]: INFO : Ignition 2.19.0
Jul 7 05:52:58.424756 ignition[1439]: INFO : Stage: umount
Jul 7 05:52:58.424756 ignition[1439]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:58.424756 ignition[1439]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:58.435080 ignition[1439]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:58.435080 ignition[1439]: INFO : PUT result: OK
Jul 7 05:52:58.447787 ignition[1439]: INFO : umount: umount passed
Jul 7 05:52:58.447787 ignition[1439]: INFO : Ignition finished successfully
Jul 7 05:52:58.438142 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:52:58.449243 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:52:58.449795 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:52:58.456489 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:52:58.456917 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:52:58.461697 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:52:58.461820 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:58.464130 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:52:58.464226 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:58.466466 systemd[1]: Stopped target network.target - Network.
Jul 7 05:52:58.468782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:52:58.468881 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:58.471617 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:52:58.479275 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:52:58.481483 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:52:58.482214 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:52:58.483119 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:52:58.488031 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:52:58.489237 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:52:58.510167 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:52:58.510262 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:52:58.512776 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:52:58.512894 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:52:58.515249 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:52:58.515360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:52:58.518211 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:52:58.521711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 05:52:58.527276 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 05:52:58.527495 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:58.533562 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 05:52:58.533773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:58.539094 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Jul 7 05:52:58.562897 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 05:52:58.563216 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 05:52:58.575305 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 05:52:58.575951 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 05:52:58.591150 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 05:52:58.592981 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:52:58.608927 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 05:52:58.611631 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 05:52:58.611802 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:58.623243 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:52:58.623368 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:52:58.625970 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 05:52:58.626077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:52:58.628818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 05:52:58.628921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:52:58.631886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:58.674385 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 05:52:58.674779 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:58.678427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 05:52:58.678512 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:52:58.682362 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 05:52:58.682431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:52:58.685496 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 05:52:58.685599 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:58.693550 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 05:52:58.693652 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:52:58.702942 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:58.703037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:58.726975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 05:52:58.732091 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 05:52:58.732224 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:52:58.742167 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:58.742274 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:58.746787 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 05:52:58.747855 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 05:52:58.761433 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 05:52:58.761625 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 05:52:58.766469 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 05:52:58.781889 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 05:52:58.797762 systemd[1]: Switching root.
Jul 7 05:52:58.852033 systemd-journald[250]: Journal stopped
Jul 7 05:53:01.505366 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Jul 7 05:53:01.505502 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 05:53:01.505546 kernel: SELinux: policy capability open_perms=1
Jul 7 05:53:01.505578 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 05:53:01.505609 kernel: SELinux: policy capability always_check_network=0
Jul 7 05:53:01.505641 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 05:53:01.505672 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 05:53:01.505736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 05:53:01.505776 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 05:53:01.505808 kernel: audit: type=1403 audit(1751867579.408:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 05:53:01.505840 systemd[1]: Successfully loaded SELinux policy in 85.068ms.
Jul 7 05:53:01.505895 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.755ms.
Jul 7 05:53:01.505932 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:53:01.505965 systemd[1]: Detected virtualization amazon.
Jul 7 05:53:01.505995 systemd[1]: Detected architecture arm64.
Jul 7 05:53:01.506029 systemd[1]: Detected first boot.
Jul 7 05:53:01.506061 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 05:53:01.506094 zram_generator::config[1482]: No configuration found.
Jul 7 05:53:01.506129 systemd[1]: Populated /etc with preset unit settings.
Jul 7 05:53:01.506172 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 05:53:01.506204 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 05:53:01.506237 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 05:53:01.506277 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 05:53:01.506315 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 05:53:01.506345 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 05:53:01.506374 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 05:53:01.506405 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 05:53:01.506437 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 05:53:01.506469 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 05:53:01.506498 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 05:53:01.506528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:53:01.506558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:53:01.506592 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 05:53:01.506621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 05:53:01.506653 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 05:53:01.507616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:53:01.507660 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 05:53:01.507717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:53:01.507753 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 05:53:01.507786 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 05:53:01.507819 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:53:01.507858 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 05:53:01.507914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:53:01.507954 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:53:01.507990 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:53:01.508024 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:53:01.508057 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 05:53:01.508089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 05:53:01.508120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:53:01.508157 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:53:01.508191 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:53:01.508223 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 05:53:01.508253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 05:53:01.508287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 05:53:01.508320 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 05:53:01.508353 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 05:53:01.508383 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 05:53:01.508415 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 05:53:01.508452 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 05:53:01.508487 systemd[1]: Reached target machines.target - Containers.
Jul 7 05:53:01.508516 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 05:53:01.508546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:53:01.508580 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:53:01.508610 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 05:53:01.508643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:53:01.509527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:53:01.509608 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:53:01.509643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 05:53:01.509673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:53:01.509754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 05:53:01.509790 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 05:53:01.509825 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 05:53:01.509856 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 05:53:01.509887 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 05:53:01.509917 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:53:01.509954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:53:01.509985 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 05:53:01.510016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 05:53:01.510048 kernel: loop: module loaded
Jul 7 05:53:01.510083 kernel: fuse: init (API version 7.39)
Jul 7 05:53:01.510115 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:53:01.510150 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 05:53:01.510185 systemd[1]: Stopped verity-setup.service.
Jul 7 05:53:01.510219 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 05:53:01.510257 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 05:53:01.510290 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 05:53:01.510321 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 05:53:01.510350 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 05:53:01.510381 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 05:53:01.510411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:53:01.510445 kernel: ACPI: bus type drm_connector registered
Jul 7 05:53:01.510475 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 05:53:01.510507 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 05:53:01.510539 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:53:01.510568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:53:01.510598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:53:01.510627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:53:01.510662 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:53:01.511554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:53:01.511607 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 05:53:01.511637 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 05:53:01.511668 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:53:01.511731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:53:01.511777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:53:01.511808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 05:53:01.511838 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 05:53:01.511868 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 05:53:01.511912 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 05:53:01.511949 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 05:53:01.511980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 05:53:01.512010 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:53:01.512046 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 05:53:01.512130 systemd-journald[1561]: Collecting audit messages is disabled.
Jul 7 05:53:01.512186 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 05:53:01.512222 systemd-journald[1561]: Journal started
Jul 7 05:53:01.512271 systemd-journald[1561]: Runtime Journal (/run/log/journal/ec2456ad375d3ca03bc0ddcb0a3c1564) is 8.0M, max 75.3M, 67.3M free.
Jul 7 05:53:00.716288 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 05:53:00.796410 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 05:53:00.797263 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 05:53:01.532730 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 05:53:01.538726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:53:01.552036 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 05:53:01.552132 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:53:01.578504 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 05:53:01.578607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:53:01.600551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:53:01.610734 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 05:53:01.610835 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:53:01.620867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 05:53:01.625238 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 05:53:01.628350 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 05:53:01.631703 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 05:53:01.650262 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 05:53:01.694375 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 05:53:01.711098 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 05:53:01.716300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 05:53:01.723937 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 05:53:01.752402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:53:01.763729 kernel: loop0: detected capacity change from 0 to 52536
Jul 7 05:53:01.803857 systemd-journald[1561]: Time spent on flushing to /var/log/journal/ec2456ad375d3ca03bc0ddcb0a3c1564 is 96.124ms for 917 entries.
Jul 7 05:53:01.803857 systemd-journald[1561]: System Journal (/var/log/journal/ec2456ad375d3ca03bc0ddcb0a3c1564) is 8.0M, max 195.6M, 187.6M free.
Jul 7 05:53:01.920420 systemd-journald[1561]: Received client request to flush runtime journal.
Jul 7 05:53:01.920489 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 05:53:01.812669 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 05:53:01.817471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 05:53:01.879632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:53:01.900225 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 05:53:01.917426 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 05:53:01.928227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:53:01.932020 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 05:53:01.968283 kernel: loop1: detected capacity change from 0 to 114328
Jul 7 05:53:01.974520 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 7 05:53:02.046765 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Jul 7 05:53:02.047955 systemd-tmpfiles[1629]: ACLs are not supported, ignoring.
Jul 7 05:53:02.065798 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:53:02.092848 kernel: loop2: detected capacity change from 0 to 114432
Jul 7 05:53:02.212745 kernel: loop3: detected capacity change from 0 to 203944
Jul 7 05:53:02.453745 kernel: loop4: detected capacity change from 0 to 52536
Jul 7 05:53:02.467728 kernel: loop5: detected capacity change from 0 to 114328
Jul 7 05:53:02.478851 kernel: loop6: detected capacity change from 0 to 114432
Jul 7 05:53:02.492737 kernel: loop7: detected capacity change from 0 to 203944
Jul 7 05:53:02.518484 (sd-merge)[1637]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 7 05:53:02.519531 (sd-merge)[1637]: Merged extensions into '/usr'.
Jul 7 05:53:02.529971 systemd[1]: Reloading requested from client PID 1593 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 05:53:02.530497 systemd[1]: Reloading...
Jul 7 05:53:02.742310 zram_generator::config[1666]: No configuration found.
Jul 7 05:53:03.022127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:53:03.137265 systemd[1]: Reloading finished in 605 ms.
Jul 7 05:53:03.176193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 05:53:03.181771 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 05:53:03.200049 systemd[1]: Starting ensure-sysext.service...
Jul 7 05:53:03.211148 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:53:03.218772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:53:03.241878 systemd[1]: Reloading requested from client PID 1715 ('systemctl') (unit ensure-sysext.service)...
Jul 7 05:53:03.241910 systemd[1]: Reloading...
Jul 7 05:53:03.284412 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 05:53:03.286584 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 05:53:03.289470 systemd-tmpfiles[1716]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 05:53:03.290260 systemd-tmpfiles[1716]: ACLs are not supported, ignoring.
Jul 7 05:53:03.290412 systemd-tmpfiles[1716]: ACLs are not supported, ignoring.
Jul 7 05:53:03.309464 systemd-tmpfiles[1716]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:53:03.309493 systemd-tmpfiles[1716]: Skipping /boot
Jul 7 05:53:03.348385 systemd-tmpfiles[1716]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:53:03.348418 systemd-tmpfiles[1716]: Skipping /boot
Jul 7 05:53:03.367427 systemd-udevd[1717]: Using default interface naming scheme 'v255'.
Jul 7 05:53:03.404743 ldconfig[1589]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 05:53:03.481717 zram_generator::config[1755]: No configuration found.
Jul 7 05:53:03.625530 (udev-worker)[1781]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:53:03.874630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1778)
Jul 7 05:53:03.937362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:53:04.131610 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 05:53:04.134826 systemd[1]: Reloading finished in 892 ms.
Jul 7 05:53:04.187451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:53:04.195638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 05:53:04.209818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:53:04.281812 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 05:53:04.325277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 05:53:04.361227 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:53:04.373484 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 05:53:04.376535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:53:04.381278 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 05:53:04.398273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:53:04.406270 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:53:04.413342 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:53:04.423173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:53:04.426056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:53:04.432284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 05:53:04.445753 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 05:53:04.465480 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:53:04.479276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:53:04.482243 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 05:53:04.492285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 05:53:04.499239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:53:04.511641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:53:04.513837 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:53:04.525074 systemd[1]: Finished ensure-sysext.service.
Jul 7 05:53:04.529395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:53:04.530935 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:53:04.534891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:53:04.535258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:53:04.563524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:53:04.564956 lvm[1916]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:53:04.574085 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 05:53:04.600500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:53:04.601404 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:53:04.605129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:53:04.651433 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 05:53:04.676431 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 05:53:04.694419 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 05:53:04.705818 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 05:53:04.709311 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:53:04.722494 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 05:53:04.731496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 05:53:04.744837 augenrules[1956]: No rules
Jul 7 05:53:04.747944 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:53:04.783983 lvm[1952]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:53:04.794910 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 05:53:04.830599 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 05:53:04.833593 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 05:53:04.844018 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 05:53:04.854105 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 05:53:04.911896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:53:05.008474 systemd-resolved[1930]: Positive Trust Anchors:
Jul 7 05:53:05.008511 systemd-resolved[1930]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:53:05.008576 systemd-resolved[1930]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:53:05.009623 systemd-networkd[1927]: lo: Link UP
Jul 7 05:53:05.009632 systemd-networkd[1927]: lo: Gained carrier
Jul 7 05:53:05.012899 systemd-networkd[1927]: Enumeration completed
Jul 7 05:53:05.013133 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:53:05.016628 systemd-networkd[1927]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:53:05.016637 systemd-networkd[1927]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:53:05.024760 systemd-networkd[1927]: eth0: Link UP
Jul 7 05:53:05.025005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 05:53:05.033135 systemd-networkd[1927]: eth0: Gained carrier
Jul 7 05:53:05.033197 systemd-networkd[1927]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:53:05.034799 systemd-resolved[1930]: Defaulting to hostname 'linux'.
Jul 7 05:53:05.039375 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:53:05.043026 systemd[1]: Reached target network.target - Network.
Jul 7 05:53:05.045916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:53:05.048909 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:53:05.053081 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 05:53:05.056100 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 05:53:05.059156 systemd-networkd[1927]: eth0: DHCPv4 address 172.31.23.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 05:53:05.060240 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 05:53:05.064224 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 05:53:05.067904 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 05:53:05.070886 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 05:53:05.070949 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:53:05.073575 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:53:05.076751 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 05:53:05.082381 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 05:53:05.096811 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 05:53:05.100636 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:53:05.103583 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:53:05.106194 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:53:05.108803 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:53:05.108876 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:53:05.119940 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:53:05.129218 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 05:53:05.140113 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:53:05.148018 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:53:05.166620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:53:05.169124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:53:05.174407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:53:05.185538 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 05:53:05.203127 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 05:53:05.206910 jq[1982]: false Jul 7 05:53:05.212962 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 05:53:05.218350 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 05:53:05.227097 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:53:05.242021 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 7 05:53:05.247820 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:53:05.250193 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:53:05.256956 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 05:53:05.268002 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:53:05.279656 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:53:05.280199 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:53:05.336595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:53:05.349966 extend-filesystems[1983]: Found loop4 Jul 7 05:53:05.349966 extend-filesystems[1983]: Found loop5 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found loop6 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found loop7 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p1 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p2 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p3 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found usr Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p4 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p6 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p7 Jul 7 05:53:05.359923 extend-filesystems[1983]: Found nvme0n1p9 Jul 7 05:53:05.359923 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:53:05.438291 
ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: ---------------------------------------------------- Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: corporation. Support and training for ntp-4 are Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: available at https://www.nwtime.org/support Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: ---------------------------------------------------- Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: proto: precision = 0.108 usec (-23) Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: basedate set to 2025-06-24 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: gps base set to 2025-06-29 (week 2373) Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listen normally on 3 eth0 172.31.23.146:123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: bind(21) AF_INET6 fe80::470:36ff:fe67:9a1f%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: unable to create socket on eth0 (5) for fe80::470:36ff:fe67:9a1f%2#123 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: failed to init interface for address fe80::470:36ff:fe67:9a1f%2 Jul 7 05:53:05.438291 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jul 7 05:53:05.439610 
jq[1993]: true Jul 7 05:53:05.389359 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:53:05.443482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:53:05.389411 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:53:05.489988 tar[1997]: linux-arm64/helm Jul 7 05:53:05.490445 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:05.490445 ntpd[1985]: 7 Jul 05:53:05 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:05.443914 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:53:05.389432 ntpd[1985]: ---------------------------------------------------- Jul 7 05:53:05.509391 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Jul 7 05:53:05.481655 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:53:05.389453 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:53:05.492215 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 05:53:05.389473 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:53:05.492333 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:53:05.389492 ntpd[1985]: corporation. Support and training for ntp-4 are Jul 7 05:53:05.497344 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:53:05.389516 ntpd[1985]: available at https://www.nwtime.org/support Jul 7 05:53:05.497390 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 7 05:53:05.389536 ntpd[1985]: ---------------------------------------------------- Jul 7 05:53:05.546242 extend-filesystems[2021]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:53:05.563889 update_engine[1992]: I20250707 05:53:05.539074 1992 main.cc:92] Flatcar Update Engine starting Jul 7 05:53:05.402502 ntpd[1985]: proto: precision = 0.108 usec (-23) Jul 7 05:53:05.550495 (ntainerd)[2025]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:53:05.419058 ntpd[1985]: basedate set to 2025-06-24 Jul 7 05:53:05.551522 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:53:05.419094 ntpd[1985]: gps base set to 2025-06-29 (week 2373) Jul 7 05:53:05.591522 update_engine[1992]: I20250707 05:53:05.591251 1992 update_check_scheduler.cc:74] Next update check in 11m14s Jul 7 05:53:05.554473 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:53:05.425947 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:53:05.588214 systemd[1]: Started update-engine.service - Update Engine. 
Jul 7 05:53:05.426035 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:53:05.429428 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:53:05.429510 ntpd[1985]: Listen normally on 3 eth0 172.31.23.146:123 Jul 7 05:53:05.429582 ntpd[1985]: Listen normally on 4 lo [::1]:123 Jul 7 05:53:05.429668 ntpd[1985]: bind(21) AF_INET6 fe80::470:36ff:fe67:9a1f%2#123 flags 0x11 failed: Cannot assign requested address Jul 7 05:53:05.432804 ntpd[1985]: unable to create socket on eth0 (5) for fe80::470:36ff:fe67:9a1f%2#123 Jul 7 05:53:05.432841 ntpd[1985]: failed to init interface for address fe80::470:36ff:fe67:9a1f%2 Jul 7 05:53:05.432919 ntpd[1985]: Listening on routing socket on fd #21 for interface updates Jul 7 05:53:05.463773 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:05.463830 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:05.477095 dbus-daemon[1981]: [system] SELinux support is enabled Jul 7 05:53:05.525909 dbus-daemon[1981]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1927 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 05:53:05.593466 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:53:05.603873 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 05:53:05.547343 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 05:53:05.604031 jq[2013]: true Jul 7 05:53:05.593506 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 7 05:53:05.594381 systemd-logind[1991]: New seat seat0. Jul 7 05:53:05.611006 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 05:53:05.638466 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 7 05:53:05.641899 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:53:05.726621 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 05:53:05.736622 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 05:53:05.751216 extend-filesystems[2021]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 05:53:05.751216 extend-filesystems[2021]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 05:53:05.751216 extend-filesystems[2021]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 7 05:53:05.782180 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Jul 7 05:53:05.764255 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:53:05.764742 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:53:05.920142 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:53:05.929445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:53:05.991484 coreos-metadata[1980]: Jul 07 05:53:05.991 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:53:06.060989 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1767) Jul 7 05:53:06.054297 systemd[1]: Starting sshkeys.service... 
Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:05.997 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.002 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.012 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.017 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.022 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.028 INFO Fetch failed with 404: resource not found Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.028 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.030 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.031 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.042 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.042 INFO 
Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.045 INFO Fetch successful Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.045 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 05:53:06.061641 coreos-metadata[1980]: Jul 07 05:53:06.048 INFO Fetch successful Jul 7 05:53:06.129814 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 05:53:06.138796 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 05:53:06.173602 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 05:53:06.177302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:53:06.182354 systemd-networkd[1927]: eth0: Gained IPv6LL Jul 7 05:53:06.194266 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 05:53:06.205948 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:53:06.235978 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 05:53:06.244186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:06.251004 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:53:06.285491 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 05:53:06.285840 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 05:53:06.287378 dbus-daemon[1981]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 05:53:06.317726 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 7 05:53:06.377568 polkitd[2127]: Started polkitd version 121 Jul 7 05:53:06.407850 polkitd[2127]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 05:53:06.408027 polkitd[2127]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 05:53:06.415253 polkitd[2127]: Finished loading, compiling and executing 2 rules Jul 7 05:53:06.417669 dbus-daemon[1981]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 05:53:06.420934 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 05:53:06.430643 polkitd[2127]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 05:53:06.544771 amazon-ssm-agent[2112]: Initializing new seelog logger Jul 7 05:53:06.544771 amazon-ssm-agent[2112]: New Seelog Logger Creation Complete Jul 7 05:53:06.544771 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.544771 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 processing appconfig overrides Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 processing appconfig overrides Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 processing appconfig overrides Jul 7 05:53:06.549868 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO Proxy environment variables: Jul 7 05:53:06.558442 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.558442 amazon-ssm-agent[2112]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:06.558442 amazon-ssm-agent[2112]: 2025/07/07 05:53:06 processing appconfig overrides Jul 7 05:53:06.558651 coreos-metadata[2104]: Jul 07 05:53:06.557 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:53:06.570510 coreos-metadata[2104]: Jul 07 05:53:06.562 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 05:53:06.570510 coreos-metadata[2104]: Jul 07 05:53:06.565 INFO Fetch successful Jul 7 05:53:06.570510 coreos-metadata[2104]: Jul 07 05:53:06.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 05:53:06.573653 coreos-metadata[2104]: Jul 07 05:53:06.572 INFO Fetch successful Jul 7 05:53:06.571839 systemd-hostnamed[2030]: Hostname set to (transient) Jul 7 05:53:06.578628 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:53:06.586657 systemd-resolved[1930]: System hostname changed to 'ip-172-31-23-146'. Jul 7 05:53:06.597847 unknown[2104]: wrote ssh authorized keys file for user: core Jul 7 05:53:06.650799 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO https_proxy: Jul 7 05:53:06.661613 update-ssh-keys[2177]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:53:06.666843 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 05:53:06.679150 systemd[1]: Finished sshkeys.service. 
Jul 7 05:53:06.755083 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO http_proxy: Jul 7 05:53:06.857816 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO no_proxy: Jul 7 05:53:06.859518 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:53:06.960946 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO Checking if agent identity type OnPrem can be assumed Jul 7 05:53:07.047499 containerd[2025]: time="2025-07-07T05:53:07.047367324Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:53:07.059848 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO Checking if agent identity type EC2 can be assumed Jul 7 05:53:07.160020 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO Agent will take identity from EC2 Jul 7 05:53:07.237200 containerd[2025]: time="2025-07-07T05:53:07.236310841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.245086 containerd[2025]: time="2025-07-07T05:53:07.244970329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:07.245086 containerd[2025]: time="2025-07-07T05:53:07.245067325Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:53:07.245313 containerd[2025]: time="2025-07-07T05:53:07.245121793Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:53:07.245849 containerd[2025]: time="2025-07-07T05:53:07.245507809Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jul 7 05:53:07.245849 containerd[2025]: time="2025-07-07T05:53:07.245598433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.247941433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248022073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248434741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248486533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248535217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248569417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.248817889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.249313297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.249573385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:07.249724 containerd[2025]: time="2025-07-07T05:53:07.249612889Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:53:07.252733 containerd[2025]: time="2025-07-07T05:53:07.252000301Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 05:53:07.252733 containerd[2025]: time="2025-07-07T05:53:07.252216457Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:53:07.259133 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259296577Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259412449Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259466281Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259504057Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259537981Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.259895617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.260319289Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.260620957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.260665933Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.263788381Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.263833777Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.263905081Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.263940445Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.264307 containerd[2025]: time="2025-07-07T05:53:07.263974441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264007669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264042313Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264075469Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264108997Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264153061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264187573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264216877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264247729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264276805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264308425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264337489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264368245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264415621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265126 containerd[2025]: time="2025-07-07T05:53:07.264454525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264484549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264515641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264554605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264601645Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264656053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264722341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.264756397Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265011145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265065673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265094449Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265127017Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265152121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265182493Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:53:07.265949 containerd[2025]: time="2025-07-07T05:53:07.265206913Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:53:07.266632 containerd[2025]: time="2025-07-07T05:53:07.265235845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 05:53:07.270744 containerd[2025]: time="2025-07-07T05:53:07.268029217Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:53:07.270744 containerd[2025]: time="2025-07-07T05:53:07.268179973Z" level=info msg="Connect containerd service" Jul 7 05:53:07.270744 containerd[2025]: time="2025-07-07T05:53:07.268420285Z" level=info msg="using legacy CRI server" Jul 7 05:53:07.270744 containerd[2025]: time="2025-07-07T05:53:07.268452769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:53:07.270744 containerd[2025]: time="2025-07-07T05:53:07.268633549Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:53:07.274823 containerd[2025]: time="2025-07-07T05:53:07.274467253Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276331501Z" level=info msg="Start subscribing containerd event" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276439585Z" level=info msg="Start recovering state" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276584149Z" level=info msg="Start event monitor" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276613777Z" level=info msg="Start snapshots 
syncer" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276637801Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:53:07.276744 containerd[2025]: time="2025-07-07T05:53:07.276657217Z" level=info msg="Start streaming server" Jul 7 05:53:07.284541 containerd[2025]: time="2025-07-07T05:53:07.280045657Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:53:07.284541 containerd[2025]: time="2025-07-07T05:53:07.280197457Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:53:07.284541 containerd[2025]: time="2025-07-07T05:53:07.280318261Z" level=info msg="containerd successfully booted in 0.239931s" Jul 7 05:53:07.280892 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:53:07.357293 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:07.456669 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:07.556115 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 7 05:53:07.656156 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 7 05:53:07.756598 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 05:53:07.857105 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 7 05:53:07.880248 tar[1997]: linux-arm64/LICENSE Jul 7 05:53:07.880248 tar[1997]: linux-arm64/README.md Jul 7 05:53:07.941427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 7 05:53:07.960198 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [Registrar] Starting registrar module Jul 7 05:53:08.060088 amazon-ssm-agent[2112]: 2025-07-07 05:53:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 7 05:53:08.215321 sshd_keygen[2031]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:53:08.266367 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:53:08.283902 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:53:08.295235 systemd[1]: Started sshd@0-172.31.23.146:22-139.178.89.65:36752.service - OpenSSH per-connection server daemon (139.178.89.65:36752). Jul 7 05:53:08.322175 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:53:08.322637 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:53:08.335238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:53:08.391914 ntpd[1985]: Listen normally on 6 eth0 [fe80::470:36ff:fe67:9a1f%2]:123 Jul 7 05:53:08.392396 ntpd[1985]: 7 Jul 05:53:08 ntpd[1985]: Listen normally on 6 eth0 [fe80::470:36ff:fe67:9a1f%2]:123 Jul 7 05:53:08.395426 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:53:08.410387 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:53:08.422937 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 05:53:08.428511 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:53:08.563996 amazon-ssm-agent[2112]: 2025-07-07 05:53:08 INFO [EC2Identity] EC2 registration was successful. 
Jul 7 05:53:08.584641 sshd[2217]: Accepted publickey for core from 139.178.89.65 port 36752 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:08.590466 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:08.594345 amazon-ssm-agent[2112]: 2025-07-07 05:53:08 INFO [CredentialRefresher] credentialRefresher has started Jul 7 05:53:08.594345 amazon-ssm-agent[2112]: 2025-07-07 05:53:08 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 05:53:08.594345 amazon-ssm-agent[2112]: 2025-07-07 05:53:08 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 05:53:08.613266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:53:08.630990 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:53:08.639845 systemd-logind[1991]: New session 1 of user core. Jul 7 05:53:08.663856 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:53:08.669864 amazon-ssm-agent[2112]: 2025-07-07 05:53:08 INFO [CredentialRefresher] Next credential rotation will be in 30.0749854091 minutes Jul 7 05:53:08.679550 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:53:08.696398 (systemd)[2229]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:53:08.907213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:08.912439 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:53:08.926887 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:08.963409 systemd[2229]: Queued start job for default target default.target. Jul 7 05:53:08.970005 systemd[2229]: Created slice app.slice - User Application Slice. 
Jul 7 05:53:08.970080 systemd[2229]: Reached target paths.target - Paths. Jul 7 05:53:08.970114 systemd[2229]: Reached target timers.target - Timers. Jul 7 05:53:08.973143 systemd[2229]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:53:09.021306 systemd[2229]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:53:09.021563 systemd[2229]: Reached target sockets.target - Sockets. Jul 7 05:53:09.021629 systemd[2229]: Reached target basic.target - Basic System. Jul 7 05:53:09.021780 systemd[2229]: Reached target default.target - Main User Target. Jul 7 05:53:09.021858 systemd[2229]: Startup finished in 306ms. Jul 7 05:53:09.022195 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:53:09.033025 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:53:09.036132 systemd[1]: Startup finished in 1.171s (kernel) + 10.562s (initrd) + 9.712s (userspace) = 21.446s. Jul 7 05:53:09.209284 systemd[1]: Started sshd@1-172.31.23.146:22-139.178.89.65:36756.service - OpenSSH per-connection server daemon (139.178.89.65:36756). Jul 7 05:53:09.401173 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 36756 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:09.404159 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:09.414558 systemd-logind[1991]: New session 2 of user core. Jul 7 05:53:09.418976 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 05:53:09.548351 sshd[2254]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:09.555507 systemd[1]: sshd@1-172.31.23.146:22-139.178.89.65:36756.service: Deactivated successfully. Jul 7 05:53:09.556495 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit. Jul 7 05:53:09.560121 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 05:53:09.563554 systemd-logind[1991]: Removed session 2. 
Jul 7 05:53:09.590517 systemd[1]: Started sshd@2-172.31.23.146:22-139.178.89.65:36768.service - OpenSSH per-connection server daemon (139.178.89.65:36768). Jul 7 05:53:09.639390 amazon-ssm-agent[2112]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 05:53:09.739823 amazon-ssm-agent[2112]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2264) started Jul 7 05:53:09.787405 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 36768 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:09.794175 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:09.814127 systemd-logind[1991]: New session 3 of user core. Jul 7 05:53:09.820174 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:53:09.841748 amazon-ssm-agent[2112]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 05:53:09.951625 sshd[2261]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:09.959555 systemd[1]: sshd@2-172.31.23.146:22-139.178.89.65:36768.service: Deactivated successfully. Jul 7 05:53:09.965952 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 05:53:09.968419 systemd-logind[1991]: Session 3 logged out. Waiting for processes to exit. Jul 7 05:53:09.972815 systemd-logind[1991]: Removed session 3. Jul 7 05:53:09.992329 systemd[1]: Started sshd@3-172.31.23.146:22-139.178.89.65:57150.service - OpenSSH per-connection server daemon (139.178.89.65:57150). 
Jul 7 05:53:10.110780 kubelet[2240]: E0707 05:53:10.110584 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:10.117093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:10.117457 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:10.118184 systemd[1]: kubelet.service: Consumed 1.525s CPU time. Jul 7 05:53:10.164454 sshd[2280]: Accepted publickey for core from 139.178.89.65 port 57150 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:10.167308 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:10.177556 systemd-logind[1991]: New session 4 of user core. Jul 7 05:53:10.184049 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:53:10.312724 sshd[2280]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:10.319014 systemd[1]: sshd@3-172.31.23.146:22-139.178.89.65:57150.service: Deactivated successfully. Jul 7 05:53:10.324166 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:53:10.325664 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:53:10.328232 systemd-logind[1991]: Removed session 4. Jul 7 05:53:10.353216 systemd[1]: Started sshd@4-172.31.23.146:22-139.178.89.65:57154.service - OpenSSH per-connection server daemon (139.178.89.65:57154). Jul 7 05:53:10.523560 sshd[2289]: Accepted publickey for core from 139.178.89.65 port 57154 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:10.525880 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:10.533956 systemd-logind[1991]: New session 5 of user core. 
Jul 7 05:53:10.543948 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 05:53:10.712074 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 05:53:10.712757 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:10.727314 sudo[2292]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:10.751198 sshd[2289]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:10.758477 systemd[1]: sshd@4-172.31.23.146:22-139.178.89.65:57154.service: Deactivated successfully. Jul 7 05:53:10.761649 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:53:10.764018 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:53:10.765939 systemd-logind[1991]: Removed session 5. Jul 7 05:53:10.789234 systemd[1]: Started sshd@5-172.31.23.146:22-139.178.89.65:57160.service - OpenSSH per-connection server daemon (139.178.89.65:57160). Jul 7 05:53:10.963380 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 57160 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:10.966088 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:10.973769 systemd-logind[1991]: New session 6 of user core. Jul 7 05:53:10.982942 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 05:53:11.088531 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 05:53:11.089234 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:11.095435 sudo[2301]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:11.105755 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 05:53:11.106379 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:11.132201 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 05:53:11.135712 auditctl[2304]: No rules Jul 7 05:53:11.136430 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 05:53:11.136834 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:11.148565 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:53:11.191711 augenrules[2322]: No rules Jul 7 05:53:11.195806 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:11.199056 sudo[2300]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:11.222752 sshd[2297]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:11.230555 systemd[1]: sshd@5-172.31.23.146:22-139.178.89.65:57160.service: Deactivated successfully. Jul 7 05:53:11.234211 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 05:53:11.236977 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit. Jul 7 05:53:11.239377 systemd-logind[1991]: Removed session 6. Jul 7 05:53:11.262258 systemd[1]: Started sshd@6-172.31.23.146:22-139.178.89.65:57176.service - OpenSSH per-connection server daemon (139.178.89.65:57176). 
Jul 7 05:53:11.451438 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 57176 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:11.453472 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:11.463316 systemd-logind[1991]: New session 7 of user core. Jul 7 05:53:11.472996 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 05:53:11.580091 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 05:53:11.580745 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:12.248176 systemd-resolved[1930]: Clock change detected. Flushing caches. Jul 7 05:53:12.367806 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 05:53:12.368883 (dockerd)[2348]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 05:53:12.890497 dockerd[2348]: time="2025-07-07T05:53:12.890368391Z" level=info msg="Starting up" Jul 7 05:53:13.198464 dockerd[2348]: time="2025-07-07T05:53:13.197946656Z" level=info msg="Loading containers: start." Jul 7 05:53:13.395381 kernel: Initializing XFRM netlink socket Jul 7 05:53:13.438063 (udev-worker)[2371]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:53:13.532984 systemd-networkd[1927]: docker0: Link UP Jul 7 05:53:13.558810 dockerd[2348]: time="2025-07-07T05:53:13.558737254Z" level=info msg="Loading containers: done." Jul 7 05:53:13.581475 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1267113490-merged.mount: Deactivated successfully. 
Jul 7 05:53:13.584433 dockerd[2348]: time="2025-07-07T05:53:13.584341342Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 05:53:13.584633 dockerd[2348]: time="2025-07-07T05:53:13.584513938Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 05:53:13.584797 dockerd[2348]: time="2025-07-07T05:53:13.584716714Z" level=info msg="Daemon has completed initialization" Jul 7 05:53:13.642620 dockerd[2348]: time="2025-07-07T05:53:13.641380582Z" level=info msg="API listen on /run/docker.sock" Jul 7 05:53:13.643740 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 05:53:14.774188 containerd[2025]: time="2025-07-07T05:53:14.773836428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 05:53:15.341176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598260994.mount: Deactivated successfully. 
Jul 7 05:53:16.735170 containerd[2025]: time="2025-07-07T05:53:16.735086522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:16.737063 containerd[2025]: time="2025-07-07T05:53:16.737003186Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 7 05:53:16.739779 containerd[2025]: time="2025-07-07T05:53:16.739696562Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:16.745709 containerd[2025]: time="2025-07-07T05:53:16.745631570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:16.748332 containerd[2025]: time="2025-07-07T05:53:16.747870506Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.973976166s" Jul 7 05:53:16.748332 containerd[2025]: time="2025-07-07T05:53:16.747940466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 7 05:53:16.750593 containerd[2025]: time="2025-07-07T05:53:16.750533018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 05:53:18.108659 containerd[2025]: time="2025-07-07T05:53:18.108557257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:18.110759 containerd[2025]: time="2025-07-07T05:53:18.110687365Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 7 05:53:18.111625 containerd[2025]: time="2025-07-07T05:53:18.111086857Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:18.116866 containerd[2025]: time="2025-07-07T05:53:18.116811565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:18.119414 containerd[2025]: time="2025-07-07T05:53:18.119140801Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.368544159s" Jul 7 05:53:18.119414 containerd[2025]: time="2025-07-07T05:53:18.119206021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 7 05:53:18.121493 containerd[2025]: time="2025-07-07T05:53:18.121289209Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 05:53:19.254334 containerd[2025]: time="2025-07-07T05:53:19.253107842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:19.256627 containerd[2025]: time="2025-07-07T05:53:19.256581698Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 7 05:53:19.257618 containerd[2025]: time="2025-07-07T05:53:19.257558906Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:19.262934 containerd[2025]: time="2025-07-07T05:53:19.262868522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:19.265528 containerd[2025]: time="2025-07-07T05:53:19.265466282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.144075733s" Jul 7 05:53:19.265647 containerd[2025]: time="2025-07-07T05:53:19.265525766Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 7 05:53:19.266675 containerd[2025]: time="2025-07-07T05:53:19.266618066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 05:53:20.148648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 05:53:20.156678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:20.609746 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 05:53:20.622267 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:20.696080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927396193.mount: Deactivated successfully. Jul 7 05:53:20.715409 kubelet[2561]: E0707 05:53:20.715321 2561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:20.724932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:20.725255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:21.254458 containerd[2025]: time="2025-07-07T05:53:21.254399524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:21.257365 containerd[2025]: time="2025-07-07T05:53:21.257288920Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 7 05:53:21.259627 containerd[2025]: time="2025-07-07T05:53:21.259560040Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:21.271369 containerd[2025]: time="2025-07-07T05:53:21.269931172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:21.273132 containerd[2025]: time="2025-07-07T05:53:21.272272300Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id 
\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 2.005590238s" Jul 7 05:53:21.273278 containerd[2025]: time="2025-07-07T05:53:21.273128296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 05:53:21.274081 containerd[2025]: time="2025-07-07T05:53:21.274035628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 05:53:21.851720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891538585.mount: Deactivated successfully. Jul 7 05:53:23.099693 containerd[2025]: time="2025-07-07T05:53:23.099594677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.102264 containerd[2025]: time="2025-07-07T05:53:23.102181433Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 7 05:53:23.104481 containerd[2025]: time="2025-07-07T05:53:23.104366621Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.111507 containerd[2025]: time="2025-07-07T05:53:23.111414413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.114358 containerd[2025]: time="2025-07-07T05:53:23.114048605Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo 
tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.839820545s" Jul 7 05:53:23.114358 containerd[2025]: time="2025-07-07T05:53:23.114127349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 05:53:23.115384 containerd[2025]: time="2025-07-07T05:53:23.115074101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 05:53:23.846989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985146133.mount: Deactivated successfully. Jul 7 05:53:23.860364 containerd[2025]: time="2025-07-07T05:53:23.859855089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.862060 containerd[2025]: time="2025-07-07T05:53:23.861656433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 7 05:53:23.864362 containerd[2025]: time="2025-07-07T05:53:23.864261813Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.869434 containerd[2025]: time="2025-07-07T05:53:23.869311101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:23.871384 containerd[2025]: time="2025-07-07T05:53:23.870999621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 755.863168ms" Jul 7 05:53:23.871384 containerd[2025]: time="2025-07-07T05:53:23.871061361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 05:53:23.872222 containerd[2025]: time="2025-07-07T05:53:23.871856577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 05:53:24.473762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057384594.mount: Deactivated successfully. Jul 7 05:53:26.965315 containerd[2025]: time="2025-07-07T05:53:26.965217601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:26.991734 containerd[2025]: time="2025-07-07T05:53:26.991651645Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 7 05:53:27.032940 containerd[2025]: time="2025-07-07T05:53:27.032841993Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:27.043321 containerd[2025]: time="2025-07-07T05:53:27.042649593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:27.045898 containerd[2025]: time="2025-07-07T05:53:27.045829101Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.173914864s" Jul 7 05:53:27.045992 
containerd[2025]: time="2025-07-07T05:53:27.045892833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 7 05:53:30.898195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 05:53:30.905820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:31.255700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:31.265960 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:31.347369 kubelet[2711]: E0707 05:53:31.347260 2711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:31.351769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:31.352114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:34.267592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:34.275863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:34.335438 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-7.scope)... Jul 7 05:53:34.335636 systemd[1]: Reloading... Jul 7 05:53:34.596616 zram_generator::config[2775]: No configuration found. Jul 7 05:53:34.816525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:34.990850 systemd[1]: Reloading finished in 654 ms. 
Jul 7 05:53:35.070575 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 05:53:35.070900 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 05:53:35.071765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:35.082962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:35.400809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:35.422854 (kubelet)[2828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:35.495661 kubelet[2828]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:35.495661 kubelet[2828]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:53:35.495661 kubelet[2828]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:35.496210 kubelet[2828]: I0707 05:53:35.495761 2828 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:36.466289 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 7 05:53:37.447584 kubelet[2828]: I0707 05:53:37.447517 2828 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:37.447584 kubelet[2828]: I0707 05:53:37.447571 2828 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:37.448196 kubelet[2828]: I0707 05:53:37.447994 2828 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:37.498015 kubelet[2828]: E0707 05:53:37.497953 2828 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:37.502343 kubelet[2828]: I0707 05:53:37.502251 2828 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:37.517089 kubelet[2828]: E0707 05:53:37.517042 2828 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:37.517385 kubelet[2828]: I0707 05:53:37.517362 2828 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:37.524350 kubelet[2828]: I0707 05:53:37.524313 2828 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:53:37.525241 kubelet[2828]: I0707 05:53:37.525219 2828 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:37.525711 kubelet[2828]: I0707 05:53:37.525665 2828 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:37.526413 kubelet[2828]: I0707 05:53:37.525809 2828 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Jul 7 05:53:37.526413 kubelet[2828]: I0707 05:53:37.526219 2828 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:37.526413 kubelet[2828]: I0707 05:53:37.526240 2828 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:37.526875 kubelet[2828]: I0707 05:53:37.526853 2828 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:37.532409 kubelet[2828]: I0707 05:53:37.532373 2828 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:53:37.533032 kubelet[2828]: I0707 05:53:37.532580 2828 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:37.533032 kubelet[2828]: I0707 05:53:37.532621 2828 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:37.533032 kubelet[2828]: I0707 05:53:37.532790 2828 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:37.535900 kubelet[2828]: W0707 05:53:37.535663 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-146&limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:37.536094 kubelet[2828]: E0707 05:53:37.535906 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-146&limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:37.542357 kubelet[2828]: I0707 05:53:37.541925 2828 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:37.543435 kubelet[2828]: I0707 05:53:37.543406 2828 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Jul 7 05:53:37.543626 kubelet[2828]: W0707 05:53:37.543606 2828 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 05:53:37.546327 kubelet[2828]: I0707 05:53:37.545604 2828 server.go:1274] "Started kubelet" Jul 7 05:53:37.546327 kubelet[2828]: W0707 05:53:37.545803 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:37.546327 kubelet[2828]: E0707 05:53:37.545875 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:37.561205 kubelet[2828]: E0707 05:53:37.558220 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-146.184fe250fd504b15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-146,UID:ip-172-31-23-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-146,},FirstTimestamp:2025-07-07 05:53:37.545571093 +0000 UTC m=+2.116972703,LastTimestamp:2025-07-07 05:53:37.545571093 +0000 UTC m=+2.116972703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-146,}" Jul 7 05:53:37.564362 kubelet[2828]: I0707 
05:53:37.561513 2828 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:37.564362 kubelet[2828]: I0707 05:53:37.562091 2828 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:37.564362 kubelet[2828]: I0707 05:53:37.562372 2828 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:37.564362 kubelet[2828]: I0707 05:53:37.564337 2828 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:37.564974 kubelet[2828]: I0707 05:53:37.564946 2828 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:37.568132 kubelet[2828]: I0707 05:53:37.567999 2828 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:37.575079 kubelet[2828]: I0707 05:53:37.575021 2828 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:37.575877 kubelet[2828]: I0707 05:53:37.575829 2828 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:37.576747 kubelet[2828]: I0707 05:53:37.575288 2828 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:37.576955 kubelet[2828]: W0707 05:53:37.576883 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:37.577026 kubelet[2828]: E0707 05:53:37.576988 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:37.578913 
kubelet[2828]: E0707 05:53:37.578856 2828 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-146\" not found" Jul 7 05:53:37.579836 kubelet[2828]: E0707 05:53:37.579567 2828 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:37.579976 kubelet[2828]: I0707 05:53:37.579792 2828 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:37.580229 kubelet[2828]: I0707 05:53:37.580196 2828 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:37.582500 kubelet[2828]: E0707 05:53:37.582431 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-146?timeout=10s\": dial tcp 172.31.23.146:6443: connect: connection refused" interval="200ms" Jul 7 05:53:37.583893 kubelet[2828]: I0707 05:53:37.583853 2828 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:37.617849 kubelet[2828]: I0707 05:53:37.617798 2828 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:53:37.622128 kubelet[2828]: I0707 05:53:37.622086 2828 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:53:37.622747 kubelet[2828]: I0707 05:53:37.622400 2828 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:37.622747 kubelet[2828]: I0707 05:53:37.622443 2828 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:37.622747 kubelet[2828]: E0707 05:53:37.622515 2828 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:37.623717 kubelet[2828]: W0707 05:53:37.623649 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:37.623962 kubelet[2828]: E0707 05:53:37.623924 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:37.634896 kubelet[2828]: I0707 05:53:37.634862 2828 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:37.635120 kubelet[2828]: I0707 05:53:37.635089 2828 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:37.635382 kubelet[2828]: I0707 05:53:37.635281 2828 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:37.643489 kubelet[2828]: I0707 05:53:37.643413 2828 policy_none.go:49] "None policy: Start" Jul 7 05:53:37.644820 kubelet[2828]: I0707 05:53:37.644773 2828 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:37.644820 kubelet[2828]: I0707 05:53:37.644825 2828 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:37.658042 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jul 7 05:53:37.677939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 05:53:37.679998 kubelet[2828]: E0707 05:53:37.679957 2828 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-146\" not found" Jul 7 05:53:37.685191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 05:53:37.696867 kubelet[2828]: I0707 05:53:37.695896 2828 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:37.696867 kubelet[2828]: I0707 05:53:37.696190 2828 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:37.696867 kubelet[2828]: I0707 05:53:37.696243 2828 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:37.696867 kubelet[2828]: I0707 05:53:37.696684 2828 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:37.702373 kubelet[2828]: E0707 05:53:37.700016 2828 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-146\" not found" Jul 7 05:53:37.741358 systemd[1]: Created slice kubepods-burstable-podb6e413716f66db15fa2ab515c1d1b9ba.slice - libcontainer container kubepods-burstable-podb6e413716f66db15fa2ab515c1d1b9ba.slice. Jul 7 05:53:37.764964 systemd[1]: Created slice kubepods-burstable-pod5a73e5245586e7a4cd002dfc3b26a796.slice - libcontainer container kubepods-burstable-pod5a73e5245586e7a4cd002dfc3b26a796.slice. Jul 7 05:53:37.774668 systemd[1]: Created slice kubepods-burstable-podc5c7ec56a9a30361cb80260ae9064fab.slice - libcontainer container kubepods-burstable-podc5c7ec56a9a30361cb80260ae9064fab.slice. 
Jul 7 05:53:37.783422 kubelet[2828]: E0707 05:53:37.783344 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-146?timeout=10s\": dial tcp 172.31.23.146:6443: connect: connection refused" interval="400ms" Jul 7 05:53:37.799477 kubelet[2828]: I0707 05:53:37.798632 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:37.799477 kubelet[2828]: E0707 05:53:37.799448 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.146:6443/api/v1/nodes\": dial tcp 172.31.23.146:6443: connect: connection refused" node="ip-172-31-23-146" Jul 7 05:53:37.876898 kubelet[2828]: I0707 05:53:37.876834 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:37.876898 kubelet[2828]: I0707 05:53:37.876893 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:37.877041 kubelet[2828]: I0707 05:53:37.876932 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" 
Jul 7 05:53:37.877041 kubelet[2828]: I0707 05:53:37.876972 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: \"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:37.877041 kubelet[2828]: I0707 05:53:37.877008 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:37.877203 kubelet[2828]: I0707 05:53:37.877050 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:37.877203 kubelet[2828]: I0707 05:53:37.877089 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a73e5245586e7a4cd002dfc3b26a796-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-146\" (UID: \"5a73e5245586e7a4cd002dfc3b26a796\") " pod="kube-system/kube-scheduler-ip-172-31-23-146" Jul 7 05:53:37.877203 kubelet[2828]: I0707 05:53:37.877127 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-ca-certs\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: 
\"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:37.877203 kubelet[2828]: I0707 05:53:37.877160 2828 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: \"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:38.002176 kubelet[2828]: I0707 05:53:38.002046 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:38.002617 kubelet[2828]: E0707 05:53:38.002566 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.146:6443/api/v1/nodes\": dial tcp 172.31.23.146:6443: connect: connection refused" node="ip-172-31-23-146" Jul 7 05:53:38.061535 containerd[2025]: time="2025-07-07T05:53:38.061410548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-146,Uid:b6e413716f66db15fa2ab515c1d1b9ba,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:38.072095 containerd[2025]: time="2025-07-07T05:53:38.072006008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-146,Uid:5a73e5245586e7a4cd002dfc3b26a796,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:38.080974 containerd[2025]: time="2025-07-07T05:53:38.080818160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-146,Uid:c5c7ec56a9a30361cb80260ae9064fab,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:38.184467 kubelet[2828]: E0707 05:53:38.184384 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-146?timeout=10s\": dial tcp 172.31.23.146:6443: connect: connection refused" interval="800ms" Jul 7 05:53:38.353232 
kubelet[2828]: W0707 05:53:38.352945 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-146&limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:38.353232 kubelet[2828]: E0707 05:53:38.353045 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-146&limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:38.406111 kubelet[2828]: I0707 05:53:38.405599 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:38.406111 kubelet[2828]: E0707 05:53:38.406063 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.146:6443/api/v1/nodes\": dial tcp 172.31.23.146:6443: connect: connection refused" node="ip-172-31-23-146" Jul 7 05:53:38.571646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293347477.mount: Deactivated successfully. 
Jul 7 05:53:38.589835 containerd[2025]: time="2025-07-07T05:53:38.589747054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:38.592026 containerd[2025]: time="2025-07-07T05:53:38.591954082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:38.594200 containerd[2025]: time="2025-07-07T05:53:38.594144958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 05:53:38.596200 containerd[2025]: time="2025-07-07T05:53:38.596140342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:38.598929 containerd[2025]: time="2025-07-07T05:53:38.597944674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:38.600962 containerd[2025]: time="2025-07-07T05:53:38.600675358Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:38.602396 containerd[2025]: time="2025-07-07T05:53:38.602259922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:38.613851 containerd[2025]: time="2025-07-07T05:53:38.611949418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:38.617736 
containerd[2025]: time="2025-07-07T05:53:38.617438602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.301986ms" Jul 7 05:53:38.621589 containerd[2025]: time="2025-07-07T05:53:38.621513802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.989806ms" Jul 7 05:53:38.635150 kubelet[2828]: W0707 05:53:38.635009 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:38.635150 kubelet[2828]: E0707 05:53:38.635106 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:38.657843 containerd[2025]: time="2025-07-07T05:53:38.657766631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.841563ms" Jul 7 05:53:38.719769 kubelet[2828]: E0707 
05:53:38.719597 2828 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-146.184fe250fd504b15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-146,UID:ip-172-31-23-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-146,},FirstTimestamp:2025-07-07 05:53:37.545571093 +0000 UTC m=+2.116972703,LastTimestamp:2025-07-07 05:53:37.545571093 +0000 UTC m=+2.116972703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-146,}" Jul 7 05:53:38.724038 kubelet[2828]: W0707 05:53:38.723447 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:38.724038 kubelet[2828]: E0707 05:53:38.723524 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:38.807113 containerd[2025]: time="2025-07-07T05:53:38.806908151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:38.808513 containerd[2025]: time="2025-07-07T05:53:38.807442043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:38.808513 containerd[2025]: time="2025-07-07T05:53:38.807481379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.808513 containerd[2025]: time="2025-07-07T05:53:38.807631163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.817919 containerd[2025]: time="2025-07-07T05:53:38.817381415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:38.817919 containerd[2025]: time="2025-07-07T05:53:38.817482923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:38.817919 containerd[2025]: time="2025-07-07T05:53:38.817521311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.818935 containerd[2025]: time="2025-07-07T05:53:38.818395751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:38.818935 containerd[2025]: time="2025-07-07T05:53:38.818481251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:38.819389 containerd[2025]: time="2025-07-07T05:53:38.818175671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.820231 containerd[2025]: time="2025-07-07T05:53:38.819694667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.822598 containerd[2025]: time="2025-07-07T05:53:38.820792631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:38.861686 systemd[1]: Started cri-containerd-b22288d3f292b46ab046012e55c353d0393f883524679e9024d22cffae3e9a35.scope - libcontainer container b22288d3f292b46ab046012e55c353d0393f883524679e9024d22cffae3e9a35. Jul 7 05:53:38.881634 systemd[1]: Started cri-containerd-ac373a0b23610c6aafae1b88e16f0e90c68a6c3f609b0249063412552b414fd4.scope - libcontainer container ac373a0b23610c6aafae1b88e16f0e90c68a6c3f609b0249063412552b414fd4. Jul 7 05:53:38.893612 systemd[1]: Started cri-containerd-c64d66bb295c8c95a14813546f039c35e35d0f47c1c09bcbf229ac64546b31b4.scope - libcontainer container c64d66bb295c8c95a14813546f039c35e35d0f47c1c09bcbf229ac64546b31b4. Jul 7 05:53:38.931243 kubelet[2828]: W0707 05:53:38.931194 2828 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.146:6443: connect: connection refused Jul 7 05:53:38.932367 kubelet[2828]: E0707 05:53:38.932308 2828 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.146:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:38.974285 containerd[2025]: time="2025-07-07T05:53:38.974211720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-146,Uid:c5c7ec56a9a30361cb80260ae9064fab,Namespace:kube-system,Attempt:0,} returns sandbox id \"b22288d3f292b46ab046012e55c353d0393f883524679e9024d22cffae3e9a35\"" Jul 7 05:53:38.985408 containerd[2025]: 
time="2025-07-07T05:53:38.985263012Z" level=info msg="CreateContainer within sandbox \"b22288d3f292b46ab046012e55c353d0393f883524679e9024d22cffae3e9a35\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:53:38.986089 kubelet[2828]: E0707 05:53:38.985962 2828 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-146?timeout=10s\": dial tcp 172.31.23.146:6443: connect: connection refused" interval="1.6s" Jul 7 05:53:39.017840 containerd[2025]: time="2025-07-07T05:53:39.017595092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-146,Uid:b6e413716f66db15fa2ab515c1d1b9ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c64d66bb295c8c95a14813546f039c35e35d0f47c1c09bcbf229ac64546b31b4\"" Jul 7 05:53:39.027199 containerd[2025]: time="2025-07-07T05:53:39.027116649Z" level=info msg="CreateContainer within sandbox \"c64d66bb295c8c95a14813546f039c35e35d0f47c1c09bcbf229ac64546b31b4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:53:39.027719 containerd[2025]: time="2025-07-07T05:53:39.027177273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-146,Uid:5a73e5245586e7a4cd002dfc3b26a796,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac373a0b23610c6aafae1b88e16f0e90c68a6c3f609b0249063412552b414fd4\"" Jul 7 05:53:39.028728 containerd[2025]: time="2025-07-07T05:53:39.028648389Z" level=info msg="CreateContainer within sandbox \"b22288d3f292b46ab046012e55c353d0393f883524679e9024d22cffae3e9a35\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"843501275f806b05f874533fd11374d3dea73b745ae5b17a41d04fccbcd057e3\"" Jul 7 05:53:39.031123 containerd[2025]: time="2025-07-07T05:53:39.030988413Z" level=info msg="StartContainer for 
\"843501275f806b05f874533fd11374d3dea73b745ae5b17a41d04fccbcd057e3\"" Jul 7 05:53:39.043818 containerd[2025]: time="2025-07-07T05:53:39.043552629Z" level=info msg="CreateContainer within sandbox \"ac373a0b23610c6aafae1b88e16f0e90c68a6c3f609b0249063412552b414fd4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:53:39.069445 containerd[2025]: time="2025-07-07T05:53:39.069372621Z" level=info msg="CreateContainer within sandbox \"c64d66bb295c8c95a14813546f039c35e35d0f47c1c09bcbf229ac64546b31b4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b62886434fb7a546d660155a53ce4ec3e0424e943e79839925d80d1966f3bad\"" Jul 7 05:53:39.070906 containerd[2025]: time="2025-07-07T05:53:39.070860081Z" level=info msg="StartContainer for \"5b62886434fb7a546d660155a53ce4ec3e0424e943e79839925d80d1966f3bad\"" Jul 7 05:53:39.093701 containerd[2025]: time="2025-07-07T05:53:39.093240069Z" level=info msg="CreateContainer within sandbox \"ac373a0b23610c6aafae1b88e16f0e90c68a6c3f609b0249063412552b414fd4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e98a1d6608541b6a808bb2ba1ec6f88de2ae1e3940e852dc22eb92efd7c0cdf\"" Jul 7 05:53:39.093932 containerd[2025]: time="2025-07-07T05:53:39.093855945Z" level=info msg="StartContainer for \"5e98a1d6608541b6a808bb2ba1ec6f88de2ae1e3940e852dc22eb92efd7c0cdf\"" Jul 7 05:53:39.096633 systemd[1]: Started cri-containerd-843501275f806b05f874533fd11374d3dea73b745ae5b17a41d04fccbcd057e3.scope - libcontainer container 843501275f806b05f874533fd11374d3dea73b745ae5b17a41d04fccbcd057e3. Jul 7 05:53:39.167614 systemd[1]: Started cri-containerd-5b62886434fb7a546d660155a53ce4ec3e0424e943e79839925d80d1966f3bad.scope - libcontainer container 5b62886434fb7a546d660155a53ce4ec3e0424e943e79839925d80d1966f3bad. 
Jul 7 05:53:39.181175 systemd[1]: Started cri-containerd-5e98a1d6608541b6a808bb2ba1ec6f88de2ae1e3940e852dc22eb92efd7c0cdf.scope - libcontainer container 5e98a1d6608541b6a808bb2ba1ec6f88de2ae1e3940e852dc22eb92efd7c0cdf. Jul 7 05:53:39.210361 kubelet[2828]: I0707 05:53:39.209611 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:39.210361 kubelet[2828]: E0707 05:53:39.210103 2828 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.23.146:6443/api/v1/nodes\": dial tcp 172.31.23.146:6443: connect: connection refused" node="ip-172-31-23-146" Jul 7 05:53:39.234103 containerd[2025]: time="2025-07-07T05:53:39.233900710Z" level=info msg="StartContainer for \"843501275f806b05f874533fd11374d3dea73b745ae5b17a41d04fccbcd057e3\" returns successfully" Jul 7 05:53:39.302973 containerd[2025]: time="2025-07-07T05:53:39.302848258Z" level=info msg="StartContainer for \"5b62886434fb7a546d660155a53ce4ec3e0424e943e79839925d80d1966f3bad\" returns successfully" Jul 7 05:53:39.332714 containerd[2025]: time="2025-07-07T05:53:39.332121694Z" level=info msg="StartContainer for \"5e98a1d6608541b6a808bb2ba1ec6f88de2ae1e3940e852dc22eb92efd7c0cdf\" returns successfully" Jul 7 05:53:40.813635 kubelet[2828]: I0707 05:53:40.813567 2828 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:42.806802 kubelet[2828]: E0707 05:53:42.806737 2828 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-146\" not found" node="ip-172-31-23-146" Jul 7 05:53:42.969345 kubelet[2828]: I0707 05:53:42.968313 2828 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-146" Jul 7 05:53:42.969345 kubelet[2828]: E0707 05:53:42.968366 2828 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-146\": node \"ip-172-31-23-146\" not found" Jul 7 05:53:43.550689 
kubelet[2828]: I0707 05:53:43.550349 2828 apiserver.go:52] "Watching apiserver" Jul 7 05:53:43.577607 kubelet[2828]: I0707 05:53:43.577546 2828 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:44.897778 systemd[1]: Reloading requested from client PID 3105 ('systemctl') (unit session-7.scope)... Jul 7 05:53:44.897811 systemd[1]: Reloading... Jul 7 05:53:45.089509 zram_generator::config[3148]: No configuration found. Jul 7 05:53:45.333736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:45.536318 systemd[1]: Reloading finished in 637 ms. Jul 7 05:53:45.614174 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:45.629681 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:53:45.630102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:45.630182 systemd[1]: kubelet.service: Consumed 2.814s CPU time, 127.2M memory peak, 0B memory swap peak. Jul 7 05:53:45.637966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:45.963938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:45.982961 (kubelet)[3205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:46.079405 kubelet[3205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:46.079405 kubelet[3205]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 7 05:53:46.079405 kubelet[3205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:46.079405 kubelet[3205]: I0707 05:53:46.079064 3205 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:46.091635 kubelet[3205]: I0707 05:53:46.091584 3205 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:46.093361 kubelet[3205]: I0707 05:53:46.091841 3205 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:46.093361 kubelet[3205]: I0707 05:53:46.092417 3205 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:46.095966 kubelet[3205]: I0707 05:53:46.095924 3205 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 05:53:46.100852 kubelet[3205]: I0707 05:53:46.100812 3205 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:46.110863 kubelet[3205]: E0707 05:53:46.110804 3205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:46.111074 kubelet[3205]: I0707 05:53:46.111049 3205 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:46.120326 kubelet[3205]: I0707 05:53:46.120245 3205 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:53:46.120722 sudo[3219]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 05:53:46.121437 sudo[3219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 05:53:46.124059 kubelet[3205]: I0707 05:53:46.123912 3205 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:46.124881 kubelet[3205]: I0707 05:53:46.124822 3205 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:46.126557 kubelet[3205]: I0707 05:53:46.125395 3205 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-146","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManager
ReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 05:53:46.126557 kubelet[3205]: I0707 05:53:46.125890 3205 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:46.126557 kubelet[3205]: I0707 05:53:46.125912 3205 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:46.126557 kubelet[3205]: I0707 05:53:46.125989 3205 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:46.126557 kubelet[3205]: I0707 05:53:46.126177 3205 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:53:46.126967 kubelet[3205]: I0707 05:53:46.126201 3205 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:46.126967 kubelet[3205]: I0707 05:53:46.126238 3205 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:46.126967 kubelet[3205]: I0707 05:53:46.126266 3205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:46.134315 kubelet[3205]: I0707 05:53:46.133568 3205 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:46.134607 kubelet[3205]: I0707 05:53:46.134556 3205 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:53:46.137443 kubelet[3205]: I0707 05:53:46.137405 3205 server.go:1274] "Started kubelet" Jul 7 05:53:46.144583 kubelet[3205]: I0707 05:53:46.144543 3205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:46.155328 kubelet[3205]: I0707 05:53:46.154526 3205 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:46.155328 kubelet[3205]: E0707 05:53:46.154853 3205 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ip-172-31-23-146\" not found" Jul 7 05:53:46.155328 kubelet[3205]: I0707 05:53:46.155280 3205 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:46.155918 kubelet[3205]: I0707 05:53:46.155897 3205 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:46.158369 kubelet[3205]: I0707 05:53:46.158104 3205 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:46.159999 kubelet[3205]: I0707 05:53:46.159967 3205 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:46.162039 kubelet[3205]: I0707 05:53:46.158188 3205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:46.162837 kubelet[3205]: I0707 05:53:46.162807 3205 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:46.163223 kubelet[3205]: I0707 05:53:46.163198 3205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:46.185380 kubelet[3205]: I0707 05:53:46.182280 3205 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:46.185380 kubelet[3205]: I0707 05:53:46.184535 3205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:46.189729 kubelet[3205]: I0707 05:53:46.189669 3205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:53:46.193099 kubelet[3205]: I0707 05:53:46.193061 3205 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:46.194736 kubelet[3205]: I0707 05:53:46.194696 3205 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:53:46.194907 kubelet[3205]: I0707 05:53:46.194888 3205 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:46.195038 kubelet[3205]: I0707 05:53:46.195018 3205 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:46.195643 kubelet[3205]: E0707 05:53:46.195180 3205 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:46.218880 kubelet[3205]: E0707 05:53:46.218758 3205 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:46.258751 kubelet[3205]: E0707 05:53:46.258708 3205 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-23-146\" not found" Jul 7 05:53:46.295507 kubelet[3205]: E0707 05:53:46.295243 3205 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367433 3205 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367463 3205 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367500 3205 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367735 3205 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367755 3205 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 05:53:46.367880 kubelet[3205]: I0707 05:53:46.367791 3205 policy_none.go:49] "None policy: Start" Jul 7 05:53:46.370927 kubelet[3205]: I0707 05:53:46.369487 3205 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:46.370927 kubelet[3205]: I0707 05:53:46.369529 3205 state_mem.go:35] "Initializing new 
in-memory state store" Jul 7 05:53:46.370927 kubelet[3205]: I0707 05:53:46.369775 3205 state_mem.go:75] "Updated machine memory state" Jul 7 05:53:46.378675 kubelet[3205]: I0707 05:53:46.378642 3205 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:46.380392 kubelet[3205]: I0707 05:53:46.380348 3205 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:46.383391 kubelet[3205]: I0707 05:53:46.382174 3205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:46.383921 kubelet[3205]: I0707 05:53:46.383897 3205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:46.508255 kubelet[3205]: I0707 05:53:46.507610 3205 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-23-146" Jul 7 05:53:46.518501 kubelet[3205]: E0707 05:53:46.518376 3205 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-146\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.523750 kubelet[3205]: I0707 05:53:46.523565 3205 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-23-146" Jul 7 05:53:46.523750 kubelet[3205]: I0707 05:53:46.523703 3205 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-23-146" Jul 7 05:53:46.560214 kubelet[3205]: I0707 05:53:46.559961 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a73e5245586e7a4cd002dfc3b26a796-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-146\" (UID: \"5a73e5245586e7a4cd002dfc3b26a796\") " pod="kube-system/kube-scheduler-ip-172-31-23-146" Jul 7 05:53:46.560214 kubelet[3205]: I0707 05:53:46.560050 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-ca-certs\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: \"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:46.560214 kubelet[3205]: I0707 05:53:46.560092 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: \"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:46.560214 kubelet[3205]: I0707 05:53:46.560154 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.561026 kubelet[3205]: I0707 05:53:46.560715 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.561026 kubelet[3205]: I0707 05:53:46.560806 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.561026 kubelet[3205]: I0707 05:53:46.560858 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.561026 kubelet[3205]: I0707 05:53:46.560921 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e413716f66db15fa2ab515c1d1b9ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-146\" (UID: \"b6e413716f66db15fa2ab515c1d1b9ba\") " pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:46.561026 kubelet[3205]: I0707 05:53:46.560989 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5c7ec56a9a30361cb80260ae9064fab-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-146\" (UID: \"c5c7ec56a9a30361cb80260ae9064fab\") " pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:47.066099 sudo[3219]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:47.129217 kubelet[3205]: I0707 05:53:47.129141 3205 apiserver.go:52] "Watching apiserver" Jul 7 05:53:47.155905 kubelet[3205]: I0707 05:53:47.155846 3205 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:47.323579 kubelet[3205]: I0707 05:53:47.321097 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-146" podStartSLOduration=3.321075966 podStartE2EDuration="3.321075966s" podCreationTimestamp="2025-07-07 05:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:47.305627658 +0000 UTC m=+1.313201516" 
watchObservedRunningTime="2025-07-07 05:53:47.321075966 +0000 UTC m=+1.328649800" Jul 7 05:53:47.345286 kubelet[3205]: E0707 05:53:47.345240 3205 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-146\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-146" Jul 7 05:53:47.346427 kubelet[3205]: E0707 05:53:47.346127 3205 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-146\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-146" Jul 7 05:53:47.360447 kubelet[3205]: I0707 05:53:47.359911 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-146" podStartSLOduration=1.359890662 podStartE2EDuration="1.359890662s" podCreationTimestamp="2025-07-07 05:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:47.321543894 +0000 UTC m=+1.329117740" watchObservedRunningTime="2025-07-07 05:53:47.359890662 +0000 UTC m=+1.367464508" Jul 7 05:53:47.377282 kubelet[3205]: I0707 05:53:47.377194 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-146" podStartSLOduration=1.377175498 podStartE2EDuration="1.377175498s" podCreationTimestamp="2025-07-07 05:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:47.360212694 +0000 UTC m=+1.367786540" watchObservedRunningTime="2025-07-07 05:53:47.377175498 +0000 UTC m=+1.384749332" Jul 7 05:53:49.718233 sudo[2333]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:49.742611 sshd[2330]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:49.748050 systemd[1]: sshd@6-172.31.23.146:22-139.178.89.65:57176.service: Deactivated successfully. 
Jul 7 05:53:49.752614 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 05:53:49.753636 systemd[1]: session-7.scope: Consumed 10.738s CPU time, 149.3M memory peak, 0B memory swap peak. Jul 7 05:53:49.756004 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:53:49.758390 systemd-logind[1991]: Removed session 7. Jul 7 05:53:50.449338 update_engine[1992]: I20250707 05:53:50.448410 1992 update_attempter.cc:509] Updating boot flags... Jul 7 05:53:50.533515 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3287) Jul 7 05:53:51.572239 kubelet[3205]: I0707 05:53:51.572184 3205 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:53:51.573965 containerd[2025]: time="2025-07-07T05:53:51.573780959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 05:53:51.575896 kubelet[3205]: I0707 05:53:51.574605 3205 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 05:53:52.441974 systemd[1]: Created slice kubepods-besteffort-podbe700ab2_53f6_4c49_8d36_3a14b53b06cb.slice - libcontainer container kubepods-besteffort-podbe700ab2_53f6_4c49_8d36_3a14b53b06cb.slice. Jul 7 05:53:52.473894 systemd[1]: Created slice kubepods-burstable-pod3f1ca9e4_0a50_4c2d_badd_8d1794fe651b.slice - libcontainer container kubepods-burstable-pod3f1ca9e4_0a50_4c2d_badd_8d1794fe651b.slice. 
Jul 7 05:53:52.502652 kubelet[3205]: I0707 05:53:52.502606 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be700ab2-53f6-4c49-8d36-3a14b53b06cb-xtables-lock\") pod \"kube-proxy-5lvds\" (UID: \"be700ab2-53f6-4c49-8d36-3a14b53b06cb\") " pod="kube-system/kube-proxy-5lvds"
Jul 7 05:53:52.503017 kubelet[3205]: I0707 05:53:52.502881 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-etc-cni-netd\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503017 kubelet[3205]: I0707 05:53:52.502957 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-config-path\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503324 kubelet[3205]: I0707 05:53:52.503169 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-lib-modules\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503324 kubelet[3205]: I0707 05:53:52.503217 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-xtables-lock\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503559 kubelet[3205]: I0707 05:53:52.503281 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-run\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503814 kubelet[3205]: I0707 05:53:52.503532 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hubble-tls\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.503814 kubelet[3205]: I0707 05:53:52.503742 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bblqw\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-kube-api-access-bblqw\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504029 kubelet[3205]: I0707 05:53:52.503786 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be700ab2-53f6-4c49-8d36-3a14b53b06cb-lib-modules\") pod \"kube-proxy-5lvds\" (UID: \"be700ab2-53f6-4c49-8d36-3a14b53b06cb\") " pod="kube-system/kube-proxy-5lvds"
Jul 7 05:53:52.504196 kubelet[3205]: I0707 05:53:52.504001 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hostproc\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504196 kubelet[3205]: I0707 05:53:52.504162 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cni-path\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504616 kubelet[3205]: I0707 05:53:52.504399 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-bpf-maps\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504616 kubelet[3205]: I0707 05:53:52.504467 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-clustermesh-secrets\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504616 kubelet[3205]: I0707 05:53:52.504505 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-net\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.504989 kubelet[3205]: I0707 05:53:52.504587 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be700ab2-53f6-4c49-8d36-3a14b53b06cb-kube-proxy\") pod \"kube-proxy-5lvds\" (UID: \"be700ab2-53f6-4c49-8d36-3a14b53b06cb\") " pod="kube-system/kube-proxy-5lvds"
Jul 7 05:53:52.504989 kubelet[3205]: I0707 05:53:52.504908 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-cgroup\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.505314 kubelet[3205]: I0707 05:53:52.504960 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw499\" (UniqueName: \"kubernetes.io/projected/be700ab2-53f6-4c49-8d36-3a14b53b06cb-kube-api-access-lw499\") pod \"kube-proxy-5lvds\" (UID: \"be700ab2-53f6-4c49-8d36-3a14b53b06cb\") " pod="kube-system/kube-proxy-5lvds"
Jul 7 05:53:52.505314 kubelet[3205]: I0707 05:53:52.505195 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-kernel\") pod \"cilium-sm989\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") " pod="kube-system/cilium-sm989"
Jul 7 05:53:52.738924 systemd[1]: Created slice kubepods-besteffort-podee78419f_1815_4b0b_a2d8_93430e4fff94.slice - libcontainer container kubepods-besteffort-podee78419f_1815_4b0b_a2d8_93430e4fff94.slice.
Jul 7 05:53:52.758319 containerd[2025]: time="2025-07-07T05:53:52.756660877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lvds,Uid:be700ab2-53f6-4c49-8d36-3a14b53b06cb,Namespace:kube-system,Attempt:0,}"
Jul 7 05:53:52.785670 containerd[2025]: time="2025-07-07T05:53:52.785160133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sm989,Uid:3f1ca9e4-0a50-4c2d-badd-8d1794fe651b,Namespace:kube-system,Attempt:0,}"
Jul 7 05:53:52.807128 kubelet[3205]: I0707 05:53:52.806950 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jl7q\" (UniqueName: \"kubernetes.io/projected/ee78419f-1815-4b0b-a2d8-93430e4fff94-kube-api-access-4jl7q\") pod \"cilium-operator-5d85765b45-kwwcv\" (UID: \"ee78419f-1815-4b0b-a2d8-93430e4fff94\") " pod="kube-system/cilium-operator-5d85765b45-kwwcv"
Jul 7 05:53:52.807128 kubelet[3205]: I0707 05:53:52.807040 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee78419f-1815-4b0b-a2d8-93430e4fff94-cilium-config-path\") pod \"cilium-operator-5d85765b45-kwwcv\" (UID: \"ee78419f-1815-4b0b-a2d8-93430e4fff94\") " pod="kube-system/cilium-operator-5d85765b45-kwwcv"
Jul 7 05:53:52.819571 containerd[2025]: time="2025-07-07T05:53:52.819336781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:53:52.819571 containerd[2025]: time="2025-07-07T05:53:52.819462277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:53:52.819848 containerd[2025]: time="2025-07-07T05:53:52.819508165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:52.821174 containerd[2025]: time="2025-07-07T05:53:52.820894921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:52.838437 containerd[2025]: time="2025-07-07T05:53:52.837532681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:53:52.838437 containerd[2025]: time="2025-07-07T05:53:52.837638797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:53:52.838437 containerd[2025]: time="2025-07-07T05:53:52.837720373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:52.838437 containerd[2025]: time="2025-07-07T05:53:52.837908221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:52.863635 systemd[1]: Started cri-containerd-b155cce35e7ba1147a8e56582b92712067183734654b701d01447eee2f4768c6.scope - libcontainer container b155cce35e7ba1147a8e56582b92712067183734654b701d01447eee2f4768c6.
Jul 7 05:53:52.886623 systemd[1]: Started cri-containerd-7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba.scope - libcontainer container 7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba.
Jul 7 05:53:52.949006 containerd[2025]: time="2025-07-07T05:53:52.948797294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lvds,Uid:be700ab2-53f6-4c49-8d36-3a14b53b06cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b155cce35e7ba1147a8e56582b92712067183734654b701d01447eee2f4768c6\""
Jul 7 05:53:52.961471 containerd[2025]: time="2025-07-07T05:53:52.961327790Z" level=info msg="CreateContainer within sandbox \"b155cce35e7ba1147a8e56582b92712067183734654b701d01447eee2f4768c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 05:53:52.966690 containerd[2025]: time="2025-07-07T05:53:52.966560162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sm989,Uid:3f1ca9e4-0a50-4c2d-badd-8d1794fe651b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\""
Jul 7 05:53:52.971883 containerd[2025]: time="2025-07-07T05:53:52.971721242Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 7 05:53:53.002984 containerd[2025]: time="2025-07-07T05:53:53.002660002Z" level=info msg="CreateContainer within sandbox \"b155cce35e7ba1147a8e56582b92712067183734654b701d01447eee2f4768c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e291af012b5bd9bd8926e54cb7d544e527bbfc3f56b4d78cea2df44f26094e8c\""
Jul 7 05:53:53.004603 containerd[2025]: time="2025-07-07T05:53:53.004511410Z" level=info msg="StartContainer for \"e291af012b5bd9bd8926e54cb7d544e527bbfc3f56b4d78cea2df44f26094e8c\""
Jul 7 05:53:53.051366 containerd[2025]: time="2025-07-07T05:53:53.051254974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kwwcv,Uid:ee78419f-1815-4b0b-a2d8-93430e4fff94,Namespace:kube-system,Attempt:0,}"
Jul 7 05:53:53.054633 systemd[1]: Started cri-containerd-e291af012b5bd9bd8926e54cb7d544e527bbfc3f56b4d78cea2df44f26094e8c.scope - libcontainer container e291af012b5bd9bd8926e54cb7d544e527bbfc3f56b4d78cea2df44f26094e8c.
Jul 7 05:53:53.119055 containerd[2025]: time="2025-07-07T05:53:53.111089410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:53:53.119055 containerd[2025]: time="2025-07-07T05:53:53.111185770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:53:53.119055 containerd[2025]: time="2025-07-07T05:53:53.112710502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:53.119055 containerd[2025]: time="2025-07-07T05:53:53.112865758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:53:53.130682 containerd[2025]: time="2025-07-07T05:53:53.130610459Z" level=info msg="StartContainer for \"e291af012b5bd9bd8926e54cb7d544e527bbfc3f56b4d78cea2df44f26094e8c\" returns successfully"
Jul 7 05:53:53.170025 systemd[1]: Started cri-containerd-f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44.scope - libcontainer container f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44.
Jul 7 05:53:53.287515 containerd[2025]: time="2025-07-07T05:53:53.287140439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-kwwcv,Uid:ee78419f-1815-4b0b-a2d8-93430e4fff94,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\""
Jul 7 05:53:54.384339 kubelet[3205]: I0707 05:53:54.383335 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5lvds" podStartSLOduration=2.383284033 podStartE2EDuration="2.383284033s" podCreationTimestamp="2025-07-07 05:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:53.36267156 +0000 UTC m=+7.370245394" watchObservedRunningTime="2025-07-07 05:53:54.383284033 +0000 UTC m=+8.390857855"
Jul 7 05:53:57.952170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006966616.mount: Deactivated successfully.
Jul 7 05:54:00.485348 containerd[2025]: time="2025-07-07T05:54:00.485078167Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:00.489250 containerd[2025]: time="2025-07-07T05:54:00.489020311Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 7 05:54:00.491869 containerd[2025]: time="2025-07-07T05:54:00.491782855Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:00.495270 containerd[2025]: time="2025-07-07T05:54:00.495085639Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.523280709s"
Jul 7 05:54:00.495270 containerd[2025]: time="2025-07-07T05:54:00.495146527Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 7 05:54:00.498823 containerd[2025]: time="2025-07-07T05:54:00.498751807Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 7 05:54:00.500824 containerd[2025]: time="2025-07-07T05:54:00.500740255Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 05:54:00.530699 containerd[2025]: time="2025-07-07T05:54:00.530564395Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\""
Jul 7 05:54:00.531753 containerd[2025]: time="2025-07-07T05:54:00.531528019Z" level=info msg="StartContainer for \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\""
Jul 7 05:54:00.587631 systemd[1]: Started cri-containerd-80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414.scope - libcontainer container 80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414.
Jul 7 05:54:00.638825 containerd[2025]: time="2025-07-07T05:54:00.638751464Z" level=info msg="StartContainer for \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\" returns successfully"
Jul 7 05:54:00.662592 systemd[1]: cri-containerd-80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414.scope: Deactivated successfully.
Jul 7 05:54:01.520266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414-rootfs.mount: Deactivated successfully.
Jul 7 05:54:01.595199 containerd[2025]: time="2025-07-07T05:54:01.595111029Z" level=info msg="shim disconnected" id=80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414 namespace=k8s.io
Jul 7 05:54:01.595838 containerd[2025]: time="2025-07-07T05:54:01.595786773Z" level=warning msg="cleaning up after shim disconnected" id=80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414 namespace=k8s.io
Jul 7 05:54:01.595909 containerd[2025]: time="2025-07-07T05:54:01.595839393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:54:02.271027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097929156.mount: Deactivated successfully.
Jul 7 05:54:02.395913 containerd[2025]: time="2025-07-07T05:54:02.395754765Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 05:54:02.438977 containerd[2025]: time="2025-07-07T05:54:02.438906753Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\""
Jul 7 05:54:02.441439 containerd[2025]: time="2025-07-07T05:54:02.441111729Z" level=info msg="StartContainer for \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\""
Jul 7 05:54:02.504638 systemd[1]: Started cri-containerd-c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35.scope - libcontainer container c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35.
Jul 7 05:54:02.595608 containerd[2025]: time="2025-07-07T05:54:02.593003854Z" level=info msg="StartContainer for \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\" returns successfully"
Jul 7 05:54:02.615926 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:54:02.618473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:02.618598 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:54:02.630845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:54:02.631623 systemd[1]: cri-containerd-c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35.scope: Deactivated successfully.
Jul 7 05:54:02.678641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:54:02.706143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35-rootfs.mount: Deactivated successfully.
Jul 7 05:54:02.750042 containerd[2025]: time="2025-07-07T05:54:02.749161330Z" level=info msg="shim disconnected" id=c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35 namespace=k8s.io
Jul 7 05:54:02.750042 containerd[2025]: time="2025-07-07T05:54:02.749765374Z" level=warning msg="cleaning up after shim disconnected" id=c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35 namespace=k8s.io
Jul 7 05:54:02.750042 containerd[2025]: time="2025-07-07T05:54:02.749789074Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:54:03.074364 containerd[2025]: time="2025-07-07T05:54:03.074237696Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:03.077075 containerd[2025]: time="2025-07-07T05:54:03.076982960Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 7 05:54:03.079619 containerd[2025]: time="2025-07-07T05:54:03.079537052Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:54:03.083544 containerd[2025]: time="2025-07-07T05:54:03.083481404Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.584657633s"
Jul 7 05:54:03.083544 containerd[2025]: time="2025-07-07T05:54:03.083549072Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 7 05:54:03.088333 containerd[2025]: time="2025-07-07T05:54:03.088203992Z" level=info msg="CreateContainer within sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 7 05:54:03.119026 containerd[2025]: time="2025-07-07T05:54:03.118932020Z" level=info msg="CreateContainer within sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\""
Jul 7 05:54:03.120006 containerd[2025]: time="2025-07-07T05:54:03.119940104Z" level=info msg="StartContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\""
Jul 7 05:54:03.182657 systemd[1]: Started cri-containerd-6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d.scope - libcontainer container 6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d.
Jul 7 05:54:03.229019 containerd[2025]: time="2025-07-07T05:54:03.228937149Z" level=info msg="StartContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" returns successfully"
Jul 7 05:54:03.401842 containerd[2025]: time="2025-07-07T05:54:03.401605378Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 05:54:03.443341 containerd[2025]: time="2025-07-07T05:54:03.443242390Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\""
Jul 7 05:54:03.447326 containerd[2025]: time="2025-07-07T05:54:03.447063538Z" level=info msg="StartContainer for \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\""
Jul 7 05:54:03.504092 kubelet[3205]: I0707 05:54:03.502217 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-kwwcv" podStartSLOduration=1.709104853 podStartE2EDuration="11.502195222s" podCreationTimestamp="2025-07-07 05:53:52 +0000 UTC" firstStartedPulling="2025-07-07 05:53:53.291431555 +0000 UTC m=+7.299005377" lastFinishedPulling="2025-07-07 05:54:03.084521936 +0000 UTC m=+17.092095746" observedRunningTime="2025-07-07 05:54:03.424244362 +0000 UTC m=+17.431818220" watchObservedRunningTime="2025-07-07 05:54:03.502195222 +0000 UTC m=+17.509769044"
Jul 7 05:54:03.579193 systemd[1]: Started cri-containerd-6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df.scope - libcontainer container 6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df.
Jul 7 05:54:03.708934 containerd[2025]: time="2025-07-07T05:54:03.708754667Z" level=info msg="StartContainer for \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\" returns successfully"
Jul 7 05:54:03.732995 systemd[1]: cri-containerd-6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df.scope: Deactivated successfully.
Jul 7 05:54:03.806459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df-rootfs.mount: Deactivated successfully.
Jul 7 05:54:03.868518 containerd[2025]: time="2025-07-07T05:54:03.868441920Z" level=info msg="shim disconnected" id=6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df namespace=k8s.io
Jul 7 05:54:03.869497 containerd[2025]: time="2025-07-07T05:54:03.869237004Z" level=warning msg="cleaning up after shim disconnected" id=6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df namespace=k8s.io
Jul 7 05:54:03.869497 containerd[2025]: time="2025-07-07T05:54:03.869284392Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:54:04.414447 containerd[2025]: time="2025-07-07T05:54:04.414379583Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 05:54:04.446830 containerd[2025]: time="2025-07-07T05:54:04.446752235Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\""
Jul 7 05:54:04.449167 containerd[2025]: time="2025-07-07T05:54:04.449094635Z" level=info msg="StartContainer for \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\""
Jul 7 05:54:04.550640 systemd[1]: Started cri-containerd-d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb.scope - libcontainer container d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb.
Jul 7 05:54:04.652088 systemd[1]: cri-containerd-d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb.scope: Deactivated successfully.
Jul 7 05:54:04.659363 containerd[2025]: time="2025-07-07T05:54:04.658819104Z" level=info msg="StartContainer for \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\" returns successfully"
Jul 7 05:54:04.731727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb-rootfs.mount: Deactivated successfully.
Jul 7 05:54:04.743077 containerd[2025]: time="2025-07-07T05:54:04.742985304Z" level=info msg="shim disconnected" id=d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb namespace=k8s.io
Jul 7 05:54:04.743077 containerd[2025]: time="2025-07-07T05:54:04.743064180Z" level=warning msg="cleaning up after shim disconnected" id=d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb namespace=k8s.io
Jul 7 05:54:04.746967 containerd[2025]: time="2025-07-07T05:54:04.743086764Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:54:05.420620 containerd[2025]: time="2025-07-07T05:54:05.420567360Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 05:54:05.455756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689994894.mount: Deactivated successfully.
Jul 7 05:54:05.467272 containerd[2025]: time="2025-07-07T05:54:05.467195976Z" level=info msg="CreateContainer within sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\""
Jul 7 05:54:05.467946 containerd[2025]: time="2025-07-07T05:54:05.467883960Z" level=info msg="StartContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\""
Jul 7 05:54:05.559444 systemd[1]: Started cri-containerd-1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b.scope - libcontainer container 1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b.
Jul 7 05:54:05.693887 containerd[2025]: time="2025-07-07T05:54:05.693712969Z" level=info msg="StartContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" returns successfully"
Jul 7 05:54:05.888126 kubelet[3205]: I0707 05:54:05.887186 3205 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 7 05:54:05.968417 systemd[1]: Created slice kubepods-burstable-podda5d09db_e8b5_42b4_b9f6_64bfd92c3ee5.slice - libcontainer container kubepods-burstable-podda5d09db_e8b5_42b4_b9f6_64bfd92c3ee5.slice.
Jul 7 05:54:05.983355 systemd[1]: Created slice kubepods-burstable-pod42155f39_8239_4b09_bff9_d6c8e6b69931.slice - libcontainer container kubepods-burstable-pod42155f39_8239_4b09_bff9_d6c8e6b69931.slice.
Jul 7 05:54:06.098819 kubelet[3205]: I0707 05:54:06.098766 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42155f39-8239-4b09-bff9-d6c8e6b69931-config-volume\") pod \"coredns-7c65d6cfc9-v6wx8\" (UID: \"42155f39-8239-4b09-bff9-d6c8e6b69931\") " pod="kube-system/coredns-7c65d6cfc9-v6wx8"
Jul 7 05:54:06.099267 kubelet[3205]: I0707 05:54:06.099087 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblq8\" (UniqueName: \"kubernetes.io/projected/42155f39-8239-4b09-bff9-d6c8e6b69931-kube-api-access-vblq8\") pod \"coredns-7c65d6cfc9-v6wx8\" (UID: \"42155f39-8239-4b09-bff9-d6c8e6b69931\") " pod="kube-system/coredns-7c65d6cfc9-v6wx8"
Jul 7 05:54:06.099267 kubelet[3205]: I0707 05:54:06.099210 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5-config-volume\") pod \"coredns-7c65d6cfc9-kchj2\" (UID: \"da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5\") " pod="kube-system/coredns-7c65d6cfc9-kchj2"
Jul 7 05:54:06.099604 kubelet[3205]: I0707 05:54:06.099492 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzf6m\" (UniqueName: \"kubernetes.io/projected/da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5-kube-api-access-lzf6m\") pod \"coredns-7c65d6cfc9-kchj2\" (UID: \"da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5\") " pod="kube-system/coredns-7c65d6cfc9-kchj2"
Jul 7 05:54:06.290812 containerd[2025]: time="2025-07-07T05:54:06.290655288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kchj2,Uid:da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5,Namespace:kube-system,Attempt:0,}"
Jul 7 05:54:06.296348 containerd[2025]: time="2025-07-07T05:54:06.295477584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v6wx8,Uid:42155f39-8239-4b09-bff9-d6c8e6b69931,Namespace:kube-system,Attempt:0,}"
Jul 7 05:54:08.764236 (udev-worker)[4128]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:54:08.767234 (udev-worker)[4093]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:54:08.771695 systemd-networkd[1927]: cilium_host: Link UP
Jul 7 05:54:08.772021 systemd-networkd[1927]: cilium_net: Link UP
Jul 7 05:54:08.773822 systemd-networkd[1927]: cilium_net: Gained carrier
Jul 7 05:54:08.774390 systemd-networkd[1927]: cilium_host: Gained carrier
Jul 7 05:54:08.838679 systemd-networkd[1927]: cilium_net: Gained IPv6LL
Jul 7 05:54:08.942503 systemd-networkd[1927]: cilium_host: Gained IPv6LL
Jul 7 05:54:08.960927 systemd-networkd[1927]: cilium_vxlan: Link UP
Jul 7 05:54:08.960942 systemd-networkd[1927]: cilium_vxlan: Gained carrier
Jul 7 05:54:09.537344 kernel: NET: Registered PF_ALG protocol family
Jul 7 05:54:10.486726 systemd-networkd[1927]: cilium_vxlan: Gained IPv6LL
Jul 7 05:54:10.867744 systemd-networkd[1927]: lxc_health: Link UP
Jul 7 05:54:10.892668 systemd-networkd[1927]: lxc_health: Gained carrier
Jul 7 05:54:10.898546 (udev-worker)[4142]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:54:11.429761 systemd-networkd[1927]: lxc29f3ea59c224: Link UP
Jul 7 05:54:11.438129 (udev-worker)[4464]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:54:11.453148 kernel: eth0: renamed from tmpdaa08
Jul 7 05:54:11.458628 systemd-networkd[1927]: lxc29f3ea59c224: Gained carrier
Jul 7 05:54:11.463592 systemd-networkd[1927]: lxcab95af06120b: Link UP
Jul 7 05:54:11.478350 kernel: eth0: renamed from tmp9cde7
Jul 7 05:54:11.484733 systemd-networkd[1927]: lxcab95af06120b: Gained carrier
Jul 7 05:54:12.086622 systemd-networkd[1927]: lxc_health: Gained IPv6LL
Jul 7 05:54:12.829726 kubelet[3205]: I0707 05:54:12.829602 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sm989" podStartSLOduration=13.303027827 podStartE2EDuration="20.829573388s" podCreationTimestamp="2025-07-07 05:53:52 +0000 UTC" firstStartedPulling="2025-07-07 05:53:52.97085549 +0000 UTC m=+6.978429312" lastFinishedPulling="2025-07-07 05:54:00.497401051 +0000 UTC m=+14.504974873" observedRunningTime="2025-07-07 05:54:06.456519397 +0000 UTC m=+20.464093267" watchObservedRunningTime="2025-07-07 05:54:12.829573388 +0000 UTC m=+26.837147198"
Jul 7 05:54:13.238943 systemd-networkd[1927]: lxcab95af06120b: Gained IPv6LL
Jul 7 05:54:13.366982 systemd-networkd[1927]: lxc29f3ea59c224: Gained IPv6LL
Jul 7 05:54:16.248227 ntpd[1985]: Listen normally on 7 cilium_host 192.168.0.14:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 7 cilium_host 192.168.0.14:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 8 cilium_net [fe80::c11:c6ff:feda:7f4f%4]:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 9 cilium_host [fe80::80f3:cff:fe0e:d394%5]:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 10 cilium_vxlan [fe80::4cfd:6dff:fee8:c6e9%6]:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 11 lxc_health [fe80::80:a1ff:fe17:1d17%8]:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 12 lxcab95af06120b [fe80::6824:1bff:fee2:ce45%10]:123
Jul 7 05:54:16.249602 ntpd[1985]: 7 Jul 05:54:16 ntpd[1985]: Listen normally on 13 lxc29f3ea59c224 [fe80::584b:83ff:fe30:b91a%12]:123
Jul 7 05:54:16.248434 ntpd[1985]: Listen normally on 8 cilium_net [fe80::c11:c6ff:feda:7f4f%4]:123
Jul 7 05:54:16.248534 ntpd[1985]: Listen normally on 9 cilium_host [fe80::80f3:cff:fe0e:d394%5]:123
Jul 7 05:54:16.248612 ntpd[1985]: Listen normally on 10 cilium_vxlan [fe80::4cfd:6dff:fee8:c6e9%6]:123
Jul 7 05:54:16.248687 ntpd[1985]: Listen normally on 11 lxc_health [fe80::80:a1ff:fe17:1d17%8]:123
Jul 7 05:54:16.248764 ntpd[1985]: Listen normally on 12 lxcab95af06120b [fe80::6824:1bff:fee2:ce45%10]:123
Jul 7 05:54:16.248842 ntpd[1985]: Listen normally on 13 lxc29f3ea59c224 [fe80::584b:83ff:fe30:b91a%12]:123
Jul 7 05:54:20.342434 containerd[2025]: time="2025-07-07T05:54:20.342254366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:20.344878 containerd[2025]: time="2025-07-07T05:54:20.342458114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:20.344878 containerd[2025]: time="2025-07-07T05:54:20.342554546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:20.344878 containerd[2025]: time="2025-07-07T05:54:20.343049546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:20.408963 systemd[1]: Started cri-containerd-9cde73d178866bbbc899cf57a612a99b683d463a31b5c0f304411fb3be8262c2.scope - libcontainer container 9cde73d178866bbbc899cf57a612a99b683d463a31b5c0f304411fb3be8262c2.
Jul 7 05:54:20.429620 containerd[2025]: time="2025-07-07T05:54:20.429427142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:54:20.431495 containerd[2025]: time="2025-07-07T05:54:20.429555662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:54:20.432076 containerd[2025]: time="2025-07-07T05:54:20.431809634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:20.432553 containerd[2025]: time="2025-07-07T05:54:20.432436394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:54:20.498461 systemd[1]: Started cri-containerd-daa087b0954d2052371941343d3498c9964dd5037dd288a364673e0915e99692.scope - libcontainer container daa087b0954d2052371941343d3498c9964dd5037dd288a364673e0915e99692.
Jul 7 05:54:20.555602 containerd[2025]: time="2025-07-07T05:54:20.555479475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-v6wx8,Uid:42155f39-8239-4b09-bff9-d6c8e6b69931,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cde73d178866bbbc899cf57a612a99b683d463a31b5c0f304411fb3be8262c2\""
Jul 7 05:54:20.568071 containerd[2025]: time="2025-07-07T05:54:20.567986811Z" level=info msg="CreateContainer within sandbox \"9cde73d178866bbbc899cf57a612a99b683d463a31b5c0f304411fb3be8262c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 05:54:20.607895 containerd[2025]: time="2025-07-07T05:54:20.607648683Z" level=info msg="CreateContainer within sandbox \"9cde73d178866bbbc899cf57a612a99b683d463a31b5c0f304411fb3be8262c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"686f9d81d020e2b5bcc4ca03ab530c8f5f234a04efff3aecdb32dcd13b0455c3\""
Jul 7 05:54:20.611160 containerd[2025]: time="2025-07-07T05:54:20.610246995Z" level=info msg="StartContainer for \"686f9d81d020e2b5bcc4ca03ab530c8f5f234a04efff3aecdb32dcd13b0455c3\""
Jul 7 05:54:20.645119 containerd[2025]: time="2025-07-07T05:54:20.644951319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kchj2,Uid:da5d09db-e8b5-42b4-b9f6-64bfd92c3ee5,Namespace:kube-system,Attempt:0,} returns sandbox id \"daa087b0954d2052371941343d3498c9964dd5037dd288a364673e0915e99692\""
Jul 7 05:54:20.661744 containerd[2025]: time="2025-07-07T05:54:20.660405555Z" level=info msg="CreateContainer within sandbox \"daa087b0954d2052371941343d3498c9964dd5037dd288a364673e0915e99692\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 7 05:54:20.705459 containerd[2025]: time="2025-07-07T05:54:20.705214264Z" level=info msg="CreateContainer within sandbox \"daa087b0954d2052371941343d3498c9964dd5037dd288a364673e0915e99692\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d21edab1b8450c69f1c4b2ac490ead099ee5afc3c8e8f170b5832b384d0c64f7\""
Jul 7 05:54:20.709194 containerd[2025]: time="2025-07-07T05:54:20.708540472Z" level=info msg="StartContainer for \"d21edab1b8450c69f1c4b2ac490ead099ee5afc3c8e8f170b5832b384d0c64f7\""
Jul 7 05:54:20.712636 systemd[1]: Started cri-containerd-686f9d81d020e2b5bcc4ca03ab530c8f5f234a04efff3aecdb32dcd13b0455c3.scope - libcontainer container 686f9d81d020e2b5bcc4ca03ab530c8f5f234a04efff3aecdb32dcd13b0455c3.
Jul 7 05:54:20.803714 systemd[1]: Started cri-containerd-d21edab1b8450c69f1c4b2ac490ead099ee5afc3c8e8f170b5832b384d0c64f7.scope - libcontainer container d21edab1b8450c69f1c4b2ac490ead099ee5afc3c8e8f170b5832b384d0c64f7.
Jul 7 05:54:20.833356 containerd[2025]: time="2025-07-07T05:54:20.832266076Z" level=info msg="StartContainer for \"686f9d81d020e2b5bcc4ca03ab530c8f5f234a04efff3aecdb32dcd13b0455c3\" returns successfully"
Jul 7 05:54:20.911619 containerd[2025]: time="2025-07-07T05:54:20.911560097Z" level=info msg="StartContainer for \"d21edab1b8450c69f1c4b2ac490ead099ee5afc3c8e8f170b5832b384d0c64f7\" returns successfully"
Jul 7 05:54:21.528413 kubelet[3205]: I0707 05:54:21.527247 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kchj2" podStartSLOduration=29.527225716 podStartE2EDuration="29.527225716s" podCreationTimestamp="2025-07-07 05:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:21.52572052 +0000 UTC m=+35.533294390" watchObservedRunningTime="2025-07-07 05:54:21.527225716 +0000 UTC m=+35.534799538"
Jul 7 05:54:21.586375 kubelet[3205]: I0707 05:54:21.586243 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-v6wx8" podStartSLOduration=29.586218664 podStartE2EDuration="29.586218664s" podCreationTimestamp="2025-07-07 05:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:21.581525668 +0000 UTC m=+35.589099514" watchObservedRunningTime="2025-07-07 05:54:21.586218664 +0000 UTC m=+35.593792486"
Jul 7 05:54:32.227852 systemd[1]: Started sshd@7-172.31.23.146:22-139.178.89.65:35324.service - OpenSSH per-connection server daemon (139.178.89.65:35324).
Jul 7 05:54:32.417145 sshd[4672]: Accepted publickey for core from 139.178.89.65 port 35324 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:32.420147 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:32.430027 systemd-logind[1991]: New session 8 of user core.
Jul 7 05:54:32.435623 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 05:54:32.708456 sshd[4672]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:32.715653 systemd[1]: sshd@7-172.31.23.146:22-139.178.89.65:35324.service: Deactivated successfully.
Jul 7 05:54:32.719633 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 05:54:32.721715 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit.
Jul 7 05:54:32.724387 systemd-logind[1991]: Removed session 8.
Jul 7 05:54:37.748866 systemd[1]: Started sshd@8-172.31.23.146:22-139.178.89.65:35338.service - OpenSSH per-connection server daemon (139.178.89.65:35338).
Jul 7 05:54:37.931723 sshd[4685]: Accepted publickey for core from 139.178.89.65 port 35338 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:37.934510 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:37.944275 systemd-logind[1991]: New session 9 of user core.
Jul 7 05:54:37.949604 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 05:54:38.195986 sshd[4685]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:38.206387 systemd[1]: sshd@8-172.31.23.146:22-139.178.89.65:35338.service: Deactivated successfully.
Jul 7 05:54:38.210495 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 05:54:38.213460 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit.
Jul 7 05:54:38.215805 systemd-logind[1991]: Removed session 9.
Jul 7 05:54:43.237975 systemd[1]: Started sshd@9-172.31.23.146:22-139.178.89.65:48080.service - OpenSSH per-connection server daemon (139.178.89.65:48080).
Jul 7 05:54:43.428620 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 48080 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:43.431685 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:43.441010 systemd-logind[1991]: New session 10 of user core.
Jul 7 05:54:43.449708 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 05:54:43.705846 sshd[4699]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:43.711630 systemd[1]: sshd@9-172.31.23.146:22-139.178.89.65:48080.service: Deactivated successfully.
Jul 7 05:54:43.716271 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 05:54:43.720211 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit.
Jul 7 05:54:43.723255 systemd-logind[1991]: Removed session 10.
Jul 7 05:54:48.745860 systemd[1]: Started sshd@10-172.31.23.146:22-139.178.89.65:48084.service - OpenSSH per-connection server daemon (139.178.89.65:48084).
Jul 7 05:54:48.926023 sshd[4715]: Accepted publickey for core from 139.178.89.65 port 48084 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:48.928775 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:48.938577 systemd-logind[1991]: New session 11 of user core.
Jul 7 05:54:48.943618 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 05:54:49.190199 sshd[4715]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:49.196947 systemd[1]: sshd@10-172.31.23.146:22-139.178.89.65:48084.service: Deactivated successfully.
Jul 7 05:54:49.201956 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 05:54:49.207606 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit.
Jul 7 05:54:49.209459 systemd-logind[1991]: Removed session 11.
Jul 7 05:54:54.239963 systemd[1]: Started sshd@11-172.31.23.146:22-139.178.89.65:35496.service - OpenSSH per-connection server daemon (139.178.89.65:35496).
Jul 7 05:54:54.418918 sshd[4732]: Accepted publickey for core from 139.178.89.65 port 35496 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:54.421844 sshd[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:54.431068 systemd-logind[1991]: New session 12 of user core.
Jul 7 05:54:54.438595 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 05:54:54.683905 sshd[4732]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:54.690330 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit.
Jul 7 05:54:54.691960 systemd[1]: sshd@11-172.31.23.146:22-139.178.89.65:35496.service: Deactivated successfully.
Jul 7 05:54:54.695934 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 05:54:54.698720 systemd-logind[1991]: Removed session 12.
Jul 7 05:54:54.729074 systemd[1]: Started sshd@12-172.31.23.146:22-139.178.89.65:35504.service - OpenSSH per-connection server daemon (139.178.89.65:35504).
Jul 7 05:54:54.898134 sshd[4746]: Accepted publickey for core from 139.178.89.65 port 35504 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:54.901025 sshd[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:54.909478 systemd-logind[1991]: New session 13 of user core.
Jul 7 05:54:54.914744 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 05:54:55.234055 sshd[4746]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:55.241961 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit.
Jul 7 05:54:55.243606 systemd[1]: sshd@12-172.31.23.146:22-139.178.89.65:35504.service: Deactivated successfully.
Jul 7 05:54:55.249692 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 05:54:55.273057 systemd-logind[1991]: Removed session 13.
Jul 7 05:54:55.283895 systemd[1]: Started sshd@13-172.31.23.146:22-139.178.89.65:35514.service - OpenSSH per-connection server daemon (139.178.89.65:35514).
Jul 7 05:54:55.471882 sshd[4757]: Accepted publickey for core from 139.178.89.65 port 35514 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:54:55.474729 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:54:55.482107 systemd-logind[1991]: New session 14 of user core.
Jul 7 05:54:55.490592 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 05:54:55.734629 sshd[4757]: pam_unix(sshd:session): session closed for user core
Jul 7 05:54:55.747449 systemd[1]: sshd@13-172.31.23.146:22-139.178.89.65:35514.service: Deactivated successfully.
Jul 7 05:54:55.755933 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 05:54:55.758009 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit.
Jul 7 05:54:55.760982 systemd-logind[1991]: Removed session 14.
Jul 7 05:55:00.780408 systemd[1]: Started sshd@14-172.31.23.146:22-139.178.89.65:38860.service - OpenSSH per-connection server daemon (139.178.89.65:38860).
Jul 7 05:55:00.961476 sshd[4771]: Accepted publickey for core from 139.178.89.65 port 38860 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:00.964618 sshd[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:00.974574 systemd-logind[1991]: New session 15 of user core.
Jul 7 05:55:00.982668 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 05:55:01.238962 sshd[4771]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:01.246373 systemd[1]: sshd@14-172.31.23.146:22-139.178.89.65:38860.service: Deactivated successfully.
Jul 7 05:55:01.252659 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 05:55:01.255420 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit.
Jul 7 05:55:01.258326 systemd-logind[1991]: Removed session 15.
Jul 7 05:55:06.282835 systemd[1]: Started sshd@15-172.31.23.146:22-139.178.89.65:38868.service - OpenSSH per-connection server daemon (139.178.89.65:38868).
Jul 7 05:55:06.456656 sshd[4785]: Accepted publickey for core from 139.178.89.65 port 38868 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:06.459395 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:06.466594 systemd-logind[1991]: New session 16 of user core.
Jul 7 05:55:06.474617 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 05:55:06.715810 sshd[4785]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:06.720284 systemd[1]: sshd@15-172.31.23.146:22-139.178.89.65:38868.service: Deactivated successfully.
Jul 7 05:55:06.723173 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 05:55:06.727070 systemd-logind[1991]: Session 16 logged out. Waiting for processes to exit.
Jul 7 05:55:06.729058 systemd-logind[1991]: Removed session 16.
Jul 7 05:55:11.756842 systemd[1]: Started sshd@16-172.31.23.146:22-139.178.89.65:54090.service - OpenSSH per-connection server daemon (139.178.89.65:54090).
Jul 7 05:55:11.938169 sshd[4799]: Accepted publickey for core from 139.178.89.65 port 54090 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:11.940978 sshd[4799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:11.950107 systemd-logind[1991]: New session 17 of user core.
Jul 7 05:55:11.964609 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 05:55:12.210331 sshd[4799]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:12.216965 systemd[1]: sshd@16-172.31.23.146:22-139.178.89.65:54090.service: Deactivated successfully.
Jul 7 05:55:12.220871 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 05:55:12.222701 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit.
Jul 7 05:55:12.224946 systemd-logind[1991]: Removed session 17.
Jul 7 05:55:12.251828 systemd[1]: Started sshd@17-172.31.23.146:22-139.178.89.65:54100.service - OpenSSH per-connection server daemon (139.178.89.65:54100).
Jul 7 05:55:12.434863 sshd[4812]: Accepted publickey for core from 139.178.89.65 port 54100 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:12.437622 sshd[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:12.446330 systemd-logind[1991]: New session 18 of user core.
Jul 7 05:55:12.457598 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 05:55:12.783439 sshd[4812]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:12.789474 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit.
Jul 7 05:55:12.789904 systemd[1]: sshd@17-172.31.23.146:22-139.178.89.65:54100.service: Deactivated successfully.
Jul 7 05:55:12.793011 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 05:55:12.797594 systemd-logind[1991]: Removed session 18.
Jul 7 05:55:12.824858 systemd[1]: Started sshd@18-172.31.23.146:22-139.178.89.65:54114.service - OpenSSH per-connection server daemon (139.178.89.65:54114).
Jul 7 05:55:13.003397 sshd[4822]: Accepted publickey for core from 139.178.89.65 port 54114 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:13.006052 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:13.014774 systemd-logind[1991]: New session 19 of user core.
Jul 7 05:55:13.022875 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 05:55:15.613713 sshd[4822]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:15.626906 systemd[1]: sshd@18-172.31.23.146:22-139.178.89.65:54114.service: Deactivated successfully.
Jul 7 05:55:15.632418 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 05:55:15.635872 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit.
Jul 7 05:55:15.666494 systemd[1]: Started sshd@19-172.31.23.146:22-139.178.89.65:54122.service - OpenSSH per-connection server daemon (139.178.89.65:54122).
Jul 7 05:55:15.668107 systemd-logind[1991]: Removed session 19.
Jul 7 05:55:15.854064 sshd[4840]: Accepted publickey for core from 139.178.89.65 port 54122 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:15.856744 sshd[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:15.865552 systemd-logind[1991]: New session 20 of user core.
Jul 7 05:55:15.872618 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 05:55:16.371162 sshd[4840]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:16.377822 systemd[1]: sshd@19-172.31.23.146:22-139.178.89.65:54122.service: Deactivated successfully.
Jul 7 05:55:16.385037 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 05:55:16.387266 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit.
Jul 7 05:55:16.389336 systemd-logind[1991]: Removed session 20.
Jul 7 05:55:16.412819 systemd[1]: Started sshd@20-172.31.23.146:22-139.178.89.65:54130.service - OpenSSH per-connection server daemon (139.178.89.65:54130).
Jul 7 05:55:16.592336 sshd[4851]: Accepted publickey for core from 139.178.89.65 port 54130 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:16.594814 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:16.603868 systemd-logind[1991]: New session 21 of user core.
Jul 7 05:55:16.612564 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 05:55:16.870923 sshd[4851]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:16.877171 systemd[1]: sshd@20-172.31.23.146:22-139.178.89.65:54130.service: Deactivated successfully.
Jul 7 05:55:16.882077 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 05:55:16.884640 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit.
Jul 7 05:55:16.887437 systemd-logind[1991]: Removed session 21.
Jul 7 05:55:21.915835 systemd[1]: Started sshd@21-172.31.23.146:22-139.178.89.65:55398.service - OpenSSH per-connection server daemon (139.178.89.65:55398).
Jul 7 05:55:22.088613 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 55398 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:22.091757 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:22.100690 systemd-logind[1991]: New session 22 of user core.
Jul 7 05:55:22.107594 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 05:55:22.376480 sshd[4864]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:22.383045 systemd[1]: sshd@21-172.31.23.146:22-139.178.89.65:55398.service: Deactivated successfully.
Jul 7 05:55:22.388810 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 05:55:22.391234 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit.
Jul 7 05:55:22.393025 systemd-logind[1991]: Removed session 22.
Jul 7 05:55:27.421866 systemd[1]: Started sshd@22-172.31.23.146:22-139.178.89.65:55412.service - OpenSSH per-connection server daemon (139.178.89.65:55412).
Jul 7 05:55:27.586499 sshd[4882]: Accepted publickey for core from 139.178.89.65 port 55412 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:27.589249 sshd[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:27.596561 systemd-logind[1991]: New session 23 of user core.
Jul 7 05:55:27.608668 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 05:55:27.840836 sshd[4882]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:27.847523 systemd[1]: sshd@22-172.31.23.146:22-139.178.89.65:55412.service: Deactivated successfully.
Jul 7 05:55:27.850708 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 05:55:27.852479 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit.
Jul 7 05:55:27.854998 systemd-logind[1991]: Removed session 23.
Jul 7 05:55:32.880868 systemd[1]: Started sshd@23-172.31.23.146:22-139.178.89.65:55196.service - OpenSSH per-connection server daemon (139.178.89.65:55196).
Jul 7 05:55:33.063391 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 55196 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:33.066110 sshd[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:33.074978 systemd-logind[1991]: New session 24 of user core.
Jul 7 05:55:33.079565 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 05:55:33.330550 sshd[4895]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:33.338255 systemd[1]: sshd@23-172.31.23.146:22-139.178.89.65:55196.service: Deactivated successfully.
Jul 7 05:55:33.341778 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 05:55:33.344223 systemd-logind[1991]: Session 24 logged out. Waiting for processes to exit.
Jul 7 05:55:33.346762 systemd-logind[1991]: Removed session 24.
Jul 7 05:55:38.370839 systemd[1]: Started sshd@24-172.31.23.146:22-139.178.89.65:55204.service - OpenSSH per-connection server daemon (139.178.89.65:55204).
Jul 7 05:55:38.552539 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 55204 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:38.555277 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:38.563748 systemd-logind[1991]: New session 25 of user core.
Jul 7 05:55:38.570595 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 05:55:38.804477 sshd[4907]: pam_unix(sshd:session): session closed for user core
Jul 7 05:55:38.811732 systemd[1]: sshd@24-172.31.23.146:22-139.178.89.65:55204.service: Deactivated successfully.
Jul 7 05:55:38.818282 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 05:55:38.823668 systemd-logind[1991]: Session 25 logged out. Waiting for processes to exit.
Jul 7 05:55:38.838580 systemd-logind[1991]: Removed session 25.
Jul 7 05:55:38.847865 systemd[1]: Started sshd@25-172.31.23.146:22-139.178.89.65:55210.service - OpenSSH per-connection server daemon (139.178.89.65:55210).
Jul 7 05:55:39.019959 sshd[4920]: Accepted publickey for core from 139.178.89.65 port 55210 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU
Jul 7 05:55:39.023762 sshd[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:55:39.032569 systemd-logind[1991]: New session 26 of user core.
Jul 7 05:55:39.042552 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 05:55:41.932265 containerd[2025]: time="2025-07-07T05:55:41.931999691Z" level=info msg="StopContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" with timeout 30 (s)"
Jul 7 05:55:41.935413 containerd[2025]: time="2025-07-07T05:55:41.934647203Z" level=info msg="Stop container \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" with signal terminated"
Jul 7 05:55:41.965511 systemd[1]: cri-containerd-6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d.scope: Deactivated successfully.
Jul 7 05:55:41.983387 containerd[2025]: time="2025-07-07T05:55:41.982901915Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 7 05:55:42.001844 containerd[2025]: time="2025-07-07T05:55:42.001775923Z" level=info msg="StopContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" with timeout 2 (s)"
Jul 7 05:55:42.002873 containerd[2025]: time="2025-07-07T05:55:42.002773615Z" level=info msg="Stop container \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" with signal terminated"
Jul 7 05:55:42.019081 systemd-networkd[1927]: lxc_health: Link DOWN
Jul 7 05:55:42.019097 systemd-networkd[1927]: lxc_health: Lost carrier
Jul 7 05:55:42.030633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d-rootfs.mount: Deactivated successfully.
Jul 7 05:55:42.058116 systemd[1]: cri-containerd-1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b.scope: Deactivated successfully.
Jul 7 05:55:42.058605 systemd[1]: cri-containerd-1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b.scope: Consumed 15.180s CPU time.
Jul 7 05:55:42.066501 containerd[2025]: time="2025-07-07T05:55:42.066133280Z" level=info msg="shim disconnected" id=6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d namespace=k8s.io
Jul 7 05:55:42.066501 containerd[2025]: time="2025-07-07T05:55:42.066202352Z" level=warning msg="cleaning up after shim disconnected" id=6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d namespace=k8s.io
Jul 7 05:55:42.066501 containerd[2025]: time="2025-07-07T05:55:42.066240644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:55:42.110859 containerd[2025]: time="2025-07-07T05:55:42.110771336Z" level=info msg="StopContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" returns successfully"
Jul 7 05:55:42.112397 containerd[2025]: time="2025-07-07T05:55:42.112203380Z" level=info msg="StopPodSandbox for \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\""
Jul 7 05:55:42.112553 containerd[2025]: time="2025-07-07T05:55:42.112460984Z" level=info msg="Container to stop \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.116780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b-rootfs.mount: Deactivated successfully.
Jul 7 05:55:42.117010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44-shm.mount: Deactivated successfully.
Jul 7 05:55:42.133533 containerd[2025]: time="2025-07-07T05:55:42.133384340Z" level=info msg="shim disconnected" id=1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b namespace=k8s.io
Jul 7 05:55:42.133533 containerd[2025]: time="2025-07-07T05:55:42.133490516Z" level=warning msg="cleaning up after shim disconnected" id=1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b namespace=k8s.io
Jul 7 05:55:42.134129 containerd[2025]: time="2025-07-07T05:55:42.133874216Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:55:42.138286 systemd[1]: cri-containerd-f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44.scope: Deactivated successfully.
Jul 7 05:55:42.170333 containerd[2025]: time="2025-07-07T05:55:42.170063792Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:55:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 05:55:42.178222 containerd[2025]: time="2025-07-07T05:55:42.178041884Z" level=info msg="StopContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" returns successfully"
Jul 7 05:55:42.179155 containerd[2025]: time="2025-07-07T05:55:42.179108276Z" level=info msg="StopPodSandbox for \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\""
Jul 7 05:55:42.179282 containerd[2025]: time="2025-07-07T05:55:42.179173460Z" level=info msg="Container to stop \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.179282 containerd[2025]: time="2025-07-07T05:55:42.179201228Z" level=info msg="Container to stop \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.179282 containerd[2025]: time="2025-07-07T05:55:42.179229956Z" level=info msg="Container to stop \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.179282 containerd[2025]: time="2025-07-07T05:55:42.179257220Z" level=info msg="Container to stop \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.179938 containerd[2025]: time="2025-07-07T05:55:42.179279144Z" level=info msg="Container to stop \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 05:55:42.186431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba-shm.mount: Deactivated successfully.
Jul 7 05:55:42.203890 systemd[1]: cri-containerd-7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba.scope: Deactivated successfully.
Jul 7 05:55:42.207378 containerd[2025]: time="2025-07-07T05:55:42.207119660Z" level=info msg="shim disconnected" id=f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44 namespace=k8s.io
Jul 7 05:55:42.208487 containerd[2025]: time="2025-07-07T05:55:42.208287332Z" level=warning msg="cleaning up after shim disconnected" id=f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44 namespace=k8s.io
Jul 7 05:55:42.209132 containerd[2025]: time="2025-07-07T05:55:42.208474064Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:55:42.243141 containerd[2025]: time="2025-07-07T05:55:42.243090369Z" level=info msg="TearDown network for sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" successfully"
Jul 7 05:55:42.243414 containerd[2025]: time="2025-07-07T05:55:42.243379245Z" level=info msg="StopPodSandbox for \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" returns successfully"
Jul 7 05:55:42.257583 containerd[2025]: time="2025-07-07T05:55:42.257488497Z" level=info msg="shim disconnected" id=7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba namespace=k8s.io
Jul 7 05:55:42.257583 containerd[2025]: time="2025-07-07T05:55:42.257570925Z" level=warning msg="cleaning up after shim disconnected" id=7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba namespace=k8s.io
Jul 7 05:55:42.259511 containerd[2025]: time="2025-07-07T05:55:42.257593041Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:55:42.292468 containerd[2025]: time="2025-07-07T05:55:42.292398705Z" level=info msg="TearDown network for sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" successfully"
Jul 7 05:55:42.292468 containerd[2025]: time="2025-07-07T05:55:42.292452045Z" level=info msg="StopPodSandbox for \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" returns successfully"
Jul 7 05:55:42.356354 kubelet[3205]: I0707 05:55:42.353320 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jl7q\" (UniqueName: \"kubernetes.io/projected/ee78419f-1815-4b0b-a2d8-93430e4fff94-kube-api-access-4jl7q\") pod \"ee78419f-1815-4b0b-a2d8-93430e4fff94\" (UID: \"ee78419f-1815-4b0b-a2d8-93430e4fff94\") "
Jul 7 05:55:42.356354 kubelet[3205]: I0707 05:55:42.353393 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee78419f-1815-4b0b-a2d8-93430e4fff94-cilium-config-path\") pod \"ee78419f-1815-4b0b-a2d8-93430e4fff94\" (UID: \"ee78419f-1815-4b0b-a2d8-93430e4fff94\") "
Jul 7 05:55:42.364641 kubelet[3205]: I0707 05:55:42.364580 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee78419f-1815-4b0b-a2d8-93430e4fff94-kube-api-access-4jl7q" (OuterVolumeSpecName: "kube-api-access-4jl7q") pod "ee78419f-1815-4b0b-a2d8-93430e4fff94" (UID: "ee78419f-1815-4b0b-a2d8-93430e4fff94"). InnerVolumeSpecName "kube-api-access-4jl7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 05:55:42.367493 kubelet[3205]: I0707 05:55:42.367432 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee78419f-1815-4b0b-a2d8-93430e4fff94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ee78419f-1815-4b0b-a2d8-93430e4fff94" (UID: "ee78419f-1815-4b0b-a2d8-93430e4fff94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454551 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-lib-modules\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454624 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cni-path\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454658 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-bpf-maps\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454690 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-cgroup\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454723 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hostproc\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456382 kubelet[3205]: I0707 05:55:42.454767 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-clustermesh-secrets\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.454799 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-net\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.454842 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-config-path\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.454874 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-run\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.454914 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bblqw\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-kube-api-access-bblqw\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.455629 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hostproc" (OuterVolumeSpecName: "hostproc") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.456935 kubelet[3205]: I0707 05:55:42.455726 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-kernel\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455776 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-etc-cni-netd\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455809 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-xtables-lock\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455845 3205 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hubble-tls\") pod \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\" (UID: \"3f1ca9e4-0a50-4c2d-badd-8d1794fe651b\") "
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455911 3205 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jl7q\" (UniqueName: \"kubernetes.io/projected/ee78419f-1815-4b0b-a2d8-93430e4fff94-kube-api-access-4jl7q\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455937 3205 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hostproc\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.457263 kubelet[3205]: I0707 05:55:42.455960 3205 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ee78419f-1815-4b0b-a2d8-93430e4fff94-cilium-config-path\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.463322 kubelet[3205]: I0707 05:55:42.455726 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463322 kubelet[3205]: I0707 05:55:42.455752 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cni-path" (OuterVolumeSpecName: "cni-path") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463322 kubelet[3205]: I0707 05:55:42.455775 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463322 kubelet[3205]: I0707 05:55:42.455797 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463322 kubelet[3205]: I0707 05:55:42.455820 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463740 kubelet[3205]: I0707 05:55:42.462372 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463740 kubelet[3205]: I0707 05:55:42.462546 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463740 kubelet[3205]: I0707 05:55:42.462706 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.463740 kubelet[3205]: I0707 05:55:42.462746 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 7 05:55:42.467618 kubelet[3205]: I0707 05:55:42.467540 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 05:55:42.471725 kubelet[3205]: I0707 05:55:42.471525 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 7 05:55:42.474740 kubelet[3205]: I0707 05:55:42.474479 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-kube-api-access-bblqw" (OuterVolumeSpecName: "kube-api-access-bblqw") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "kube-api-access-bblqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 7 05:55:42.475711 kubelet[3205]: I0707 05:55:42.475627 3205 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" (UID: "3f1ca9e4-0a50-4c2d-badd-8d1794fe651b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556442 3205 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-config-path\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556488 3205 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-run\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556515 3205 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-kernel\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556536 3205 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-etc-cni-netd\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556557 3205 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-xtables-lock\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556580 3205 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-hubble-tls\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556601 3205 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bblqw\" (UniqueName: \"kubernetes.io/projected/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-kube-api-access-bblqw\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.556812 kubelet[3205]: I0707 05:55:42.556622 3205 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-lib-modules\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.557365 kubelet[3205]: I0707 05:55:42.556643 3205 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cni-path\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.557365 kubelet[3205]: I0707 05:55:42.556689 3205 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-bpf-maps\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.557365 kubelet[3205]: I0707 05:55:42.556710 3205 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-cilium-cgroup\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.557365 kubelet[3205]: I0707 05:55:42.556751 3205 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-clustermesh-secrets\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.557365 kubelet[3205]: I0707 05:55:42.556776 3205 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b-host-proc-sys-net\") on node \"ip-172-31-23-146\" DevicePath \"\""
Jul 7 05:55:42.720497 kubelet[3205]: I0707 05:55:42.719926 3205 scope.go:117] "RemoveContainer" containerID="6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d"
Jul 7 05:55:42.728607 containerd[2025]: time="2025-07-07T05:55:42.728334035Z" level=info msg="RemoveContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\""
Jul 7 05:55:42.745223 containerd[2025]: time="2025-07-07T05:55:42.744970727Z" level=info msg="RemoveContainer for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" returns successfully"
Jul 7 05:55:42.747025 kubelet[3205]: I0707 05:55:42.746958 3205 scope.go:117] "RemoveContainer" containerID="6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d"
Jul 7 05:55:42.748127 systemd[1]: Removed slice kubepods-besteffort-podee78419f_1815_4b0b_a2d8_93430e4fff94.slice - libcontainer container kubepods-besteffort-podee78419f_1815_4b0b_a2d8_93430e4fff94.slice.
Jul 7 05:55:42.748829 containerd[2025]: time="2025-07-07T05:55:42.748110695Z" level=error msg="ContainerStatus for \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\": not found"
Jul 7 05:55:42.750135 kubelet[3205]: E0707 05:55:42.750057 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\": not found" containerID="6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d"
Jul 7 05:55:42.750275 kubelet[3205]: I0707 05:55:42.750127 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d"} err="failed to get container status \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6323710ef40b8e2c8118d38e4bbf19b7e5ef38b3080633536726b7225778008d\": not found"
Jul 7 05:55:42.750275 kubelet[3205]: I0707 05:55:42.750247 3205 scope.go:117] "RemoveContainer" containerID="1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b"
Jul 7 05:55:42.755845 containerd[2025]: time="2025-07-07T05:55:42.755214755Z" level=info msg="RemoveContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\""
Jul 7 05:55:42.762641 containerd[2025]: time="2025-07-07T05:55:42.762356687Z" level=info msg="RemoveContainer for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" returns successfully"
Jul 7 05:55:42.763941 kubelet[3205]: I0707 05:55:42.763419 3205 scope.go:117] "RemoveContainer" containerID="d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb"
Jul 7 05:55:42.769228 containerd[2025]: time="2025-07-07T05:55:42.769178075Z" level=info msg="RemoveContainer for \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\""
Jul 7 05:55:42.770514 systemd[1]: Removed slice kubepods-burstable-pod3f1ca9e4_0a50_4c2d_badd_8d1794fe651b.slice - libcontainer container kubepods-burstable-pod3f1ca9e4_0a50_4c2d_badd_8d1794fe651b.slice.
Jul 7 05:55:42.771069 systemd[1]: kubepods-burstable-pod3f1ca9e4_0a50_4c2d_badd_8d1794fe651b.slice: Consumed 15.343s CPU time.
Jul 7 05:55:42.777396 containerd[2025]: time="2025-07-07T05:55:42.777263951Z" level=info msg="RemoveContainer for \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\" returns successfully"
Jul 7 05:55:42.777797 kubelet[3205]: I0707 05:55:42.777615 3205 scope.go:117] "RemoveContainer" containerID="6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df"
Jul 7 05:55:42.783015 containerd[2025]: time="2025-07-07T05:55:42.782895239Z" level=info msg="RemoveContainer for \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\""
Jul 7 05:55:42.791529 containerd[2025]: time="2025-07-07T05:55:42.791373359Z" level=info msg="RemoveContainer for \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\" returns successfully"
Jul 7 05:55:42.792675 kubelet[3205]: I0707 05:55:42.792538 3205 scope.go:117] "RemoveContainer" containerID="c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35"
Jul 7 05:55:42.795692 containerd[2025]: time="2025-07-07T05:55:42.795436187Z" level=info msg="RemoveContainer for \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\""
Jul 7 05:55:42.802415 containerd[2025]: time="2025-07-07T05:55:42.802340087Z" level=info msg="RemoveContainer for \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\" returns successfully"
Jul 7 05:55:42.803188 kubelet[3205]: I0707 05:55:42.802705 3205 scope.go:117] "RemoveContainer" containerID="80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414"
Jul 7 05:55:42.804836 containerd[2025]: time="2025-07-07T05:55:42.804776987Z" level=info msg="RemoveContainer for \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\""
Jul 7 05:55:42.812110 containerd[2025]: time="2025-07-07T05:55:42.811948667Z" level=info msg="RemoveContainer for \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\" returns successfully"
Jul 7 05:55:42.816002 kubelet[3205]: I0707 05:55:42.815393 3205 scope.go:117] "RemoveContainer" containerID="1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b"
Jul 7 05:55:42.816703 containerd[2025]: time="2025-07-07T05:55:42.816544631Z" level=error msg="ContainerStatus for \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\": not found"
Jul 7 05:55:42.816901 kubelet[3205]: E0707 05:55:42.816818 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\": not found" containerID="1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b"
Jul 7 05:55:42.816901 kubelet[3205]: I0707 05:55:42.816873 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b"} err="failed to get container status \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\": rpc error: code = NotFound desc = an error occurred when try to find container \"1501e12a8f49494c30ceff24b71c7af10d47da0f03ec07fb08ec5c93d521b71b\": not found"
Jul 7 05:55:42.817730 kubelet[3205]: I0707 05:55:42.816915 3205 scope.go:117] "RemoveContainer" containerID="d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb"
Jul 7 05:55:42.817730 kubelet[3205]: E0707 05:55:42.817483 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\": not found" containerID="d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb"
Jul 7 05:55:42.817730 kubelet[3205]: I0707 05:55:42.817525 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb"} err="failed to get container status \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\": not found"
Jul 7 05:55:42.817730 kubelet[3205]: I0707 05:55:42.817556 3205 scope.go:117] "RemoveContainer" containerID="6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df"
Jul 7 05:55:42.817988 containerd[2025]: time="2025-07-07T05:55:42.817248071Z" level=error msg="ContainerStatus for \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d685a8175831ec7178f635bdb8d16b1f5ecf03d313dcb0e150a8974267de6abb\": not found"
Jul 7 05:55:42.817988 containerd[2025]: time="2025-07-07T05:55:42.817841219Z" level=error msg="ContainerStatus for \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\": not found"
Jul 7 05:55:42.818440 kubelet[3205]: E0707 05:55:42.818220 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\": not found" containerID="6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df"
Jul 7 05:55:42.818440 kubelet[3205]: I0707 05:55:42.818272 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df"} err="failed to get container status \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\": rpc error: code = NotFound desc = an error occurred when try to find container \"6951caca88089d38f6f17615c057a5af9927e9e2d5076540eb204348230389df\": not found"
Jul 7 05:55:42.818440 kubelet[3205]: I0707 05:55:42.818330 3205 scope.go:117] "RemoveContainer" containerID="c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35"
Jul 7 05:55:42.818752 containerd[2025]: time="2025-07-07T05:55:42.818698307Z" level=error msg="ContainerStatus for \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\": not found"
Jul 7 05:55:42.818944 kubelet[3205]: E0707 05:55:42.818903 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\": not found" containerID="c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35"
Jul 7 05:55:42.819024 kubelet[3205]: I0707 05:55:42.818955 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35"} err="failed to get container status \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\": rpc error: code = NotFound desc = an error occurred when try to find container \"c921854776400a4bb6aabba182189ba4e3c29a861da54972dae2e2431003da35\": not found"
Jul 7 05:55:42.819024 kubelet[3205]: I0707 05:55:42.818992 3205 scope.go:117] "RemoveContainer" containerID="80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414"
Jul 7 05:55:42.819383 containerd[2025]: time="2025-07-07T05:55:42.819273431Z" level=error msg="ContainerStatus for \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\": not found"
Jul 7 05:55:42.819681 kubelet[3205]: E0707 05:55:42.819513 3205 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\": not found" containerID="80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414"
Jul 7 05:55:42.819681 kubelet[3205]: I0707 05:55:42.819564 3205 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414"} err="failed to get container status \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\": rpc error: code = NotFound desc = an error occurred when try to find container \"80ec807481724b4d741b719f0904c4dbd3af93add8911b2ab062a3de69485414\": not found"
Jul 7 05:55:42.947080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44-rootfs.mount: Deactivated successfully.
Jul 7 05:55:42.947256 systemd[1]: var-lib-kubelet-pods-ee78419f\x2d1815\x2d4b0b\x2da2d8\x2d93430e4fff94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jl7q.mount: Deactivated successfully.
Jul 7 05:55:42.947421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba-rootfs.mount: Deactivated successfully. Jul 7 05:55:42.947562 systemd[1]: var-lib-kubelet-pods-3f1ca9e4\x2d0a50\x2d4c2d\x2dbadd\x2d8d1794fe651b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbblqw.mount: Deactivated successfully. Jul 7 05:55:42.947704 systemd[1]: var-lib-kubelet-pods-3f1ca9e4\x2d0a50\x2d4c2d\x2dbadd\x2d8d1794fe651b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 05:55:42.947838 systemd[1]: var-lib-kubelet-pods-3f1ca9e4\x2d0a50\x2d4c2d\x2dbadd\x2d8d1794fe651b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 05:55:43.860684 sshd[4920]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:43.866423 systemd-logind[1991]: Session 26 logged out. Waiting for processes to exit. Jul 7 05:55:43.867034 systemd[1]: sshd@25-172.31.23.146:22-139.178.89.65:55210.service: Deactivated successfully. Jul 7 05:55:43.871785 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 05:55:43.872228 systemd[1]: session-26.scope: Consumed 2.121s CPU time. Jul 7 05:55:43.875799 systemd-logind[1991]: Removed session 26. Jul 7 05:55:43.898842 systemd[1]: Started sshd@26-172.31.23.146:22-139.178.89.65:41812.service - OpenSSH per-connection server daemon (139.178.89.65:41812). Jul 7 05:55:44.063151 sshd[5082]: Accepted publickey for core from 139.178.89.65 port 41812 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:44.065873 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:44.074577 systemd-logind[1991]: New session 27 of user core. Jul 7 05:55:44.083657 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 05:55:44.202269 kubelet[3205]: I0707 05:55:44.201883 3205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" path="/var/lib/kubelet/pods/3f1ca9e4-0a50-4c2d-badd-8d1794fe651b/volumes" Jul 7 05:55:44.204345 kubelet[3205]: I0707 05:55:44.204095 3205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee78419f-1815-4b0b-a2d8-93430e4fff94" path="/var/lib/kubelet/pods/ee78419f-1815-4b0b-a2d8-93430e4fff94/volumes" Jul 7 05:55:44.248792 ntpd[1985]: Deleting interface #11 lxc_health, fe80::80:a1ff:fe17:1d17%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jul 7 05:55:44.250608 ntpd[1985]: 7 Jul 05:55:44 ntpd[1985]: Deleting interface #11 lxc_health, fe80::80:a1ff:fe17:1d17%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jul 7 05:55:46.193289 containerd[2025]: time="2025-07-07T05:55:46.192804384Z" level=info msg="StopPodSandbox for \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\"" Jul 7 05:55:46.193289 containerd[2025]: time="2025-07-07T05:55:46.192945492Z" level=info msg="TearDown network for sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" successfully" Jul 7 05:55:46.193289 containerd[2025]: time="2025-07-07T05:55:46.192969348Z" level=info msg="StopPodSandbox for \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" returns successfully" Jul 7 05:55:46.195499 containerd[2025]: time="2025-07-07T05:55:46.194747076Z" level=info msg="RemovePodSandbox for \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\"" Jul 7 05:55:46.195499 containerd[2025]: time="2025-07-07T05:55:46.194821032Z" level=info msg="Forcibly stopping sandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\"" Jul 7 05:55:46.195499 containerd[2025]: time="2025-07-07T05:55:46.194992488Z" level=info msg="TearDown network for sandbox 
\"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" successfully" Jul 7 05:55:46.207106 containerd[2025]: time="2025-07-07T05:55:46.207052440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:46.207415 containerd[2025]: time="2025-07-07T05:55:46.207380304Z" level=info msg="RemovePodSandbox \"7c68927c9735ae82edb221b9cbda523336d2fefa1cd3dda50ce6f5700e5655ba\" returns successfully" Jul 7 05:55:46.209596 containerd[2025]: time="2025-07-07T05:55:46.209554764Z" level=info msg="StopPodSandbox for \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\"" Jul 7 05:55:46.210398 containerd[2025]: time="2025-07-07T05:55:46.210176520Z" level=info msg="TearDown network for sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" successfully" Jul 7 05:55:46.210398 containerd[2025]: time="2025-07-07T05:55:46.210209532Z" level=info msg="StopPodSandbox for \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" returns successfully" Jul 7 05:55:46.211947 containerd[2025]: time="2025-07-07T05:55:46.211905936Z" level=info msg="RemovePodSandbox for \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\"" Jul 7 05:55:46.212340 containerd[2025]: time="2025-07-07T05:55:46.212187096Z" level=info msg="Forcibly stopping sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\"" Jul 7 05:55:46.212832 containerd[2025]: time="2025-07-07T05:55:46.212489388Z" level=info msg="TearDown network for sandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" successfully" Jul 7 05:55:46.220852 containerd[2025]: time="2025-07-07T05:55:46.220663380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:46.221484 containerd[2025]: time="2025-07-07T05:55:46.221195112Z" level=info msg="RemovePodSandbox \"f0c52f4cb6bdd76a2d77449a31dab054796695ddf681483f239b13b19ecf1e44\" returns successfully" Jul 7 05:55:46.430531 kubelet[3205]: E0707 05:55:46.430340 3205 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 05:55:46.469229 sshd[5082]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:46.483705 systemd[1]: sshd@26-172.31.23.146:22-139.178.89.65:41812.service: Deactivated successfully. Jul 7 05:55:46.490253 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 05:55:46.492445 systemd[1]: session-27.scope: Consumed 2.161s CPU time. Jul 7 05:55:46.495392 systemd-logind[1991]: Session 27 logged out. Waiting for processes to exit. Jul 7 05:55:46.522874 systemd[1]: Started sshd@27-172.31.23.146:22-139.178.89.65:41822.service - OpenSSH per-connection server daemon (139.178.89.65:41822). 
Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526337 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee78419f-1815-4b0b-a2d8-93430e4fff94" containerName="cilium-operator" Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526387 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="mount-bpf-fs" Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526405 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="cilium-agent" Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526422 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="mount-cgroup" Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526437 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="apply-sysctl-overwrites" Jul 7 05:55:46.527350 kubelet[3205]: E0707 05:55:46.526453 3205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="clean-cilium-state" Jul 7 05:55:46.527350 kubelet[3205]: I0707 05:55:46.526504 3205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee78419f-1815-4b0b-a2d8-93430e4fff94" containerName="cilium-operator" Jul 7 05:55:46.527350 kubelet[3205]: I0707 05:55:46.526520 3205 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1ca9e4-0a50-4c2d-badd-8d1794fe651b" containerName="cilium-agent" Jul 7 05:55:46.530144 systemd-logind[1991]: Removed session 27. Jul 7 05:55:46.553269 systemd[1]: Created slice kubepods-burstable-pod46a7f841_5af7_4be3_bf8f_79879be0276b.slice - libcontainer container kubepods-burstable-pod46a7f841_5af7_4be3_bf8f_79879be0276b.slice. 
Jul 7 05:55:46.685941 kubelet[3205]: I0707 05:55:46.685882 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-lib-modules\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686124 kubelet[3205]: I0707 05:55:46.685958 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46a7f841-5af7-4be3-bf8f-79879be0276b-cilium-config-path\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686124 kubelet[3205]: I0707 05:55:46.686036 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-host-proc-sys-kernel\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686124 kubelet[3205]: I0707 05:55:46.686099 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-cilium-run\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686322 kubelet[3205]: I0707 05:55:46.686151 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-cni-path\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686322 kubelet[3205]: I0707 05:55:46.686196 3205 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/46a7f841-5af7-4be3-bf8f-79879be0276b-cilium-ipsec-secrets\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686322 kubelet[3205]: I0707 05:55:46.686247 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-cilium-cgroup\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686322 kubelet[3205]: I0707 05:55:46.686284 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-etc-cni-netd\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686547 kubelet[3205]: I0707 05:55:46.686348 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-xtables-lock\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686547 kubelet[3205]: I0707 05:55:46.686382 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-host-proc-sys-net\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686547 kubelet[3205]: I0707 05:55:46.686419 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/46a7f841-5af7-4be3-bf8f-79879be0276b-hubble-tls\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686547 kubelet[3205]: I0707 05:55:46.686456 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-bpf-maps\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686547 kubelet[3205]: I0707 05:55:46.686492 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn42q\" (UniqueName: \"kubernetes.io/projected/46a7f841-5af7-4be3-bf8f-79879be0276b-kube-api-access-fn42q\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686807 kubelet[3205]: I0707 05:55:46.686550 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46a7f841-5af7-4be3-bf8f-79879be0276b-hostproc\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.686807 kubelet[3205]: I0707 05:55:46.686625 3205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46a7f841-5af7-4be3-bf8f-79879be0276b-clustermesh-secrets\") pod \"cilium-c48hc\" (UID: \"46a7f841-5af7-4be3-bf8f-79879be0276b\") " pod="kube-system/cilium-c48hc" Jul 7 05:55:46.754208 sshd[5096]: Accepted publickey for core from 139.178.89.65 port 41822 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:46.755554 sshd[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:46.765556 systemd-logind[1991]: New session 28 of 
user core. Jul 7 05:55:46.770567 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 05:55:46.866679 containerd[2025]: time="2025-07-07T05:55:46.866599299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c48hc,Uid:46a7f841-5af7-4be3-bf8f-79879be0276b,Namespace:kube-system,Attempt:0,}" Jul 7 05:55:46.896910 sshd[5096]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:46.907986 systemd[1]: sshd@27-172.31.23.146:22-139.178.89.65:41822.service: Deactivated successfully. Jul 7 05:55:46.916145 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 05:55:46.924071 systemd-logind[1991]: Session 28 logged out. Waiting for processes to exit. Jul 7 05:55:46.928879 containerd[2025]: time="2025-07-07T05:55:46.928504144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:46.929224 containerd[2025]: time="2025-07-07T05:55:46.929067856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:46.929462 containerd[2025]: time="2025-07-07T05:55:46.929188252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:46.933515 containerd[2025]: time="2025-07-07T05:55:46.931153264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:46.949251 systemd[1]: Started sshd@28-172.31.23.146:22-139.178.89.65:41830.service - OpenSSH per-connection server daemon (139.178.89.65:41830). Jul 7 05:55:46.965259 systemd-logind[1991]: Removed session 28. Jul 7 05:55:46.996692 systemd[1]: Started cri-containerd-28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b.scope - libcontainer container 28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b. 
Jul 7 05:55:47.040504 containerd[2025]: time="2025-07-07T05:55:47.039669348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c48hc,Uid:46a7f841-5af7-4be3-bf8f-79879be0276b,Namespace:kube-system,Attempt:0,} returns sandbox id \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\"" Jul 7 05:55:47.047833 containerd[2025]: time="2025-07-07T05:55:47.047544960Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 05:55:47.072821 containerd[2025]: time="2025-07-07T05:55:47.072731677Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a\"" Jul 7 05:55:47.074644 containerd[2025]: time="2025-07-07T05:55:47.074593129Z" level=info msg="StartContainer for \"0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a\"" Jul 7 05:55:47.123678 systemd[1]: Started cri-containerd-0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a.scope - libcontainer container 0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a. Jul 7 05:55:47.165753 sshd[5126]: Accepted publickey for core from 139.178.89.65 port 41830 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:47.169907 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:47.180728 systemd-logind[1991]: New session 29 of user core. Jul 7 05:55:47.186531 containerd[2025]: time="2025-07-07T05:55:47.185550733Z" level=info msg="StartContainer for \"0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a\" returns successfully" Jul 7 05:55:47.189077 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jul 7 05:55:47.204278 systemd[1]: cri-containerd-0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a.scope: Deactivated successfully. Jul 7 05:55:47.259438 containerd[2025]: time="2025-07-07T05:55:47.258939469Z" level=info msg="shim disconnected" id=0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a namespace=k8s.io Jul 7 05:55:47.262376 containerd[2025]: time="2025-07-07T05:55:47.260229505Z" level=warning msg="cleaning up after shim disconnected" id=0908dd4494f6ed43f6bdf58b52fdd33672b291644067a656b4c2d6506b2a236a namespace=k8s.io Jul 7 05:55:47.262376 containerd[2025]: time="2025-07-07T05:55:47.260286385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:47.770790 containerd[2025]: time="2025-07-07T05:55:47.770691640Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 05:55:47.806364 containerd[2025]: time="2025-07-07T05:55:47.805817668Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08\"" Jul 7 05:55:47.808361 containerd[2025]: time="2025-07-07T05:55:47.807848980Z" level=info msg="StartContainer for \"dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08\"" Jul 7 05:55:47.860784 systemd[1]: run-containerd-runc-k8s.io-dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08-runc.pn59HS.mount: Deactivated successfully. Jul 7 05:55:47.871629 systemd[1]: Started cri-containerd-dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08.scope - libcontainer container dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08. 
Jul 7 05:55:47.919783 containerd[2025]: time="2025-07-07T05:55:47.919700093Z" level=info msg="StartContainer for \"dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08\" returns successfully" Jul 7 05:55:47.934755 systemd[1]: cri-containerd-dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08.scope: Deactivated successfully. Jul 7 05:55:47.972842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08-rootfs.mount: Deactivated successfully. Jul 7 05:55:47.983617 containerd[2025]: time="2025-07-07T05:55:47.982864049Z" level=info msg="shim disconnected" id=dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08 namespace=k8s.io Jul 7 05:55:47.983617 containerd[2025]: time="2025-07-07T05:55:47.982940861Z" level=warning msg="cleaning up after shim disconnected" id=dfd8f5bcec7c3bf1b9f139365d84de6f4ebd47e5aa608182565a785ccebe3b08 namespace=k8s.io Jul 7 05:55:47.983617 containerd[2025]: time="2025-07-07T05:55:47.982960733Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:48.777265 containerd[2025]: time="2025-07-07T05:55:48.776994653Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 05:55:48.813703 containerd[2025]: time="2025-07-07T05:55:48.808737677Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04\"" Jul 7 05:55:48.813703 containerd[2025]: time="2025-07-07T05:55:48.810502889Z" level=info msg="StartContainer for \"50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04\"" Jul 7 05:55:48.865652 systemd[1]: Started cri-containerd-50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04.scope 
- libcontainer container 50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04. Jul 7 05:55:48.918898 containerd[2025]: time="2025-07-07T05:55:48.918817602Z" level=info msg="StartContainer for \"50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04\" returns successfully" Jul 7 05:55:48.924716 systemd[1]: cri-containerd-50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04.scope: Deactivated successfully. Jul 7 05:55:48.966488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04-rootfs.mount: Deactivated successfully. Jul 7 05:55:48.976756 containerd[2025]: time="2025-07-07T05:55:48.976591422Z" level=info msg="shim disconnected" id=50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04 namespace=k8s.io Jul 7 05:55:48.976756 containerd[2025]: time="2025-07-07T05:55:48.976671174Z" level=warning msg="cleaning up after shim disconnected" id=50ac6e69121769871c262cf558d6ce851886fc9e217d3c7b53bd5c70bf75be04 namespace=k8s.io Jul 7 05:55:48.976756 containerd[2025]: time="2025-07-07T05:55:48.976708374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:49.275994 kubelet[3205]: I0707 05:55:49.275892 3205 setters.go:600] "Node became not ready" node="ip-172-31-23-146" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T05:55:49Z","lastTransitionTime":"2025-07-07T05:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 05:55:49.784648 containerd[2025]: time="2025-07-07T05:55:49.784588482Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 05:55:49.819831 containerd[2025]: time="2025-07-07T05:55:49.819765582Z" level=info 
msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c\"" Jul 7 05:55:49.821550 containerd[2025]: time="2025-07-07T05:55:49.821484090Z" level=info msg="StartContainer for \"79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c\"" Jul 7 05:55:49.881610 systemd[1]: Started cri-containerd-79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c.scope - libcontainer container 79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c. Jul 7 05:55:49.929774 systemd[1]: cri-containerd-79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c.scope: Deactivated successfully. Jul 7 05:55:49.933891 containerd[2025]: time="2025-07-07T05:55:49.933665899Z" level=info msg="StartContainer for \"79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c\" returns successfully" Jul 7 05:55:49.970771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c-rootfs.mount: Deactivated successfully. 
Jul 7 05:55:49.984468 containerd[2025]: time="2025-07-07T05:55:49.984001471Z" level=info msg="shim disconnected" id=79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c namespace=k8s.io Jul 7 05:55:49.984468 containerd[2025]: time="2025-07-07T05:55:49.984078271Z" level=warning msg="cleaning up after shim disconnected" id=79ddb59eb167f5bae14f6de96c1693d74086397c533cfdb97f7c1b6606ad226c namespace=k8s.io Jul 7 05:55:49.984468 containerd[2025]: time="2025-07-07T05:55:49.984098623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:50.792828 containerd[2025]: time="2025-07-07T05:55:50.792755335Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 05:55:50.825527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791355649.mount: Deactivated successfully. Jul 7 05:55:50.829325 containerd[2025]: time="2025-07-07T05:55:50.827130979Z" level=info msg="CreateContainer within sandbox \"28dfb41617e331859c66e1f0f0aa7c707d6b43fea22a658835a2d8bc567a3d5b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b\"" Jul 7 05:55:50.833340 containerd[2025]: time="2025-07-07T05:55:50.832666255Z" level=info msg="StartContainer for \"ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b\"" Jul 7 05:55:50.886645 systemd[1]: Started cri-containerd-ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b.scope - libcontainer container ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b. 
Jul 7 05:55:50.947458 containerd[2025]: time="2025-07-07T05:55:50.947392292Z" level=info msg="StartContainer for \"ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b\" returns successfully" Jul 7 05:55:51.735362 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 7 05:55:51.830498 kubelet[3205]: I0707 05:55:51.829706 3205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c48hc" podStartSLOduration=5.829684664 podStartE2EDuration="5.829684664s" podCreationTimestamp="2025-07-07 05:55:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:55:51.826852988 +0000 UTC m=+125.834426834" watchObservedRunningTime="2025-07-07 05:55:51.829684664 +0000 UTC m=+125.837258498" Jul 7 05:55:56.149623 systemd-networkd[1927]: lxc_health: Link UP Jul 7 05:55:56.150191 (udev-worker)[5953]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:56.153913 (udev-worker)[5954]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:55:56.182835 systemd-networkd[1927]: lxc_health: Gained carrier Jul 7 05:55:58.006516 systemd-networkd[1927]: lxc_health: Gained IPv6LL Jul 7 05:56:00.248250 ntpd[1985]: Listen normally on 14 lxc_health [fe80::90d9:91ff:fe0c:db7a%14]:123 Jul 7 05:56:00.248841 ntpd[1985]: 7 Jul 05:56:00 ntpd[1985]: Listen normally on 14 lxc_health [fe80::90d9:91ff:fe0c:db7a%14]:123 Jul 7 05:56:02.780388 systemd[1]: run-containerd-runc-k8s.io-ef6d68e716430ec8f6baf41b32b421ecfe9e64ed3f659ef4adc00e248cb0ba9b-runc.mIburF.mount: Deactivated successfully. Jul 7 05:56:02.917648 sshd[5126]: pam_unix(sshd:session): session closed for user core Jul 7 05:56:02.924910 systemd-logind[1991]: Session 29 logged out. Waiting for processes to exit. Jul 7 05:56:02.926746 systemd[1]: sshd@28-172.31.23.146:22-139.178.89.65:41830.service: Deactivated successfully. 
Jul 7 05:56:02.933464 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 05:56:02.941088 systemd-logind[1991]: Removed session 29.