Jul 7 05:52:53.253079 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 7 05:52:53.253124 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025 Jul 7 05:52:53.253150 kernel: KASLR disabled due to lack of seed Jul 7 05:52:53.253167 kernel: efi: EFI v2.7 by EDK II Jul 7 05:52:53.253182 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Jul 7 05:52:53.253198 kernel: ACPI: Early table checksum verification disabled Jul 7 05:52:53.253216 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 7 05:52:53.253250 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 7 05:52:53.253269 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 7 05:52:53.253285 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 7 05:52:53.253307 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 7 05:52:53.253323 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 7 05:52:53.253339 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 7 05:52:53.253354 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 7 05:52:53.253373 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 7 05:52:53.253393 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 7 05:52:53.253410 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 7 05:52:53.253427 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 7 05:52:53.253443 kernel: 
earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 7 05:52:53.253459 kernel: printk: bootconsole [uart0] enabled Jul 7 05:52:53.253476 kernel: NUMA: Failed to initialise from firmware Jul 7 05:52:53.253509 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 7 05:52:53.253706 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jul 7 05:52:53.253724 kernel: Zone ranges: Jul 7 05:52:53.253740 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 7 05:52:53.253757 kernel: DMA32 empty Jul 7 05:52:53.253780 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 7 05:52:53.253797 kernel: Movable zone start for each node Jul 7 05:52:53.253813 kernel: Early memory node ranges Jul 7 05:52:53.253830 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 7 05:52:53.253846 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 7 05:52:53.253862 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 7 05:52:53.253878 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 7 05:52:53.253895 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 7 05:52:53.253911 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 7 05:52:53.253927 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 7 05:52:53.253944 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jul 7 05:52:53.253960 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 7 05:52:53.253981 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 7 05:52:53.253998 kernel: psci: probing for conduit method from ACPI. Jul 7 05:52:53.254021 kernel: psci: PSCIv1.0 detected in firmware. 
Jul 7 05:52:53.254039 kernel: psci: Using standard PSCI v0.2 function IDs Jul 7 05:52:53.254057 kernel: psci: Trusted OS migration not required Jul 7 05:52:53.254078 kernel: psci: SMC Calling Convention v1.1 Jul 7 05:52:53.254096 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 7 05:52:53.254113 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 7 05:52:53.254131 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 7 05:52:53.254149 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 7 05:52:53.254166 kernel: Detected PIPT I-cache on CPU0 Jul 7 05:52:53.254184 kernel: CPU features: detected: GIC system register CPU interface Jul 7 05:52:53.254201 kernel: CPU features: detected: Spectre-v2 Jul 7 05:52:53.254218 kernel: CPU features: detected: Spectre-v3a Jul 7 05:52:53.254235 kernel: CPU features: detected: Spectre-BHB Jul 7 05:52:53.254253 kernel: CPU features: detected: ARM erratum 1742098 Jul 7 05:52:53.254274 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 7 05:52:53.254292 kernel: alternatives: applying boot alternatives Jul 7 05:52:53.254312 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:52:53.254330 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 7 05:52:53.254348 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 7 05:52:53.254365 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 05:52:53.254383 kernel: Fallback order for Node 0: 0 Jul 7 05:52:53.254400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jul 7 05:52:53.254417 kernel: Policy zone: Normal Jul 7 05:52:53.254435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 05:52:53.254452 kernel: software IO TLB: area num 2. Jul 7 05:52:53.254474 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 7 05:52:53.254547 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Jul 7 05:52:53.254566 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 7 05:52:53.254583 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 05:52:53.254602 kernel: rcu: RCU event tracing is enabled. Jul 7 05:52:53.254620 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 7 05:52:53.254638 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 05:52:53.254655 kernel: Tracing variant of Tasks RCU enabled. Jul 7 05:52:53.254673 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 7 05:52:53.254690 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 7 05:52:53.254708 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 7 05:52:53.254731 kernel: GICv3: 96 SPIs implemented Jul 7 05:52:53.254749 kernel: GICv3: 0 Extended SPIs implemented Jul 7 05:52:53.254767 kernel: Root IRQ handler: gic_handle_irq Jul 7 05:52:53.254784 kernel: GICv3: GICv3 features: 16 PPIs Jul 7 05:52:53.254801 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 7 05:52:53.254818 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 7 05:52:53.254836 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jul 7 05:52:53.254853 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jul 7 05:52:53.254871 kernel: GICv3: using LPI property table @0x00000004000d0000 Jul 7 05:52:53.254888 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 7 05:52:53.254905 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jul 7 05:52:53.254923 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 05:52:53.254945 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 7 05:52:53.254962 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 7 05:52:53.254980 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 7 05:52:53.254998 kernel: Console: colour dummy device 80x25 Jul 7 05:52:53.255017 kernel: printk: console [tty1] enabled Jul 7 05:52:53.255034 kernel: ACPI: Core revision 20230628 Jul 7 05:52:53.255052 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Jul 7 05:52:53.255070 kernel: pid_max: default: 32768 minimum: 301 Jul 7 05:52:53.255088 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 05:52:53.255110 kernel: landlock: Up and running. Jul 7 05:52:53.255128 kernel: SELinux: Initializing. Jul 7 05:52:53.255146 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:52:53.255164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 7 05:52:53.255182 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:52:53.255200 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 7 05:52:53.255218 kernel: rcu: Hierarchical SRCU implementation. Jul 7 05:52:53.255236 kernel: rcu: Max phase no-delay instances is 400. Jul 7 05:52:53.255254 kernel: Platform MSI: ITS@0x10080000 domain created Jul 7 05:52:53.255276 kernel: PCI/MSI: ITS@0x10080000 domain created Jul 7 05:52:53.255320 kernel: Remapping and enabling EFI services. Jul 7 05:52:53.255340 kernel: smp: Bringing up secondary CPUs ... Jul 7 05:52:53.255358 kernel: Detected PIPT I-cache on CPU1 Jul 7 05:52:53.255376 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 7 05:52:53.255394 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jul 7 05:52:53.255412 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 7 05:52:53.255430 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 05:52:53.255448 kernel: SMP: Total of 2 processors activated. 
Jul 7 05:52:53.255471 kernel: CPU features: detected: 32-bit EL0 Support Jul 7 05:52:53.255504 kernel: CPU features: detected: 32-bit EL1 Support Jul 7 05:52:53.255528 kernel: CPU features: detected: CRC32 instructions Jul 7 05:52:53.255560 kernel: CPU: All CPU(s) started at EL1 Jul 7 05:52:53.255584 kernel: alternatives: applying system-wide alternatives Jul 7 05:52:53.255602 kernel: devtmpfs: initialized Jul 7 05:52:53.255621 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 05:52:53.255639 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 7 05:52:53.255658 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 05:52:53.255677 kernel: SMBIOS 3.0.0 present. Jul 7 05:52:53.255701 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 7 05:52:53.255720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 05:52:53.255739 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 7 05:52:53.255757 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 7 05:52:53.255776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 7 05:52:53.255795 kernel: audit: initializing netlink subsys (disabled) Jul 7 05:52:53.255813 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1 Jul 7 05:52:53.255836 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 05:52:53.255854 kernel: cpuidle: using governor menu Jul 7 05:52:53.255873 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 7 05:52:53.255891 kernel: ASID allocator initialised with 65536 entries Jul 7 05:52:53.255910 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 05:52:53.255928 kernel: Serial: AMBA PL011 UART driver Jul 7 05:52:53.255947 kernel: Modules: 17488 pages in range for non-PLT usage Jul 7 05:52:53.255966 kernel: Modules: 509008 pages in range for PLT usage Jul 7 05:52:53.255984 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 05:52:53.256007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 05:52:53.256026 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 7 05:52:53.256045 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 7 05:52:53.256063 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 05:52:53.256082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 05:52:53.256125 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 7 05:52:53.256147 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 7 05:52:53.256166 kernel: ACPI: Added _OSI(Module Device) Jul 7 05:52:53.256185 kernel: ACPI: Added _OSI(Processor Device) Jul 7 05:52:53.256209 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 05:52:53.256228 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 05:52:53.256246 kernel: ACPI: Interpreter enabled Jul 7 05:52:53.256265 kernel: ACPI: Using GIC for interrupt routing Jul 7 05:52:53.256283 kernel: ACPI: MCFG table detected, 1 entries Jul 7 05:52:53.256301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 7 05:52:53.256659 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 05:52:53.256902 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 05:52:53.257166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 05:52:53.257422 kernel: acpi 
PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 7 05:52:53.258018 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 7 05:52:53.258053 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 7 05:52:53.258073 kernel: acpiphp: Slot [1] registered Jul 7 05:52:53.258092 kernel: acpiphp: Slot [2] registered Jul 7 05:52:53.258111 kernel: acpiphp: Slot [3] registered Jul 7 05:52:53.258130 kernel: acpiphp: Slot [4] registered Jul 7 05:52:53.258159 kernel: acpiphp: Slot [5] registered Jul 7 05:52:53.258178 kernel: acpiphp: Slot [6] registered Jul 7 05:52:53.258196 kernel: acpiphp: Slot [7] registered Jul 7 05:52:53.258214 kernel: acpiphp: Slot [8] registered Jul 7 05:52:53.258233 kernel: acpiphp: Slot [9] registered Jul 7 05:52:53.258251 kernel: acpiphp: Slot [10] registered Jul 7 05:52:53.258269 kernel: acpiphp: Slot [11] registered Jul 7 05:52:53.258287 kernel: acpiphp: Slot [12] registered Jul 7 05:52:53.258305 kernel: acpiphp: Slot [13] registered Jul 7 05:52:53.258324 kernel: acpiphp: Slot [14] registered Jul 7 05:52:53.258347 kernel: acpiphp: Slot [15] registered Jul 7 05:52:53.258365 kernel: acpiphp: Slot [16] registered Jul 7 05:52:53.258383 kernel: acpiphp: Slot [17] registered Jul 7 05:52:53.258402 kernel: acpiphp: Slot [18] registered Jul 7 05:52:53.258420 kernel: acpiphp: Slot [19] registered Jul 7 05:52:53.258438 kernel: acpiphp: Slot [20] registered Jul 7 05:52:53.258456 kernel: acpiphp: Slot [21] registered Jul 7 05:52:53.258475 kernel: acpiphp: Slot [22] registered Jul 7 05:52:53.258516 kernel: acpiphp: Slot [23] registered Jul 7 05:52:53.258545 kernel: acpiphp: Slot [24] registered Jul 7 05:52:53.258564 kernel: acpiphp: Slot [25] registered Jul 7 05:52:53.258582 kernel: acpiphp: Slot [26] registered Jul 7 05:52:53.258601 kernel: acpiphp: Slot [27] registered Jul 7 05:52:53.258619 kernel: acpiphp: Slot [28] registered Jul 7 05:52:53.258637 kernel: acpiphp: Slot [29] registered 
Jul 7 05:52:53.258655 kernel: acpiphp: Slot [30] registered Jul 7 05:52:53.258673 kernel: acpiphp: Slot [31] registered Jul 7 05:52:53.258692 kernel: PCI host bridge to bus 0000:00 Jul 7 05:52:53.258921 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 7 05:52:53.259126 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 7 05:52:53.259321 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 7 05:52:53.259540 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 7 05:52:53.259783 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jul 7 05:52:53.260017 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jul 7 05:52:53.260231 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jul 7 05:52:53.260468 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 7 05:52:53.260711 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jul 7 05:52:53.260929 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 05:52:53.261158 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 7 05:52:53.264471 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jul 7 05:52:53.264769 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jul 7 05:52:53.264991 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jul 7 05:52:53.265237 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 05:52:53.265459 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jul 7 05:52:53.265701 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jul 7 05:52:53.265920 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jul 7 05:52:53.266131 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jul 7 05:52:53.266348 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jul 7 05:52:53.266905 kernel: 
pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 7 05:52:53.267104 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 7 05:52:53.267291 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 7 05:52:53.267317 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 7 05:52:53.267336 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 7 05:52:53.267355 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 7 05:52:53.267374 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 7 05:52:53.267393 kernel: iommu: Default domain type: Translated Jul 7 05:52:53.267420 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 7 05:52:53.267440 kernel: efivars: Registered efivars operations Jul 7 05:52:53.267458 kernel: vgaarb: loaded Jul 7 05:52:53.267477 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 7 05:52:53.267515 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 05:52:53.267537 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 05:52:53.267556 kernel: pnp: PnP ACPI init Jul 7 05:52:53.267821 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 7 05:52:53.267852 kernel: pnp: PnP ACPI: found 1 devices Jul 7 05:52:53.267878 kernel: NET: Registered PF_INET protocol family Jul 7 05:52:53.267898 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 05:52:53.267917 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 7 05:52:53.267936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 05:52:53.267954 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 05:52:53.267973 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 7 05:52:53.269477 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 7 05:52:53.269547 kernel: UDP 
hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:52:53.269573 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 7 05:52:53.269601 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 05:52:53.269621 kernel: PCI: CLS 0 bytes, default 64 Jul 7 05:52:53.269639 kernel: kvm [1]: HYP mode not available Jul 7 05:52:53.269658 kernel: Initialise system trusted keyrings Jul 7 05:52:53.269677 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 7 05:52:53.269695 kernel: Key type asymmetric registered Jul 7 05:52:53.269714 kernel: Asymmetric key parser 'x509' registered Jul 7 05:52:53.269732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 05:52:53.269750 kernel: io scheduler mq-deadline registered Jul 7 05:52:53.269774 kernel: io scheduler kyber registered Jul 7 05:52:53.269792 kernel: io scheduler bfq registered Jul 7 05:52:53.270067 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 7 05:52:53.270097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 7 05:52:53.270117 kernel: ACPI: button: Power Button [PWRB] Jul 7 05:52:53.270136 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 7 05:52:53.270156 kernel: ACPI: button: Sleep Button [SLPB] Jul 7 05:52:53.270176 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 05:52:53.270203 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 7 05:52:53.270442 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 7 05:52:53.271569 kernel: printk: console [ttyS0] disabled Jul 7 05:52:53.271610 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 7 05:52:53.271631 kernel: printk: console [ttyS0] enabled Jul 7 05:52:53.271651 kernel: printk: bootconsole [uart0] disabled Jul 7 05:52:53.271671 kernel: thunder_xcv, ver 1.0 Jul 7 05:52:53.271690 kernel: thunder_bgx, ver 1.0 Jul 7 
05:52:53.271709 kernel: nicpf, ver 1.0 Jul 7 05:52:53.271737 kernel: nicvf, ver 1.0 Jul 7 05:52:53.272034 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 7 05:52:53.272263 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:52:52 UTC (1751867572) Jul 7 05:52:53.272293 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 05:52:53.272313 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jul 7 05:52:53.272332 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 7 05:52:53.272351 kernel: watchdog: Hard watchdog permanently disabled Jul 7 05:52:53.272369 kernel: NET: Registered PF_INET6 protocol family Jul 7 05:52:53.272398 kernel: Segment Routing with IPv6 Jul 7 05:52:53.272417 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 05:52:53.272436 kernel: NET: Registered PF_PACKET protocol family Jul 7 05:52:53.272455 kernel: Key type dns_resolver registered Jul 7 05:52:53.272475 kernel: registered taskstats version 1 Jul 7 05:52:53.273626 kernel: Loading compiled-in X.509 certificates Jul 7 05:52:53.273667 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94' Jul 7 05:52:53.273686 kernel: Key type .fscrypt registered Jul 7 05:52:53.273705 kernel: Key type fscrypt-provisioning registered Jul 7 05:52:53.273733 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 7 05:52:53.273753 kernel: ima: Allocated hash algorithm: sha1 Jul 7 05:52:53.273772 kernel: ima: No architecture policies found Jul 7 05:52:53.273791 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 7 05:52:53.273809 kernel: clk: Disabling unused clocks Jul 7 05:52:53.273829 kernel: Freeing unused kernel memory: 39424K Jul 7 05:52:53.273847 kernel: Run /init as init process Jul 7 05:52:53.273866 kernel: with arguments: Jul 7 05:52:53.273884 kernel: /init Jul 7 05:52:53.273903 kernel: with environment: Jul 7 05:52:53.273926 kernel: HOME=/ Jul 7 05:52:53.273944 kernel: TERM=linux Jul 7 05:52:53.273962 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 05:52:53.273987 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:52:53.274011 systemd[1]: Detected virtualization amazon. Jul 7 05:52:53.274032 systemd[1]: Detected architecture arm64. Jul 7 05:52:53.274053 systemd[1]: Running in initrd. Jul 7 05:52:53.274077 systemd[1]: No hostname configured, using default hostname. Jul 7 05:52:53.274097 systemd[1]: Hostname set to . Jul 7 05:52:53.274118 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:52:53.274139 systemd[1]: Queued start job for default target initrd.target. Jul 7 05:52:53.274159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 05:52:53.274180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:52:53.274202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 05:52:53.274222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 7 05:52:53.274249 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 05:52:53.274270 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 05:52:53.274294 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 05:52:53.274315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 05:52:53.274336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:52:53.274356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:52:53.274377 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:52:53.274403 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:52:53.274423 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:52:53.274443 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:52:53.274464 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 05:52:53.274484 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 05:52:53.275641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 05:52:53.275689 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 05:52:53.275713 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:52:53.275745 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 05:52:53.275767 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:52:53.275787 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:52:53.275808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 05:52:53.275829 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jul 7 05:52:53.275850 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 05:52:53.275870 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 05:52:53.275890 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:52:53.275911 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:52:53.275936 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:52:53.275956 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 05:52:53.276029 systemd-journald[251]: Collecting audit messages is disabled. Jul 7 05:52:53.276077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:52:53.276103 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 05:52:53.276125 systemd-journald[251]: Journal started Jul 7 05:52:53.276166 systemd-journald[251]: Runtime Journal (/run/log/journal/ec24a22d21d4b4913f76799315b57944) is 8.0M, max 75.3M, 67.3M free. Jul 7 05:52:53.279165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 05:52:53.263369 systemd-modules-load[252]: Inserted module 'overlay' Jul 7 05:52:53.292551 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:52:53.319538 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 05:52:53.323069 systemd-modules-load[252]: Inserted module 'br_netfilter' Jul 7 05:52:53.325142 kernel: Bridge firewalling registered Jul 7 05:52:53.325391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:52:53.331566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:52:53.345823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 05:52:53.346530 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 05:52:53.356873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:52:53.367997 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 05:52:53.379871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:52:53.388734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:52:53.421124 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:52:53.435808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:52:53.441049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:52:53.449875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:52:53.463806 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 05:52:53.505167 dracut-cmdline[288]: dracut-dracut-053 Jul 7 05:52:53.513684 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b Jul 7 05:52:53.541460 systemd-resolved[286]: Positive Trust Anchors: Jul 7 05:52:53.541534 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:52:53.541599 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:52:53.681528 kernel: SCSI subsystem initialized Jul 7 05:52:53.688539 kernel: Loading iSCSI transport class v2.0-870. Jul 7 05:52:53.701533 kernel: iscsi: registered transport (tcp) Jul 7 05:52:53.724798 kernel: iscsi: registered transport (qla4xxx) Jul 7 05:52:53.724876 kernel: QLogic iSCSI HBA Driver Jul 7 05:52:53.790754 kernel: random: crng init done Jul 7 05:52:53.791153 systemd-resolved[286]: Defaulting to hostname 'linux'. Jul 7 05:52:53.795559 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:52:53.801656 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:52:53.829451 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 05:52:53.840965 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 05:52:53.885288 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 7 05:52:53.885370 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:52:53.885398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:52:53.958570 kernel: raid6: neonx8 gen() 6632 MB/s
Jul 7 05:52:53.975561 kernel: raid6: neonx4 gen() 6405 MB/s
Jul 7 05:52:53.992571 kernel: raid6: neonx2 gen() 5323 MB/s
Jul 7 05:52:54.009552 kernel: raid6: neonx1 gen() 3900 MB/s
Jul 7 05:52:54.027548 kernel: raid6: int64x8 gen() 3779 MB/s
Jul 7 05:52:54.045547 kernel: raid6: int64x4 gen() 3659 MB/s
Jul 7 05:52:54.062555 kernel: raid6: int64x2 gen() 3520 MB/s
Jul 7 05:52:54.081105 kernel: raid6: int64x1 gen() 2746 MB/s
Jul 7 05:52:54.081177 kernel: raid6: using algorithm neonx8 gen() 6632 MB/s
Jul 7 05:52:54.100175 kernel: raid6: .... xor() 4817 MB/s, rmw enabled
Jul 7 05:52:54.100257 kernel: raid6: using neon recovery algorithm
Jul 7 05:52:54.110217 kernel: xor: measuring software checksum speed
Jul 7 05:52:54.110305 kernel: 8regs : 11022 MB/sec
Jul 7 05:52:54.111482 kernel: 32regs : 11737 MB/sec
Jul 7 05:52:54.113949 kernel: arm64_neon : 8966 MB/sec
Jul 7 05:52:54.114031 kernel: xor: using function: 32regs (11737 MB/sec)
Jul 7 05:52:54.204573 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:52:54.228675 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:52:54.246947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:52:54.284202 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jul 7 05:52:54.294107 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:52:54.316834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:52:54.367121 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jul 7 05:52:54.432466 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:52:54.446882 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:52:54.577113 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:52:54.594123 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:52:54.643388 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:52:54.650371 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:52:54.657736 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:52:54.670174 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:52:54.682936 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:52:54.729107 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:52:54.816156 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 05:52:54.816238 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 7 05:52:54.830432 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 7 05:52:54.830881 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 7 05:52:54.831453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:52:54.833980 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:54.839952 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:54.843750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:52:54.844062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:54.846881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:54.866704 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:29:20:7d:83:97
Jul 7 05:52:54.867998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:52:54.880534 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 7 05:52:54.880614 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 7 05:52:54.886298 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:52:54.893574 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 7 05:52:54.907459 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 05:52:54.907590 kernel: GPT:9289727 != 16777215
Jul 7 05:52:54.907620 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 05:52:54.912449 kernel: GPT:9289727 != 16777215
Jul 7 05:52:54.912568 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 05:52:54.915553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:54.925240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:52:54.934854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:52:54.995411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:52:55.013858 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Jul 7 05:52:55.050541 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Jul 7 05:52:55.118227 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 7 05:52:55.153956 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 7 05:52:55.160707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 7 05:52:55.176574 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 7 05:52:55.194396 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 05:52:55.208880 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:52:55.223023 disk-uuid[660]: Primary Header is updated.
Jul 7 05:52:55.223023 disk-uuid[660]: Secondary Entries is updated.
Jul 7 05:52:55.223023 disk-uuid[660]: Secondary Header is updated.
Jul 7 05:52:55.237529 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:55.247546 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:55.256581 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:56.259896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 7 05:52:56.259981 disk-uuid[661]: The operation has completed successfully.
Jul 7 05:52:56.462401 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:52:56.464774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:52:56.523833 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:52:56.539558 sh[1004]: Success
Jul 7 05:52:56.566193 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:52:56.699199 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:52:56.709634 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:52:56.717847 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:52:56.766118 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:52:56.766196 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:56.766223 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:52:56.768020 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:52:56.769409 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:52:56.811546 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 7 05:52:56.827548 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:52:56.831518 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:52:56.841922 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:52:56.847782 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:52:56.882870 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:56.882955 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:56.884897 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:56.890562 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:56.914322 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:52:56.920340 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:56.932347 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:52:56.946954 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:52:57.101443 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:52:57.121927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:52:57.175647 ignition[1109]: Ignition 2.19.0
Jul 7 05:52:57.175669 ignition[1109]: Stage: fetch-offline
Jul 7 05:52:57.177472 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:57.181569 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:57.185124 ignition[1109]: Ignition finished successfully
Jul 7 05:52:57.192191 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:52:57.206061 systemd-networkd[1201]: lo: Link UP
Jul 7 05:52:57.206082 systemd-networkd[1201]: lo: Gained carrier
Jul 7 05:52:57.211034 systemd-networkd[1201]: Enumeration completed
Jul 7 05:52:57.212009 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:57.212015 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:52:57.212954 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:52:57.214966 systemd-networkd[1201]: eth0: Link UP
Jul 7 05:52:57.214974 systemd-networkd[1201]: eth0: Gained carrier
Jul 7 05:52:57.214990 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:52:57.218420 systemd[1]: Reached target network.target - Network.
Jul 7 05:52:57.238943 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.20.83/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 05:52:57.240189 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 05:52:57.274835 ignition[1204]: Ignition 2.19.0
Jul 7 05:52:57.274866 ignition[1204]: Stage: fetch
Jul 7 05:52:57.275572 ignition[1204]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:57.275600 ignition[1204]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:57.275758 ignition[1204]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:57.300342 ignition[1204]: PUT result: OK
Jul 7 05:52:57.305596 ignition[1204]: parsed url from cmdline: ""
Jul 7 05:52:57.305668 ignition[1204]: no config URL provided
Jul 7 05:52:57.305684 ignition[1204]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:52:57.305711 ignition[1204]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:52:57.305755 ignition[1204]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:57.307873 ignition[1204]: PUT result: OK
Jul 7 05:52:57.314742 ignition[1204]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 7 05:52:57.319110 ignition[1204]: GET result: OK
Jul 7 05:52:57.319287 ignition[1204]: parsing config with SHA512: 37b4b4c1eb3d2f8215710507575e9159f86f81846b70bef82298628bb5e6e082f031a378c5e545b296d4ef2a38c7feb778c82c92103ecef13052ea1d6e82115b
Jul 7 05:52:57.330555 unknown[1204]: fetched base config from "system"
Jul 7 05:52:57.330807 unknown[1204]: fetched base config from "system"
Jul 7 05:52:57.331806 ignition[1204]: fetch: fetch complete
Jul 7 05:52:57.330831 unknown[1204]: fetched user config from "aws"
Jul 7 05:52:57.331817 ignition[1204]: fetch: fetch passed
Jul 7 05:52:57.337670 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 05:52:57.331903 ignition[1204]: Ignition finished successfully
Jul 7 05:52:57.354768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:52:57.383288 ignition[1211]: Ignition 2.19.0
Jul 7 05:52:57.383316 ignition[1211]: Stage: kargs
Jul 7 05:52:57.383960 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:57.383985 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:57.384129 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:57.386196 ignition[1211]: PUT result: OK
Jul 7 05:52:57.395216 ignition[1211]: kargs: kargs passed
Jul 7 05:52:57.395322 ignition[1211]: Ignition finished successfully
Jul 7 05:52:57.399776 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:52:57.416755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:52:57.442316 ignition[1218]: Ignition 2.19.0
Jul 7 05:52:57.442348 ignition[1218]: Stage: disks
Jul 7 05:52:57.444003 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:57.444031 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:57.444192 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:57.446313 ignition[1218]: PUT result: OK
Jul 7 05:52:57.459598 ignition[1218]: disks: disks passed
Jul 7 05:52:57.459938 ignition[1218]: Ignition finished successfully
Jul 7 05:52:57.463524 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:52:57.466892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:52:57.473125 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:52:57.475822 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:52:57.480905 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:52:57.484841 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:52:57.501806 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:52:57.551350 systemd-fsck[1226]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 05:52:57.559773 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:52:57.572468 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:52:57.655532 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:52:57.656943 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:52:57.661369 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:52:57.680738 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:57.690774 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:52:57.695624 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 05:52:57.695815 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:52:57.720930 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1245)
Jul 7 05:52:57.695873 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:52:57.729063 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:57.729157 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:57.729185 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:57.738047 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:52:57.745985 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:57.751019 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:52:57.759259 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:57.867782 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:52:57.878546 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:52:57.890110 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:52:57.899122 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:52:58.060600 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:52:58.071771 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:52:58.076049 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:52:58.102964 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:52:58.106128 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:58.138895 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:52:58.158603 ignition[1359]: INFO : Ignition 2.19.0
Jul 7 05:52:58.158603 ignition[1359]: INFO : Stage: mount
Jul 7 05:52:58.163138 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:58.163138 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:58.163138 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:58.171855 ignition[1359]: INFO : PUT result: OK
Jul 7 05:52:58.177401 ignition[1359]: INFO : mount: mount passed
Jul 7 05:52:58.177401 ignition[1359]: INFO : Ignition finished successfully
Jul 7 05:52:58.181867 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:52:58.200487 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:52:58.225679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:52:58.248869 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1370)
Jul 7 05:52:58.248934 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:52:58.250766 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:52:58.252108 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 7 05:52:58.257526 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 7 05:52:58.261796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:52:58.306654 ignition[1387]: INFO : Ignition 2.19.0
Jul 7 05:52:58.308785 ignition[1387]: INFO : Stage: files
Jul 7 05:52:58.308785 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:52:58.312635 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:52:58.312635 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:52:58.319305 ignition[1387]: INFO : PUT result: OK
Jul 7 05:52:58.325340 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:52:58.330804 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:52:58.330804 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:52:58.339071 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:52:58.342548 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:52:58.345949 unknown[1387]: wrote ssh authorized keys file for user: core
Jul 7 05:52:58.349093 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:52:58.352561 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:58.356622 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 05:52:58.356622 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:58.356622 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:58.449340 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 05:52:58.594687 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:52:58.594687 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:52:58.594687 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 7 05:52:58.942715 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Jul 7 05:52:58.980667 systemd-networkd[1201]: eth0: Gained IPv6LL
Jul 7 05:52:59.087478 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:52:59.087478 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:59.087478 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:59.105594 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:52:59.616463 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Jul 7 05:52:59.996055 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:52:59.996055 ignition[1387]: INFO : files: op(d): [started] processing unit "containerd.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(d): [finished] processing unit "containerd.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:53:00.004055 ignition[1387]: INFO : files: files passed
Jul 7 05:53:00.050295 ignition[1387]: INFO : Ignition finished successfully
Jul 7 05:53:00.044478 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:53:00.058765 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:53:00.079885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:53:00.092601 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:53:00.098434 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:53:00.113391 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:53:00.113391 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:53:00.121775 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:53:00.129680 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:53:00.133922 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:53:00.154646 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:53:00.215372 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:53:00.215614 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:53:00.218983 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:53:00.221556 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:53:00.224262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:53:00.237674 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:53:00.280582 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:53:00.295750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:53:00.323424 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:53:00.326864 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:53:00.332341 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:53:00.336597 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:53:00.336853 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:53:00.347893 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:53:00.350980 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:53:00.355185 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:53:00.359914 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:53:00.365630 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:53:00.369309 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:53:00.379528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:53:00.382672 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:53:00.385296 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:53:00.387870 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:53:00.390071 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:53:00.390333 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:53:00.400152 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:53:00.405466 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:53:00.406054 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:53:00.416655 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:53:00.427996 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:53:00.429289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:53:00.436469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:53:00.436768 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:53:00.440041 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:53:00.440305 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:53:00.465759 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:53:00.467812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:53:00.478167 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:53:00.478600 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:53:00.481527 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:53:00.481765 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:53:00.505626 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:53:00.510788 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:53:00.529860 ignition[1440]: INFO : Ignition 2.19.0
Jul 7 05:53:00.529860 ignition[1440]: INFO : Stage: umount
Jul 7 05:53:00.535425 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:53:00.535425 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 7 05:53:00.535425 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 7 05:53:00.545075 ignition[1440]: INFO : PUT result: OK
Jul 7 05:53:00.553267 ignition[1440]: INFO : umount: umount passed
Jul 7 05:53:00.553267 ignition[1440]: INFO : Ignition finished successfully
Jul 7 05:53:00.553125 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:53:00.556696 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:53:00.558632 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:53:00.569635 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:53:00.569746 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:53:00.572208 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:53:00.572309 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:53:00.576988 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 05:53:00.577089 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 05:53:00.580963 systemd[1]: Stopped target network.target - Network.
Jul 7 05:53:00.584928 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:53:00.585052 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:53:00.587842 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:53:00.589820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:53:00.596077 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:53:00.600720 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:53:00.602764 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:53:00.604949 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:53:00.605035 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:53:00.609475 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:53:00.609584 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:53:00.613048 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:53:00.613169 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:53:00.615996 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:53:00.616094 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:53:00.624018 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:53:00.631128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 05:53:00.637860 systemd-networkd[1201]: eth0: DHCPv6 lease lost
Jul 7 05:53:00.648429 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 05:53:00.652106 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 05:53:00.673162 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 05:53:00.673421 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 05:53:00.679960 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 05:53:00.680058 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:53:00.696818 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 05:53:00.699069 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 05:53:00.700046 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:53:00.716203 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:53:00.716351 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:53:00.722868 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 05:53:00.722987 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:53:00.732052 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 05:53:00.732174 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:53:00.743594 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:53:00.749941 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 05:53:00.750159 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 05:53:00.775384 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 05:53:00.776576 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:53:00.787106 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 05:53:00.787259 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:53:00.793369 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 05:53:00.793463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:53:00.803298 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 05:53:00.803436 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:53:00.806910 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 05:53:00.807027 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:53:00.817588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:53:00.817728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:53:00.820915 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 05:53:00.821033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 05:53:00.845885 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 05:53:00.849029 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 05:53:00.849160 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:53:00.854516 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 05:53:00.854631 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:53:00.870929 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 05:53:00.871162 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:53:00.876204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:53:00.876324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:53:00.879951 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 05:53:00.880402 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 05:53:00.892468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 05:53:00.892695 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 05:53:00.897786 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 05:53:00.920388 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 05:53:00.943587 systemd[1]: Switching root.
Jul 7 05:53:00.985563 systemd-journald[251]: Journal stopped
Jul 7 05:53:03.029877 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jul 7 05:53:03.030021 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 05:53:03.030083 kernel: SELinux: policy capability open_perms=1
Jul 7 05:53:03.030118 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 05:53:03.030151 kernel: SELinux: policy capability always_check_network=0
Jul 7 05:53:03.030184 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 05:53:03.030216 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 05:53:03.030258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 05:53:03.030297 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 05:53:03.030331 kernel: audit: type=1403 audit(1751867581.331:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 05:53:03.030372 systemd[1]: Successfully loaded SELinux policy in 51.770ms.
Jul 7 05:53:03.030424 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.499ms.
Jul 7 05:53:03.030461 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:53:03.030531 systemd[1]: Detected virtualization amazon.
Jul 7 05:53:03.030575 systemd[1]: Detected architecture arm64.
Jul 7 05:53:03.030609 systemd[1]: Detected first boot.
Jul 7 05:53:03.030651 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 05:53:03.030687 zram_generator::config[1505]: No configuration found.
Jul 7 05:53:03.030726 systemd[1]: Populated /etc with preset unit settings.
Jul 7 05:53:03.030759 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 05:53:03.030793 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 05:53:03.030829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 05:53:03.030862 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 05:53:03.030896 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 05:53:03.030931 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 05:53:03.030972 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 05:53:03.031007 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 05:53:03.031041 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 05:53:03.031072 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 05:53:03.031106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:53:03.031138 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:53:03.031171 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 05:53:03.031204 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 05:53:03.031243 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 05:53:03.031278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:53:03.031309 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 05:53:03.031339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:53:03.031370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 05:53:03.031400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:53:03.031432 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:53:03.031465 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:53:03.031544 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:53:03.031582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 05:53:03.031616 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 05:53:03.031648 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:53:03.031681 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:53:03.031720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:53:03.031752 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:53:03.031785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:53:03.031816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 05:53:03.031849 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 05:53:03.031888 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 05:53:03.031920 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 05:53:03.031953 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 05:53:03.031987 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 05:53:03.032018 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 05:53:03.032049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 05:53:03.032082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:53:03.032125 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:53:03.032162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 05:53:03.032192 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:53:03.032227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:53:03.032257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:53:03.032297 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 05:53:03.032327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:53:03.032357 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 05:53:03.032387 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 7 05:53:03.032424 kernel: fuse: init (API version 7.39)
Jul 7 05:53:03.032459 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 7 05:53:03.032488 kernel: ACPI: bus type drm_connector registered
Jul 7 05:53:03.032591 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:53:03.032623 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:53:03.032655 kernel: loop: module loaded
Jul 7 05:53:03.032688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 05:53:03.032718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 05:53:03.032748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:53:03.032780 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 05:53:03.032817 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 05:53:03.032848 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 05:53:03.032882 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 05:53:03.032914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 05:53:03.033008 systemd-journald[1610]: Collecting audit messages is disabled.
Jul 7 05:53:03.033070 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 05:53:03.033101 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 05:53:03.033134 systemd-journald[1610]: Journal started
Jul 7 05:53:03.033183 systemd-journald[1610]: Runtime Journal (/run/log/journal/ec24a22d21d4b4913f76799315b57944) is 8.0M, max 75.3M, 67.3M free.
Jul 7 05:53:03.041334 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:53:03.045489 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:53:03.049273 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 05:53:03.049773 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 05:53:03.054478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:53:03.054956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:53:03.059120 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:53:03.059850 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:53:03.063409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:53:03.063834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:53:03.067401 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 05:53:03.070035 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 05:53:03.073407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:53:03.075986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:53:03.079549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:53:03.082977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 05:53:03.087432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 05:53:03.119433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 05:53:03.129744 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 05:53:03.144756 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 05:53:03.148358 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 05:53:03.160926 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 05:53:03.168044 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 05:53:03.170943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:53:03.178866 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 05:53:03.188365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:53:03.205691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:53:03.216643 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:53:03.233345 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 05:53:03.241956 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 05:53:03.266715 systemd-journald[1610]: Time spent on flushing to /var/log/journal/ec24a22d21d4b4913f76799315b57944 is 75.265ms for 900 entries.
Jul 7 05:53:03.266715 systemd-journald[1610]: System Journal (/var/log/journal/ec24a22d21d4b4913f76799315b57944) is 8.0M, max 195.6M, 187.6M free.
Jul 7 05:53:03.361802 systemd-journald[1610]: Received client request to flush runtime journal.
Jul 7 05:53:03.299328 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 05:53:03.302622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 05:53:03.354853 systemd-tmpfiles[1654]: ACLs are not supported, ignoring.
Jul 7 05:53:03.354878 systemd-tmpfiles[1654]: ACLs are not supported, ignoring.
Jul 7 05:53:03.365941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:53:03.375780 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 05:53:03.398359 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:53:03.413855 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 05:53:03.428481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:53:03.439863 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 05:53:03.477582 udevadm[1672]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 05:53:03.544108 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 05:53:03.559862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:53:03.596979 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Jul 7 05:53:03.597688 systemd-tmpfiles[1676]: ACLs are not supported, ignoring.
Jul 7 05:53:03.611392 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:53:04.363263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 05:53:04.374865 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:53:04.447322 systemd-udevd[1682]: Using default interface naming scheme 'v255'.
Jul 7 05:53:04.500267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:53:04.522335 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:53:04.569686 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 05:53:04.703267 (udev-worker)[1696]: Network interface NamePolicy= disabled on kernel command line.
Jul 7 05:53:04.717609 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 05:53:04.753385 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 7 05:53:04.946846 systemd-networkd[1690]: lo: Link UP
Jul 7 05:53:04.946875 systemd-networkd[1690]: lo: Gained carrier
Jul 7 05:53:04.950360 systemd-networkd[1690]: Enumeration completed
Jul 7 05:53:04.950644 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:53:04.958152 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:53:04.958178 systemd-networkd[1690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:53:04.962015 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 05:53:04.965429 systemd-networkd[1690]: eth0: Link UP
Jul 7 05:53:04.965943 systemd-networkd[1690]: eth0: Gained carrier
Jul 7 05:53:04.965981 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:53:04.974660 systemd-networkd[1690]: eth0: DHCPv4 address 172.31.20.83/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 7 05:53:04.987684 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:53:05.014552 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1698)
Jul 7 05:53:05.085105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:53:05.324157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 7 05:53:05.328132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 05:53:05.332027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:53:05.345941 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 05:53:05.378408 lvm[1811]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:53:05.422675 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 05:53:05.430410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:53:05.441913 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 05:53:05.465969 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 05:53:05.510723 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 05:53:05.519469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:53:05.522744 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 05:53:05.522977 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:53:05.525743 systemd[1]: Reached target machines.target - Containers.
Jul 7 05:53:05.530407 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 05:53:05.539874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 05:53:05.551873 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 05:53:05.555797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:53:05.568849 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 05:53:05.579125 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 05:53:05.597986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 05:53:05.605755 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 05:53:05.628087 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 05:53:05.654708 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 05:53:05.656303 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 05:53:05.665547 kernel: loop0: detected capacity change from 0 to 52536
Jul 7 05:53:05.704875 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 05:53:05.746575 kernel: loop1: detected capacity change from 0 to 114328
Jul 7 05:53:05.801661 kernel: loop2: detected capacity change from 0 to 114432
Jul 7 05:53:05.859552 kernel: loop3: detected capacity change from 0 to 203944
Jul 7 05:53:06.064732 kernel: loop4: detected capacity change from 0 to 52536
Jul 7 05:53:06.091559 kernel: loop5: detected capacity change from 0 to 114328
Jul 7 05:53:06.124341 kernel: loop6: detected capacity change from 0 to 114432
Jul 7 05:53:06.160118 kernel: loop7: detected capacity change from 0 to 203944
Jul 7 05:53:06.191613 (sd-merge)[1836]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 7 05:53:06.193544 (sd-merge)[1836]: Merged extensions into '/usr'.
Jul 7 05:53:06.205751 systemd[1]: Reloading requested from client PID 1822 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 05:53:06.205786 systemd[1]: Reloading...
Jul 7 05:53:06.370884 ldconfig[1818]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 05:53:06.388551 zram_generator::config[1865]: No configuration found.
Jul 7 05:53:06.596687 systemd-networkd[1690]: eth0: Gained IPv6LL
Jul 7 05:53:06.684355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:53:06.852237 systemd[1]: Reloading finished in 645 ms.
Jul 7 05:53:06.884763 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 05:53:06.889057 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 05:53:06.892589 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 05:53:06.911962 systemd[1]: Starting ensure-sysext.service...
Jul 7 05:53:06.924050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:53:06.943643 systemd[1]: Reloading requested from client PID 1926 ('systemctl') (unit ensure-sysext.service)...
Jul 7 05:53:06.943684 systemd[1]: Reloading...
Jul 7 05:53:06.983038 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 05:53:06.985122 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 05:53:06.989831 systemd-tmpfiles[1927]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 05:53:06.990375 systemd-tmpfiles[1927]: ACLs are not supported, ignoring.
Jul 7 05:53:06.990534 systemd-tmpfiles[1927]: ACLs are not supported, ignoring.
Jul 7 05:53:06.998434 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:53:06.998472 systemd-tmpfiles[1927]: Skipping /boot
Jul 7 05:53:07.027332 systemd-tmpfiles[1927]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 05:53:07.027371 systemd-tmpfiles[1927]: Skipping /boot
Jul 7 05:53:07.133550 zram_generator::config[1959]: No configuration found.
Jul 7 05:53:07.406387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:53:07.571948 systemd[1]: Reloading finished in 627 ms.
Jul 7 05:53:07.606732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:53:07.627035 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:53:07.636850 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 05:53:07.649840 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 05:53:07.671923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:53:07.681916 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 05:53:07.720834 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:53:07.728411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 05:53:07.753038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 05:53:07.774034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 05:53:07.778862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:53:07.788797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 05:53:07.799266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 05:53:07.801984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 05:53:07.802850 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 05:53:07.814072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 05:53:07.814539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 05:53:07.826251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 05:53:07.826921 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 05:53:07.846655 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 05:53:07.853692 systemd[1]: Finished ensure-sysext.service.
Jul 7 05:53:07.860957 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 05:53:07.882313 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 05:53:07.882779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 05:53:07.900177 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 05:53:07.906895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 05:53:07.912728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 05:53:07.912866 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 05:53:07.923931 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 05:53:07.944710 augenrules[2057]: No rules
Jul 7 05:53:07.948309 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:53:07.975380 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 05:53:08.007616 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 05:53:08.011793 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 05:53:08.049772 systemd-resolved[2019]: Positive Trust Anchors: Jul 7 05:53:08.050441 systemd-resolved[2019]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:53:08.050562 systemd-resolved[2019]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:53:08.067099 systemd-resolved[2019]: Defaulting to hostname 'linux'. Jul 7 05:53:08.071395 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 05:53:08.074202 systemd[1]: Reached target network.target - Network. Jul 7 05:53:08.076521 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:53:08.079178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:53:08.082160 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:53:08.084956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:53:08.087966 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 7 05:53:08.091326 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:53:08.094202 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:53:08.097007 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:53:08.103023 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:53:08.103091 systemd[1]: Reached target paths.target - Path Units. Jul 7 05:53:08.105162 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:53:08.108290 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:53:08.115867 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:53:08.121604 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:53:08.127678 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:53:08.130582 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:53:08.133192 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:53:08.137024 systemd[1]: System is tainted: cgroupsv1 Jul 7 05:53:08.137149 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:53:08.137234 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:53:08.144751 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:53:08.153818 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 05:53:08.169027 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:53:08.176033 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:53:08.184754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 7 05:53:08.188221 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:53:08.214918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:08.233323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:53:08.249817 systemd[1]: Started ntpd.service - Network Time Service. Jul 7 05:53:08.258124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:53:08.281540 jq[2074]: false Jul 7 05:53:08.288137 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 05:53:08.307326 dbus-daemon[2073]: [system] SELinux support is enabled Jul 7 05:53:08.312017 dbus-daemon[2073]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1690 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 05:53:08.318785 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 7 05:53:08.333827 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 05:53:08.348901 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:53:08.373920 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:53:08.380659 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:53:08.395919 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:53:08.398846 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: ntpd 4.2.8p17@1.4004-o Sun Jul 6 21:48:27 UTC 2025 (1): Starting Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: ---------------------------------------------------- Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: corporation. Support and training for ntp-4 are Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: available at https://www.nwtime.org/support Jul 7 05:53:08.404283 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: ---------------------------------------------------- Jul 7 05:53:08.395983 ntpd[2080]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 7 05:53:08.396005 ntpd[2080]: ---------------------------------------------------- Jul 7 05:53:08.396028 ntpd[2080]: ntp-4 is maintained by Network Time Foundation, Jul 7 05:53:08.396048 ntpd[2080]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 7 05:53:08.396067 ntpd[2080]: corporation. Support and training for ntp-4 are Jul 7 05:53:08.396086 ntpd[2080]: available at https://www.nwtime.org/support Jul 7 05:53:08.396107 ntpd[2080]: ---------------------------------------------------- Jul 7 05:53:08.411338 ntpd[2080]: proto: precision = 0.096 usec (-23) Jul 7 05:53:08.415816 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: proto: precision = 0.096 usec (-23) Jul 7 05:53:08.415816 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: basedate set to 2025-06-24 Jul 7 05:53:08.415816 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: gps base set to 2025-06-29 (week 2373) Jul 7 05:53:08.413908 ntpd[2080]: basedate set to 2025-06-24 Jul 7 05:53:08.413947 ntpd[2080]: gps base set to 2025-06-29 (week 2373) Jul 7 05:53:08.427739 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:53:08.427870 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:53:08.428117 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen and drop on 0 v6wildcard [::]:123 Jul 7 05:53:08.428117 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 7 05:53:08.428237 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:53:08.428204 ntpd[2080]: Listen normally on 2 lo 127.0.0.1:123 Jul 7 05:53:08.428366 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen normally on 3 eth0 172.31.20.83:123 Jul 7 05:53:08.428366 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen normally on 4 lo [::1]:123 Jul 7 05:53:08.428278 ntpd[2080]: Listen normally on 3 eth0 172.31.20.83:123 Jul 7 05:53:08.428353 ntpd[2080]: Listen normally on 4 lo [::1]:123 Jul 7 05:53:08.428600 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listen normally on 5 eth0 [fe80::429:20ff:fe7d:8397%2]:123 Jul 7 05:53:08.428436 ntpd[2080]: Listen normally on 5 eth0 [fe80::429:20ff:fe7d:8397%2]:123 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found loop4 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found loop5 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found loop6 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found loop7 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p1 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p2 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p3 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found usr Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p4 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p6 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p7 Jul 7 05:53:08.434535 extend-filesystems[2075]: Found nvme0n1p9 Jul 7 05:53:08.434535 extend-filesystems[2075]: Checking size of /dev/nvme0n1p9 Jul 7 05:53:08.435737 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:53:08.469811 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: Listening on routing socket on fd #22 for interface updates Jul 7 05:53:08.438555 ntpd[2080]: Listening on routing socket on fd #22 for interface updates Jul 7 05:53:08.476701 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:53:08.484593 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:08.484669 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:08.484870 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:08.484870 ntpd[2080]: 7 Jul 05:53:08 ntpd[2080]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 7 05:53:08.527413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:53:08.528047 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 05:53:08.546531 extend-filesystems[2075]: Resized partition /dev/nvme0n1p9 Jul 7 05:53:08.558731 extend-filesystems[2121]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:53:08.553649 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 05:53:08.554214 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:53:08.582603 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:53:08.600542 jq[2098]: true Jul 7 05:53:08.618568 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 7 05:53:08.627777 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:53:08.633190 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:53:08.682568 coreos-metadata[2071]: Jul 07 05:53:08.681 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:53:08.701771 coreos-metadata[2071]: Jul 07 05:53:08.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 7 05:53:08.701771 coreos-metadata[2071]: Jul 07 05:53:08.693 INFO Fetch successful Jul 7 05:53:08.701771 coreos-metadata[2071]: Jul 07 05:53:08.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 7 05:53:08.702738 coreos-metadata[2071]: Jul 07 05:53:08.702 INFO Fetch successful Jul 7 05:53:08.702738 coreos-metadata[2071]: Jul 07 05:53:08.702 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 7 05:53:08.704571 coreos-metadata[2071]: Jul 07 05:53:08.703 INFO Fetch successful Jul 7 05:53:08.704571 coreos-metadata[2071]: Jul 07 05:53:08.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 7 05:53:08.706898 coreos-metadata[2071]: Jul 07 05:53:08.705 INFO Fetch successful Jul 7 05:53:08.706898 coreos-metadata[2071]: Jul 07 05:53:08.705 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 7 05:53:08.711947 coreos-metadata[2071]: Jul 07 05:53:08.711 INFO Fetch failed with 404: resource not found Jul 7 05:53:08.711947 coreos-metadata[2071]: Jul 07 05:53:08.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.713 INFO Fetch successful Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.714 INFO Fetch successful Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.716 INFO Fetch successful Jul 7 05:53:08.721585 coreos-metadata[2071]: Jul 07 05:53:08.716 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 7 05:53:08.723556 update_engine[2095]: I20250707 05:53:08.722862 2095 main.cc:92] Flatcar Update Engine starting Jul 7 05:53:08.735350 coreos-metadata[2071]: Jul 07 05:53:08.724 INFO Fetch successful Jul 7 05:53:08.735350 coreos-metadata[2071]: Jul 07 05:53:08.724 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 7 05:53:08.735350 coreos-metadata[2071]: Jul 07 05:53:08.726 INFO Fetch successful Jul 7 05:53:08.734077 dbus-daemon[2073]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 05:53:08.739808 tar[2118]: linux-arm64/helm Jul 7 05:53:08.741937 (ntainerd)[2124]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:53:08.753855 update_engine[2095]: I20250707 05:53:08.745802 2095 update_check_scheduler.cc:74] Next update check in 2m54s Jul 7 05:53:08.757267 systemd[1]: Started update-engine.service - Update Engine. Jul 7 05:53:08.783798 jq[2126]: true Jul 7 05:53:08.774343 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 05:53:08.774413 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:53:08.793332 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 05:53:08.799906 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:53:08.799970 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:53:08.811877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:53:08.827886 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:53:08.864414 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 7 05:53:08.860410 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 7 05:53:08.876343 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 7 05:53:08.908531 extend-filesystems[2121]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 7 05:53:08.908531 extend-filesystems[2121]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 05:53:08.908531 extend-filesystems[2121]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 7 05:53:08.918591 extend-filesystems[2075]: Resized filesystem in /dev/nvme0n1p9 Jul 7 05:53:08.931394 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:53:08.931962 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:53:09.001325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 05:53:09.007926 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 7 05:53:09.130093 bash[2189]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:53:09.136860 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:53:09.166404 systemd[1]: Starting sshkeys.service... Jul 7 05:53:09.212294 systemd-logind[2093]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:53:09.212351 systemd-logind[2093]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 7 05:53:09.215532 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2185) Jul 7 05:53:09.218191 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 05:53:09.239404 systemd-logind[2093]: New seat seat0. Jul 7 05:53:09.276687 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 05:53:09.281249 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:53:09.296297 locksmithd[2151]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: Initializing new seelog logger Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: New Seelog Logger Creation Complete Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 processing appconfig overrides Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 7 05:53:09.390525 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 processing appconfig overrides Jul 7 05:53:09.405569 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO Proxy environment variables: Jul 7 05:53:09.407974 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.407974 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.408967 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 processing appconfig overrides Jul 7 05:53:09.425407 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.425407 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 7 05:53:09.433544 amazon-ssm-agent[2157]: 2025/07/07 05:53:09 processing appconfig overrides Jul 7 05:53:09.521964 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO https_proxy: Jul 7 05:53:09.643622 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO http_proxy: Jul 7 05:53:09.726776 coreos-metadata[2213]: Jul 07 05:53:09.726 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 7 05:53:09.737097 coreos-metadata[2213]: Jul 07 05:53:09.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 7 05:53:09.739557 coreos-metadata[2213]: Jul 07 05:53:09.737 INFO Fetch successful Jul 7 05:53:09.739557 coreos-metadata[2213]: Jul 07 05:53:09.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 05:53:09.739557 coreos-metadata[2213]: Jul 07 05:53:09.738 INFO Fetch successful Jul 7 05:53:09.746461 unknown[2213]: wrote ssh authorized keys file for user: core Jul 7 05:53:09.748072 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO no_proxy: Jul 7 05:53:09.788753 dbus-daemon[2073]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 05:53:09.789159 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 05:53:09.797660 dbus-daemon[2073]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2150 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 05:53:09.823138 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 05:53:09.851058 update-ssh-keys[2292]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:53:09.858046 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO Checking if agent identity type OnPrem can be assumed Jul 7 05:53:09.881888 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 05:53:09.903481 systemd[1]: Finished sshkeys.service. Jul 7 05:53:09.937954 polkitd[2293]: Started polkitd version 121 Jul 7 05:53:09.959330 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO Checking if agent identity type EC2 can be assumed Jul 7 05:53:10.001095 polkitd[2293]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 05:53:10.001265 polkitd[2293]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 05:53:10.009653 polkitd[2293]: Finished loading, compiling and executing 2 rules Jul 7 05:53:10.016377 containerd[2124]: time="2025-07-07T05:53:10.014729782Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:53:10.018858 dbus-daemon[2073]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 05:53:10.019141 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 05:53:10.021603 polkitd[2293]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 05:53:10.056964 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO Agent will take identity from EC2 Jul 7 05:53:10.095397 systemd-hostnamed[2150]: Hostname set to (transient) Jul 7 05:53:10.096107 systemd-resolved[2019]: System hostname changed to 'ip-172-31-20-83'.
Jul 7 05:53:10.156391 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:10.162536 containerd[2124]: time="2025-07-07T05:53:10.158739107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.174815339Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.174907319Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.174951911Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.175329563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.175375751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.175552 containerd[2124]: time="2025-07-07T05:53:10.175549739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:10.175919 containerd[2124]: time="2025-07-07T05:53:10.175588091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.176096 containerd[2124]: time="2025-07-07T05:53:10.176020835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:10.176096 containerd[2124]: time="2025-07-07T05:53:10.176086103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.176227 containerd[2124]: time="2025-07-07T05:53:10.176124395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:10.176227 containerd[2124]: time="2025-07-07T05:53:10.176152439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.176431 containerd[2124]: time="2025-07-07T05:53:10.176372543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.181088 containerd[2124]: time="2025-07-07T05:53:10.180996719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:53:10.181451 containerd[2124]: time="2025-07-07T05:53:10.181392587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:53:10.181562 containerd[2124]: time="2025-07-07T05:53:10.181445159Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:53:10.181746 containerd[2124]: time="2025-07-07T05:53:10.181695707Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 05:53:10.181909 containerd[2124]: time="2025-07-07T05:53:10.181843259Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.196585355Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.196720103Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.196759751Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.196794275Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.196834631Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:53:10.197606 containerd[2124]: time="2025-07-07T05:53:10.197124731Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:53:10.198985 containerd[2124]: time="2025-07-07T05:53:10.198418211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.204823607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.204910139Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.204956819Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205001687Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205044299Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205087775Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205135307Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205179467Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205239719Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205282223Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205325867Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205383659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205429727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206027 containerd[2124]: time="2025-07-07T05:53:10.205473287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205555283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205594319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205638587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205692287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205737395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205780895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205839035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205883567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205917539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.205963703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.206022863Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.206086211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.206129687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.206747 containerd[2124]: time="2025-07-07T05:53:10.206167499Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:53:10.207331 containerd[2124]: time="2025-07-07T05:53:10.206421503Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:53:10.217721 containerd[2124]: time="2025-07-07T05:53:10.206477123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218638740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218737860Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218773548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218820696Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218849460Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:53:10.219170 containerd[2124]: time="2025-07-07T05:53:10.218879340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 05:53:10.219645 containerd[2124]: time="2025-07-07T05:53:10.219387324Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:53:10.219645 containerd[2124]: time="2025-07-07T05:53:10.219557460Z" level=info msg="Connect containerd service" Jul 7 05:53:10.220004 containerd[2124]: time="2025-07-07T05:53:10.219654048Z" level=info msg="using legacy CRI server" Jul 7 05:53:10.220004 containerd[2124]: time="2025-07-07T05:53:10.219676344Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:53:10.220004 containerd[2124]: time="2025-07-07T05:53:10.219893604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.227211240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228070080Z" level=info msg="Start subscribing containerd event" Jul 7 
05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228189828Z" level=info msg="Start recovering state" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228317316Z" level=info msg="Start event monitor" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228342624Z" level=info msg="Start snapshots syncer" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228367104Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228385524Z" level=info msg="Start streaming server" Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228105936Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228813060Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:53:10.235484 containerd[2124]: time="2025-07-07T05:53:10.228957828Z" level=info msg="containerd successfully booted in 0.217220s" Jul 7 05:53:10.229104 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:53:10.259088 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:10.355060 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 7 05:53:10.457016 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 7 05:53:10.558621 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 7 05:53:10.601863 sshd_keygen[2113]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:53:10.659609 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] Starting Core Agent Jul 7 05:53:10.702218 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:53:10.718307 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 7 05:53:10.763884 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:53:10.764486 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:53:10.771115 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 7 05:53:10.780181 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:53:10.828245 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 05:53:10.840065 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:53:10.860084 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 05:53:10.864134 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:53:10.868227 amazon-ssm-agent[2157]: 2025-07-07 05:53:09 INFO [Registrar] Starting registrar module Jul 7 05:53:10.903568 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 7 05:53:10.903568 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO [EC2Identity] EC2 registration was successful. Jul 7 05:53:10.903568 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO [CredentialRefresher] credentialRefresher has started Jul 7 05:53:10.903568 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO [CredentialRefresher] Starting credentials refresher loop Jul 7 05:53:10.903568 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 7 05:53:10.910542 tar[2118]: linux-arm64/LICENSE Jul 7 05:53:10.910542 tar[2118]: linux-arm64/README.md Jul 7 05:53:10.933361 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 7 05:53:10.967617 amazon-ssm-agent[2157]: 2025-07-07 05:53:10 INFO [CredentialRefresher] Next credential rotation will be in 31.616647586066666 minutes Jul 7 05:53:11.931678 amazon-ssm-agent[2157]: 2025-07-07 05:53:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 7 05:53:12.032809 amazon-ssm-agent[2157]: 2025-07-07 05:53:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2357) started Jul 7 05:53:12.133005 amazon-ssm-agent[2157]: 2025-07-07 05:53:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 7 05:53:13.354867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:13.360328 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:53:13.365370 systemd[1]: Startup finished in 9.754s (kernel) + 12.085s (userspace) = 21.839s. Jul 7 05:53:13.373346 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:14.756759 kubelet[2375]: E0707 05:53:14.756672 2375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:14.761415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:14.762344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:15.867403 systemd-resolved[2019]: Clock change detected. Flushing caches. Jul 7 05:53:18.563912 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 7 05:53:18.574184 systemd[1]: Started sshd@0-172.31.20.83:22-139.178.89.65:43900.service - OpenSSH per-connection server daemon (139.178.89.65:43900). Jul 7 05:53:18.756017 sshd[2387]: Accepted publickey for core from 139.178.89.65 port 43900 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:18.759674 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:18.776534 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:53:18.782242 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:53:18.787851 systemd-logind[2093]: New session 1 of user core. Jul 7 05:53:18.812622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:53:18.826298 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:53:18.844149 (systemd)[2393]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:53:19.070583 systemd[2393]: Queued start job for default target default.target. Jul 7 05:53:19.071302 systemd[2393]: Created slice app.slice - User Application Slice. Jul 7 05:53:19.071355 systemd[2393]: Reached target paths.target - Paths. Jul 7 05:53:19.071388 systemd[2393]: Reached target timers.target - Timers. Jul 7 05:53:19.082895 systemd[2393]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 05:53:19.096512 systemd[2393]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 05:53:19.096844 systemd[2393]: Reached target sockets.target - Sockets. Jul 7 05:53:19.097010 systemd[2393]: Reached target basic.target - Basic System. Jul 7 05:53:19.097469 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 05:53:19.098869 systemd[2393]: Reached target default.target - Main User Target. Jul 7 05:53:19.099101 systemd[2393]: Startup finished in 243ms. 
Jul 7 05:53:19.106405 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 05:53:19.258452 systemd[1]: Started sshd@1-172.31.20.83:22-139.178.89.65:43914.service - OpenSSH per-connection server daemon (139.178.89.65:43914). Jul 7 05:53:19.436268 sshd[2405]: Accepted publickey for core from 139.178.89.65 port 43914 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:19.438998 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:19.447696 systemd-logind[2093]: New session 2 of user core. Jul 7 05:53:19.457373 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 05:53:19.587070 sshd[2405]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:19.594080 systemd[1]: sshd@1-172.31.20.83:22-139.178.89.65:43914.service: Deactivated successfully. Jul 7 05:53:19.599029 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 05:53:19.600440 systemd-logind[2093]: Session 2 logged out. Waiting for processes to exit. Jul 7 05:53:19.602186 systemd-logind[2093]: Removed session 2. Jul 7 05:53:19.614250 systemd[1]: Started sshd@2-172.31.20.83:22-139.178.89.65:56278.service - OpenSSH per-connection server daemon (139.178.89.65:56278). Jul 7 05:53:19.791552 sshd[2413]: Accepted publickey for core from 139.178.89.65 port 56278 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:19.794015 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:19.801510 systemd-logind[2093]: New session 3 of user core. Jul 7 05:53:19.809279 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 05:53:19.928120 sshd[2413]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:19.935083 systemd-logind[2093]: Session 3 logged out. Waiting for processes to exit. Jul 7 05:53:19.936308 systemd[1]: sshd@2-172.31.20.83:22-139.178.89.65:56278.service: Deactivated successfully. 
Jul 7 05:53:19.941572 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 05:53:19.943669 systemd-logind[2093]: Removed session 3. Jul 7 05:53:19.962178 systemd[1]: Started sshd@3-172.31.20.83:22-139.178.89.65:56286.service - OpenSSH per-connection server daemon (139.178.89.65:56286). Jul 7 05:53:20.134039 sshd[2421]: Accepted publickey for core from 139.178.89.65 port 56286 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:20.136044 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:20.144469 systemd-logind[2093]: New session 4 of user core. Jul 7 05:53:20.151220 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 05:53:20.282037 sshd[2421]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:20.286830 systemd[1]: sshd@3-172.31.20.83:22-139.178.89.65:56286.service: Deactivated successfully. Jul 7 05:53:20.293611 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 05:53:20.295391 systemd-logind[2093]: Session 4 logged out. Waiting for processes to exit. Jul 7 05:53:20.297307 systemd-logind[2093]: Removed session 4. Jul 7 05:53:20.314169 systemd[1]: Started sshd@4-172.31.20.83:22-139.178.89.65:56292.service - OpenSSH per-connection server daemon (139.178.89.65:56292). Jul 7 05:53:20.486119 sshd[2429]: Accepted publickey for core from 139.178.89.65 port 56292 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:20.489215 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:20.497576 systemd-logind[2093]: New session 5 of user core. Jul 7 05:53:20.504335 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 05:53:20.625587 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 05:53:20.626272 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:20.641301 sudo[2433]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:20.665628 sshd[2429]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:20.673210 systemd[1]: sshd@4-172.31.20.83:22-139.178.89.65:56292.service: Deactivated successfully. Jul 7 05:53:20.678097 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 05:53:20.678548 systemd-logind[2093]: Session 5 logged out. Waiting for processes to exit. Jul 7 05:53:20.681931 systemd-logind[2093]: Removed session 5. Jul 7 05:53:20.696264 systemd[1]: Started sshd@5-172.31.20.83:22-139.178.89.65:56302.service - OpenSSH per-connection server daemon (139.178.89.65:56302). Jul 7 05:53:20.881565 sshd[2438]: Accepted publickey for core from 139.178.89.65 port 56302 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:20.883599 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:20.892195 systemd-logind[2093]: New session 6 of user core. Jul 7 05:53:20.904266 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 05:53:21.012189 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 05:53:21.012858 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:21.018890 sudo[2443]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:21.028634 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 05:53:21.029362 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:21.056438 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jul 7 05:53:21.059028 auditctl[2446]: No rules Jul 7 05:53:21.060104 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 05:53:21.060587 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:21.072714 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:53:21.121611 augenrules[2465]: No rules Jul 7 05:53:21.124971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:53:21.129528 sudo[2442]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:21.154056 sshd[2438]: pam_unix(sshd:session): session closed for user core Jul 7 05:53:21.161170 systemd-logind[2093]: Session 6 logged out. Waiting for processes to exit. Jul 7 05:53:21.162602 systemd[1]: sshd@5-172.31.20.83:22-139.178.89.65:56302.service: Deactivated successfully. Jul 7 05:53:21.167241 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 05:53:21.169225 systemd-logind[2093]: Removed session 6. Jul 7 05:53:21.193245 systemd[1]: Started sshd@6-172.31.20.83:22-139.178.89.65:56306.service - OpenSSH per-connection server daemon (139.178.89.65:56306). Jul 7 05:53:21.363789 sshd[2474]: Accepted publickey for core from 139.178.89.65 port 56306 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:53:21.366372 sshd[2474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:53:21.373935 systemd-logind[2093]: New session 7 of user core. Jul 7 05:53:21.384188 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 05:53:21.491546 sudo[2478]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 05:53:21.492245 sudo[2478]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 05:53:21.998248 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 7 05:53:21.998664 (dockerd)[2494]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 05:53:22.403444 dockerd[2494]: time="2025-07-07T05:53:22.403272744Z" level=info msg="Starting up" Jul 7 05:53:22.771038 dockerd[2494]: time="2025-07-07T05:53:22.770352662Z" level=info msg="Loading containers: start." Jul 7 05:53:22.936808 kernel: Initializing XFRM netlink socket Jul 7 05:53:22.971777 (udev-worker)[2516]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:53:23.065578 systemd-networkd[1690]: docker0: Link UP Jul 7 05:53:23.090117 dockerd[2494]: time="2025-07-07T05:53:23.089766695Z" level=info msg="Loading containers: done." Jul 7 05:53:23.116304 dockerd[2494]: time="2025-07-07T05:53:23.116138472Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 05:53:23.116535 dockerd[2494]: time="2025-07-07T05:53:23.116346660Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 05:53:23.116596 dockerd[2494]: time="2025-07-07T05:53:23.116548848Z" level=info msg="Daemon has completed initialization" Jul 7 05:53:23.179453 dockerd[2494]: time="2025-07-07T05:53:23.178017120Z" level=info msg="API listen on /run/docker.sock" Jul 7 05:53:23.179253 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 05:53:24.314416 containerd[2124]: time="2025-07-07T05:53:24.314329273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 05:53:24.912329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210915999.mount: Deactivated successfully. Jul 7 05:53:25.482476 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jul 7 05:53:25.491153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:25.874119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:25.893007 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:25.994700 kubelet[2698]: E0707 05:53:25.994596 2698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:26.006008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:26.007503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:53:26.554840 containerd[2124]: time="2025-07-07T05:53:26.553933205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:26.556561 containerd[2124]: time="2025-07-07T05:53:26.556494893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 7 05:53:26.559414 containerd[2124]: time="2025-07-07T05:53:26.559317737Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:26.565637 containerd[2124]: time="2025-07-07T05:53:26.565567457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:26.568458 containerd[2124]: time="2025-07-07T05:53:26.568065977Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.253656712s" Jul 7 05:53:26.568458 containerd[2124]: time="2025-07-07T05:53:26.568129793Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 7 05:53:26.571510 containerd[2124]: time="2025-07-07T05:53:26.571457765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 05:53:27.971985 containerd[2124]: time="2025-07-07T05:53:27.971868260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:27.974195 containerd[2124]: time="2025-07-07T05:53:27.974128652Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 7 05:53:27.975732 containerd[2124]: time="2025-07-07T05:53:27.974611532Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:27.981152 containerd[2124]: time="2025-07-07T05:53:27.981087008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:27.983832 containerd[2124]: time="2025-07-07T05:53:27.983715092Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.412185447s" Jul 7 05:53:27.983832 containerd[2124]: time="2025-07-07T05:53:27.983826632Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 7 05:53:27.984704 containerd[2124]: time="2025-07-07T05:53:27.984493280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 05:53:29.124431 containerd[2124]: time="2025-07-07T05:53:29.124111865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:29.126443 containerd[2124]: time="2025-07-07T05:53:29.126347201Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 7 05:53:29.127458 containerd[2124]: time="2025-07-07T05:53:29.126869477Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:29.133036 containerd[2124]: time="2025-07-07T05:53:29.132946493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:29.135719 containerd[2124]: time="2025-07-07T05:53:29.135486977Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size 
\"18660874\" in 1.150919669s" Jul 7 05:53:29.135719 containerd[2124]: time="2025-07-07T05:53:29.135562589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 7 05:53:29.136704 containerd[2124]: time="2025-07-07T05:53:29.136419149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 05:53:30.405537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202587228.mount: Deactivated successfully. Jul 7 05:53:30.954970 containerd[2124]: time="2025-07-07T05:53:30.954875650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.957051 containerd[2124]: time="2025-07-07T05:53:30.956964718Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 7 05:53:30.959691 containerd[2124]: time="2025-07-07T05:53:30.959595634Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.964916 containerd[2124]: time="2025-07-07T05:53:30.964814410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:30.966692 containerd[2124]: time="2025-07-07T05:53:30.966393526Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.829916297s" Jul 7 05:53:30.966692 containerd[2124]: 
time="2025-07-07T05:53:30.966470950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 05:53:30.967813 containerd[2124]: time="2025-07-07T05:53:30.967238171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 05:53:31.506341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981734369.mount: Deactivated successfully. Jul 7 05:53:32.908840 containerd[2124]: time="2025-07-07T05:53:32.908267448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:32.910997 containerd[2124]: time="2025-07-07T05:53:32.910916280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 7 05:53:32.913836 containerd[2124]: time="2025-07-07T05:53:32.913736052Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:32.924610 containerd[2124]: time="2025-07-07T05:53:32.924515208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:32.927733 containerd[2124]: time="2025-07-07T05:53:32.927452040Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.960139445s" Jul 7 05:53:32.927733 containerd[2124]: time="2025-07-07T05:53:32.927536148Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 05:53:32.929250 containerd[2124]: time="2025-07-07T05:53:32.928928928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 05:53:33.453365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount739028535.mount: Deactivated successfully. Jul 7 05:53:33.466319 containerd[2124]: time="2025-07-07T05:53:33.466237895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:33.468245 containerd[2124]: time="2025-07-07T05:53:33.468187835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 7 05:53:33.470829 containerd[2124]: time="2025-07-07T05:53:33.470726315Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:33.476117 containerd[2124]: time="2025-07-07T05:53:33.476024207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:33.478025 containerd[2124]: time="2025-07-07T05:53:33.477775811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 548.786019ms" Jul 7 05:53:33.478025 containerd[2124]: time="2025-07-07T05:53:33.477843287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 05:53:33.478868 
containerd[2124]: time="2025-07-07T05:53:33.478425035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 05:53:34.087295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337072164.mount: Deactivated successfully. Jul 7 05:53:36.013152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 05:53:36.024670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:36.582043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:36.601798 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:53:36.717739 kubelet[2842]: E0707 05:53:36.717401 2842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:53:36.723466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:53:36.729099 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 7 05:53:37.554716 containerd[2124]: time="2025-07-07T05:53:37.553823739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:37.556471 containerd[2124]: time="2025-07-07T05:53:37.556377591Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 7 05:53:37.558559 containerd[2124]: time="2025-07-07T05:53:37.558444387Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:37.565677 containerd[2124]: time="2025-07-07T05:53:37.565611003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:53:37.568970 containerd[2124]: time="2025-07-07T05:53:37.568684095Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.090207556s" Jul 7 05:53:37.568970 containerd[2124]: time="2025-07-07T05:53:37.568794771Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 7 05:53:40.580621 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 05:53:45.169462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:45.176213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:45.240117 systemd[1]: Reloading requested from client PID 2886 ('systemctl') (unit session-7.scope)... 
Jul 7 05:53:45.240363 systemd[1]: Reloading... Jul 7 05:53:45.456801 zram_generator::config[2929]: No configuration found. Jul 7 05:53:45.717511 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:45.885526 systemd[1]: Reloading finished in 644 ms. Jul 7 05:53:45.969295 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 05:53:45.969680 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 05:53:45.970554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:45.979474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:46.329110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:46.340820 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:46.419559 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:46.419559 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:53:46.419559 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 05:53:46.420182 kubelet[2999]: I0707 05:53:46.419719 2999 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:47.710611 kubelet[2999]: I0707 05:53:47.710502 2999 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:47.710611 kubelet[2999]: I0707 05:53:47.710550 2999 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:47.711331 kubelet[2999]: I0707 05:53:47.711031 2999 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:47.764936 kubelet[2999]: E0707 05:53:47.764335 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:47.766246 kubelet[2999]: I0707 05:53:47.766185 2999 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:47.778605 kubelet[2999]: E0707 05:53:47.778534 2999 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:47.778605 kubelet[2999]: I0707 05:53:47.778592 2999 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:47.785850 kubelet[2999]: I0707 05:53:47.785792 2999 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:53:47.786973 kubelet[2999]: I0707 05:53:47.786921 2999 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:47.787289 kubelet[2999]: I0707 05:53:47.787224 2999 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:47.787607 kubelet[2999]: I0707 05:53:47.787283 2999 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":1} Jul 7 05:53:47.787805 kubelet[2999]: I0707 05:53:47.787762 2999 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:47.787805 kubelet[2999]: I0707 05:53:47.787784 2999 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:47.788141 kubelet[2999]: I0707 05:53:47.788097 2999 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:47.793720 kubelet[2999]: I0707 05:53:47.793657 2999 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:53:47.793720 kubelet[2999]: I0707 05:53:47.793711 2999 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:47.793911 kubelet[2999]: I0707 05:53:47.793828 2999 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:47.794029 kubelet[2999]: I0707 05:53:47.793989 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:47.798194 kubelet[2999]: W0707 05:53:47.798091 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:47.798194 kubelet[2999]: E0707 05:53:47.798192 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:47.802200 kubelet[2999]: W0707 05:53:47.802136 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 
05:53:47.802544 kubelet[2999]: E0707 05:53:47.802394 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:47.804783 kubelet[2999]: I0707 05:53:47.803397 2999 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:47.804783 kubelet[2999]: I0707 05:53:47.804706 2999 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:53:47.805323 kubelet[2999]: W0707 05:53:47.805300 2999 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 05:53:47.807919 kubelet[2999]: I0707 05:53:47.807881 2999 server.go:1274] "Started kubelet" Jul 7 05:53:47.810580 kubelet[2999]: I0707 05:53:47.810509 2999 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:47.818453 kubelet[2999]: I0707 05:53:47.818354 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:47.819783 kubelet[2999]: I0707 05:53:47.819730 2999 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:47.820940 kubelet[2999]: I0707 05:53:47.819821 2999 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:47.825522 kubelet[2999]: E0707 05:53:47.823312 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.83:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-83.184fe25360fe2c0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-83,UID:ip-172-31-20-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-83,},FirstTimestamp:2025-07-07 05:53:47.80784539 +0000 UTC m=+1.460352596,LastTimestamp:2025-07-07 05:53:47.80784539 +0000 UTC m=+1.460352596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-83,}" Jul 7 05:53:47.830177 kubelet[2999]: I0707 05:53:47.830116 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:47.833024 kubelet[2999]: I0707 05:53:47.832968 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:47.834553 kubelet[2999]: I0707 05:53:47.834525 2999 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:47.836601 kubelet[2999]: I0707 05:53:47.834939 2999 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:47.836936 kubelet[2999]: I0707 05:53:47.836915 2999 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:47.837091 kubelet[2999]: E0707 05:53:47.835280 2999 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found" Jul 7 05:53:47.840156 kubelet[2999]: E0707 05:53:47.840070 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="200ms" Jul 7 05:53:47.840784 kubelet[2999]: W0707 05:53:47.840536 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:47.840784 kubelet[2999]: E0707 05:53:47.840633 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:47.841135 kubelet[2999]: I0707 05:53:47.841085 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:47.842638 kubelet[2999]: E0707 05:53:47.842605 2999 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:47.846095 kubelet[2999]: I0707 05:53:47.845998 2999 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:47.846095 kubelet[2999]: I0707 05:53:47.846036 2999 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:47.889028 kubelet[2999]: I0707 05:53:47.888966 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:53:47.892313 kubelet[2999]: I0707 05:53:47.891419 2999 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:53:47.892313 kubelet[2999]: I0707 05:53:47.891467 2999 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:47.892313 kubelet[2999]: I0707 05:53:47.891500 2999 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:47.892313 kubelet[2999]: E0707 05:53:47.891578 2999 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:47.900572 kubelet[2999]: W0707 05:53:47.900525 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:47.900834 kubelet[2999]: E0707 05:53:47.900799 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:47.902667 kubelet[2999]: I0707 05:53:47.902633 2999 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:47.902931 kubelet[2999]: I0707 05:53:47.902908 2999 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:47.903047 kubelet[2999]: I0707 05:53:47.903029 2999 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:47.909097 kubelet[2999]: I0707 05:53:47.909063 2999 policy_none.go:49] "None policy: Start" Jul 7 05:53:47.910261 kubelet[2999]: I0707 05:53:47.910235 2999 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:47.910599 kubelet[2999]: I0707 05:53:47.910512 2999 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:47.922460 kubelet[2999]: I0707 05:53:47.922403 2999 manager.go:513] "Failed 
to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:47.923045 kubelet[2999]: I0707 05:53:47.923020 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:47.923329 kubelet[2999]: I0707 05:53:47.923167 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:47.927334 kubelet[2999]: I0707 05:53:47.927304 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:47.929605 kubelet[2999]: E0707 05:53:47.929560 2999 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-83\" not found" Jul 7 05:53:48.027590 kubelet[2999]: I0707 05:53:48.027553 2999 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:48.028673 kubelet[2999]: E0707 05:53:48.028627 2999 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Jul 7 05:53:48.038419 kubelet[2999]: I0707 05:53:48.038377 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed35bcd68e168260660fdd487e11baea-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-83\" (UID: \"ed35bcd68e168260660fdd487e11baea\") " pod="kube-system/kube-scheduler-ip-172-31-20-83" Jul 7 05:53:48.038535 kubelet[2999]: I0707 05:53:48.038437 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-ca-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:48.038535 kubelet[2999]: I0707 05:53:48.038484 2999 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:48.038535 kubelet[2999]: I0707 05:53:48.038522 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:48.038720 kubelet[2999]: I0707 05:53:48.038556 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:48.038720 kubelet[2999]: I0707 05:53:48.038588 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:48.038720 kubelet[2999]: I0707 05:53:48.038623 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:48.038720 kubelet[2999]: I0707 05:53:48.038658 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:48.038720 kubelet[2999]: I0707 05:53:48.038692 2999 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:48.041517 kubelet[2999]: E0707 05:53:48.041456 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="400ms" Jul 7 05:53:48.230895 kubelet[2999]: I0707 05:53:48.230816 2999 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:48.231333 kubelet[2999]: E0707 05:53:48.231295 2999 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Jul 7 05:53:48.310860 containerd[2124]: time="2025-07-07T05:53:48.310671913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-83,Uid:ea55810a82e04196477a559f323b1a6e,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:48.312876 containerd[2124]: time="2025-07-07T05:53:48.312565849Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-83,Uid:ccddc88213e7686bac005d8a1ce20169,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:48.319653 containerd[2124]: time="2025-07-07T05:53:48.319568665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-83,Uid:ed35bcd68e168260660fdd487e11baea,Namespace:kube-system,Attempt:0,}" Jul 7 05:53:48.441991 kubelet[2999]: E0707 05:53:48.441915 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="800ms" Jul 7 05:53:48.633628 kubelet[2999]: I0707 05:53:48.633403 2999 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:48.634059 kubelet[2999]: E0707 05:53:48.634030 2999 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Jul 7 05:53:48.829313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049078630.mount: Deactivated successfully. 
Jul 7 05:53:48.841964 containerd[2124]: time="2025-07-07T05:53:48.841881519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:48.844276 containerd[2124]: time="2025-07-07T05:53:48.844203987Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:48.846273 containerd[2124]: time="2025-07-07T05:53:48.846171951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 7 05:53:48.848189 containerd[2124]: time="2025-07-07T05:53:48.848134899Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:48.850291 containerd[2124]: time="2025-07-07T05:53:48.850231443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:48.853448 containerd[2124]: time="2025-07-07T05:53:48.853259751Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:48.855088 containerd[2124]: time="2025-07-07T05:53:48.854982015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:53:48.859599 containerd[2124]: time="2025-07-07T05:53:48.859515195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:53:48.864246 
containerd[2124]: time="2025-07-07T05:53:48.863844051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.163074ms" Jul 7 05:53:48.868212 containerd[2124]: time="2025-07-07T05:53:48.868134435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.315378ms" Jul 7 05:53:48.874974 containerd[2124]: time="2025-07-07T05:53:48.874696203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.812946ms" Jul 7 05:53:48.944158 kubelet[2999]: W0707 05:53:48.941059 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:48.944158 kubelet[2999]: E0707 05:53:48.941157 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:49.039643 kubelet[2999]: W0707 05:53:49.039564 2999 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:49.039895 kubelet[2999]: E0707 05:53:49.039861 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-83&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:49.081130 containerd[2124]: time="2025-07-07T05:53:49.080606364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:49.081130 containerd[2124]: time="2025-07-07T05:53:49.080708580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:49.081130 containerd[2124]: time="2025-07-07T05:53:49.080802552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.082721 containerd[2124]: time="2025-07-07T05:53:49.082548540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.087123 containerd[2124]: time="2025-07-07T05:53:49.086116872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:49.088946 containerd[2124]: time="2025-07-07T05:53:49.087618877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:49.088946 containerd[2124]: time="2025-07-07T05:53:49.088066657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.092795 containerd[2124]: time="2025-07-07T05:53:49.092130925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.093951 containerd[2124]: time="2025-07-07T05:53:49.092553085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:53:49.093951 containerd[2124]: time="2025-07-07T05:53:49.092638945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:53:49.093951 containerd[2124]: time="2025-07-07T05:53:49.092666197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.094399 containerd[2124]: time="2025-07-07T05:53:49.094290229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:53:49.199715 kubelet[2999]: W0707 05:53:49.198962 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:49.200244 kubelet[2999]: E0707 05:53:49.200090 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:49.241649 containerd[2124]: time="2025-07-07T05:53:49.241542205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-83,Uid:ccddc88213e7686bac005d8a1ce20169,Namespace:kube-system,Attempt:0,} returns sandbox id \"960e97298d04527447597c77f577f7c49faf247a30df07d3c6a548411f6c115d\"" Jul 7 05:53:49.242713 kubelet[2999]: E0707 05:53:49.242611 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": dial tcp 172.31.20.83:6443: connect: connection refused" interval="1.6s" Jul 7 05:53:49.252735 containerd[2124]: time="2025-07-07T05:53:49.252667921Z" level=info msg="CreateContainer within sandbox \"960e97298d04527447597c77f577f7c49faf247a30df07d3c6a548411f6c115d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:53:49.257242 containerd[2124]: time="2025-07-07T05:53:49.257194201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-83,Uid:ed35bcd68e168260660fdd487e11baea,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"73b1bb6fca6811ae9ea7d13c25465fc9af4f5d8a043c69b3eeb936add7691d04\"" Jul 7 05:53:49.261232 containerd[2124]: time="2025-07-07T05:53:49.260973889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-83,Uid:ea55810a82e04196477a559f323b1a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e50e346597a483635a7f53bf2fec1ea280adc41fdccb6e867fee2b49fbefb2e5\"" Jul 7 05:53:49.269442 containerd[2124]: time="2025-07-07T05:53:49.269000989Z" level=info msg="CreateContainer within sandbox \"e50e346597a483635a7f53bf2fec1ea280adc41fdccb6e867fee2b49fbefb2e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:53:49.269724 containerd[2124]: time="2025-07-07T05:53:49.269127517Z" level=info msg="CreateContainer within sandbox \"73b1bb6fca6811ae9ea7d13c25465fc9af4f5d8a043c69b3eeb936add7691d04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:53:49.297739 containerd[2124]: time="2025-07-07T05:53:49.297669062Z" level=info msg="CreateContainer within sandbox \"960e97298d04527447597c77f577f7c49faf247a30df07d3c6a548411f6c115d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a\"" Jul 7 05:53:49.299547 containerd[2124]: time="2025-07-07T05:53:49.299321906Z" level=info msg="StartContainer for \"14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a\"" Jul 7 05:53:49.317314 containerd[2124]: time="2025-07-07T05:53:49.317092346Z" level=info msg="CreateContainer within sandbox \"73b1bb6fca6811ae9ea7d13c25465fc9af4f5d8a043c69b3eeb936add7691d04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf\"" Jul 7 05:53:49.318770 containerd[2124]: time="2025-07-07T05:53:49.318704330Z" level=info msg="StartContainer for \"4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf\"" Jul 7 05:53:49.329163 
containerd[2124]: time="2025-07-07T05:53:49.329054438Z" level=info msg="CreateContainer within sandbox \"e50e346597a483635a7f53bf2fec1ea280adc41fdccb6e867fee2b49fbefb2e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f1db92cb583157c58a1eddd2aa425534136b41c81b25f5f78dbb877eb624227\"" Jul 7 05:53:49.330875 containerd[2124]: time="2025-07-07T05:53:49.330760574Z" level=info msg="StartContainer for \"3f1db92cb583157c58a1eddd2aa425534136b41c81b25f5f78dbb877eb624227\"" Jul 7 05:53:49.409661 kubelet[2999]: W0707 05:53:49.407636 2999 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.83:6443: connect: connection refused Jul 7 05:53:49.409661 kubelet[2999]: E0707 05:53:49.407761 2999 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.83:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:53:49.443772 kubelet[2999]: I0707 05:53:49.441698 2999 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:49.443772 kubelet[2999]: E0707 05:53:49.442210 2999 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.83:6443/api/v1/nodes\": dial tcp 172.31.20.83:6443: connect: connection refused" node="ip-172-31-20-83" Jul 7 05:53:49.514639 containerd[2124]: time="2025-07-07T05:53:49.514526643Z" level=info msg="StartContainer for \"14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a\" returns successfully" Jul 7 05:53:49.537351 containerd[2124]: time="2025-07-07T05:53:49.537183363Z" level=info msg="StartContainer for 
\"4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf\" returns successfully" Jul 7 05:53:49.554237 containerd[2124]: time="2025-07-07T05:53:49.554153547Z" level=info msg="StartContainer for \"3f1db92cb583157c58a1eddd2aa425534136b41c81b25f5f78dbb877eb624227\" returns successfully" Jul 7 05:53:51.047398 kubelet[2999]: I0707 05:53:51.047325 2999 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:52.991116 kubelet[2999]: E0707 05:53:52.991043 2999 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-83\" not found" node="ip-172-31-20-83" Jul 7 05:53:53.086448 kubelet[2999]: I0707 05:53:53.086169 2999 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-83" Jul 7 05:53:53.086448 kubelet[2999]: E0707 05:53:53.086226 2999 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-83\": node \"ip-172-31-20-83\" not found" Jul 7 05:53:53.815822 kubelet[2999]: I0707 05:53:53.813540 2999 apiserver.go:52] "Watching apiserver" Jul 7 05:53:53.837584 kubelet[2999]: I0707 05:53:53.837360 2999 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:54.009917 update_engine[2095]: I20250707 05:53:54.009794 2095 update_attempter.cc:509] Updating boot flags... Jul 7 05:53:54.216471 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3288) Jul 7 05:53:55.003822 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3279) Jul 7 05:53:55.351809 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3279) Jul 7 05:53:55.834270 systemd[1]: Reloading requested from client PID 3543 ('systemctl') (unit session-7.scope)... Jul 7 05:53:55.834302 systemd[1]: Reloading... 
Jul 7 05:53:56.005788 zram_generator::config[3586]: No configuration found. Jul 7 05:53:56.408033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:53:56.601096 systemd[1]: Reloading finished in 766 ms. Jul 7 05:53:56.674458 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:56.693142 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:53:56.694033 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:56.706619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:53:57.080077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:53:57.095683 (kubelet)[3653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:53:57.234655 kubelet[3653]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:53:57.235928 kubelet[3653]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:53:57.236179 kubelet[3653]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 05:53:57.236569 kubelet[3653]: I0707 05:53:57.236466 3653 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:53:57.252511 kubelet[3653]: I0707 05:53:57.252449 3653 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:53:57.252766 kubelet[3653]: I0707 05:53:57.252730 3653 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:53:57.253415 kubelet[3653]: I0707 05:53:57.253374 3653 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:53:57.256911 kubelet[3653]: I0707 05:53:57.256869 3653 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 05:53:57.262570 kubelet[3653]: I0707 05:53:57.262481 3653 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:53:57.270278 kubelet[3653]: E0707 05:53:57.270173 3653 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:53:57.270278 kubelet[3653]: I0707 05:53:57.270234 3653 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:53:57.275823 kubelet[3653]: I0707 05:53:57.275721 3653 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 05:53:57.278368 kubelet[3653]: I0707 05:53:57.276480 3653 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:53:57.278368 kubelet[3653]: I0707 05:53:57.276797 3653 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:53:57.278368 kubelet[3653]: I0707 05:53:57.276860 3653 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":1} Jul 7 05:53:57.278368 kubelet[3653]: I0707 05:53:57.277424 3653 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277456 3653 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277538 3653 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277770 3653 kubelet.go:408] "Attempting to sync node with API server" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277801 3653 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277836 3653 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:53:57.278842 kubelet[3653]: I0707 05:53:57.277866 3653 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:53:57.293568 kubelet[3653]: I0707 05:53:57.290355 3653 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:53:57.294089 kubelet[3653]: I0707 05:53:57.294045 3653 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:53:57.295361 kubelet[3653]: I0707 05:53:57.295297 3653 server.go:1274] "Started kubelet" Jul 7 05:53:57.303504 kubelet[3653]: I0707 05:53:57.303433 3653 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:53:57.303531 sudo[3668]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 05:53:57.305426 sudo[3668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 05:53:57.313072 kubelet[3653]: I0707 05:53:57.312659 3653 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:53:57.313484 kubelet[3653]: I0707 05:53:57.313446 3653 server.go:236] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:53:57.314888 kubelet[3653]: I0707 05:53:57.314826 3653 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:53:57.324583 kubelet[3653]: I0707 05:53:57.324540 3653 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:53:57.325815 kubelet[3653]: E0707 05:53:57.325176 3653 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-83\" not found" Jul 7 05:53:57.328999 kubelet[3653]: I0707 05:53:57.328963 3653 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:53:57.329423 kubelet[3653]: I0707 05:53:57.329399 3653 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:53:57.330047 kubelet[3653]: I0707 05:53:57.329983 3653 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:53:57.335040 kubelet[3653]: I0707 05:53:57.334889 3653 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:53:57.389380 kubelet[3653]: I0707 05:53:57.389341 3653 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:53:57.390659 kubelet[3653]: I0707 05:53:57.389938 3653 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:53:57.407442 kubelet[3653]: I0707 05:53:57.407363 3653 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:53:57.410131 kubelet[3653]: E0707 05:53:57.410061 3653 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:53:57.442951 kubelet[3653]: I0707 05:53:57.442876 3653 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 7 05:53:57.453025 kubelet[3653]: I0707 05:53:57.451082 3653 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 05:53:57.453341 kubelet[3653]: I0707 05:53:57.453313 3653 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:53:57.460095 kubelet[3653]: I0707 05:53:57.460060 3653 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:53:57.460565 kubelet[3653]: E0707 05:53:57.460414 3653 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:53:57.563315 kubelet[3653]: E0707 05:53:57.563239 3653 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:53:57.636918 kubelet[3653]: I0707 05:53:57.636687 3653 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:53:57.636918 kubelet[3653]: I0707 05:53:57.636727 3653 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:53:57.636918 kubelet[3653]: I0707 05:53:57.636824 3653 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:53:57.637797 kubelet[3653]: I0707 05:53:57.637547 3653 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 05:53:57.637797 kubelet[3653]: I0707 05:53:57.637624 3653 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 05:53:57.637797 kubelet[3653]: I0707 05:53:57.637699 3653 policy_none.go:49] "None policy: Start" Jul 7 05:53:57.641289 kubelet[3653]: I0707 05:53:57.641239 3653 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:53:57.641289 kubelet[3653]: I0707 05:53:57.641291 3653 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:53:57.641867 kubelet[3653]: I0707 05:53:57.641827 3653 state_mem.go:75] "Updated machine memory state" Jul 7 05:53:57.651430 kubelet[3653]: I0707 05:53:57.648931 3653 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:53:57.651430 kubelet[3653]: I0707 05:53:57.649240 3653 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:53:57.651430 kubelet[3653]: I0707 05:53:57.649262 3653 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:53:57.654927 kubelet[3653]: I0707 05:53:57.654376 3653 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:53:57.776195 kubelet[3653]: I0707 05:53:57.776155 3653 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-83" Jul 7 05:53:57.784801 kubelet[3653]: E0707 05:53:57.781077 3653 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-83\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:57.784801 kubelet[3653]: E0707 05:53:57.781244 3653 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-20-83\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.801006 kubelet[3653]: I0707 05:53:57.800942 3653 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-20-83" Jul 7 05:53:57.801164 kubelet[3653]: I0707 05:53:57.801078 3653 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-83" Jul 7 05:53:57.833322 kubelet[3653]: I0707 05:53:57.833261 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.833486 kubelet[3653]: I0707 05:53:57.833330 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-ca-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:57.833486 kubelet[3653]: I0707 05:53:57.833375 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.833486 kubelet[3653]: I0707 05:53:57.833410 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.833486 kubelet[3653]: I0707 05:53:57.833451 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.833701 kubelet[3653]: I0707 05:53:57.833492 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccddc88213e7686bac005d8a1ce20169-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-83\" (UID: \"ccddc88213e7686bac005d8a1ce20169\") " pod="kube-system/kube-controller-manager-ip-172-31-20-83" Jul 7 05:53:57.833701 kubelet[3653]: I0707 05:53:57.833528 3653 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed35bcd68e168260660fdd487e11baea-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-83\" (UID: \"ed35bcd68e168260660fdd487e11baea\") " pod="kube-system/kube-scheduler-ip-172-31-20-83" Jul 7 05:53:57.833701 kubelet[3653]: I0707 05:53:57.833561 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:57.833701 kubelet[3653]: I0707 05:53:57.833597 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea55810a82e04196477a559f323b1a6e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-83\" (UID: \"ea55810a82e04196477a559f323b1a6e\") " pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:58.314197 kubelet[3653]: I0707 05:53:58.314139 3653 apiserver.go:52] "Watching apiserver" Jul 7 05:53:58.330411 kubelet[3653]: I0707 05:53:58.330331 3653 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:53:58.332635 sudo[3668]: pam_unix(sudo:session): session closed for user root Jul 7 05:53:58.560440 kubelet[3653]: E0707 05:53:58.558587 3653 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-83\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-83" Jul 7 05:53:58.610825 kubelet[3653]: I0707 05:53:58.610223 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-83" podStartSLOduration=1.6101994720000001 podStartE2EDuration="1.610199472s" podCreationTimestamp="2025-07-07 05:53:57 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:58.609878904 +0000 UTC m=+1.500686901" watchObservedRunningTime="2025-07-07 05:53:58.610199472 +0000 UTC m=+1.501007337" Jul 7 05:53:58.610825 kubelet[3653]: I0707 05:53:58.610410 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-83" podStartSLOduration=2.6103982759999997 podStartE2EDuration="2.610398276s" podCreationTimestamp="2025-07-07 05:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:58.594900888 +0000 UTC m=+1.485708753" watchObservedRunningTime="2025-07-07 05:53:58.610398276 +0000 UTC m=+1.501206117" Jul 7 05:53:58.651165 kubelet[3653]: I0707 05:53:58.650953 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-83" podStartSLOduration=3.650928804 podStartE2EDuration="3.650928804s" podCreationTimestamp="2025-07-07 05:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:53:58.630011832 +0000 UTC m=+1.520819697" watchObservedRunningTime="2025-07-07 05:53:58.650928804 +0000 UTC m=+1.541736669" Jul 7 05:54:00.667786 kubelet[3653]: I0707 05:54:00.665853 3653 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:54:00.669061 containerd[2124]: time="2025-07-07T05:54:00.668712638Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 05:54:00.670169 kubelet[3653]: I0707 05:54:00.669445 3653 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 05:54:01.164887 kubelet[3653]: I0707 05:54:01.162935 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/badc0f45-9086-46a6-953a-bad76ef54ea1-xtables-lock\") pod \"kube-proxy-tn46d\" (UID: \"badc0f45-9086-46a6-953a-bad76ef54ea1\") " pod="kube-system/kube-proxy-tn46d" Jul 7 05:54:01.164887 kubelet[3653]: I0707 05:54:01.163030 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/badc0f45-9086-46a6-953a-bad76ef54ea1-lib-modules\") pod \"kube-proxy-tn46d\" (UID: \"badc0f45-9086-46a6-953a-bad76ef54ea1\") " pod="kube-system/kube-proxy-tn46d" Jul 7 05:54:01.164887 kubelet[3653]: I0707 05:54:01.163077 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/badc0f45-9086-46a6-953a-bad76ef54ea1-kube-proxy\") pod \"kube-proxy-tn46d\" (UID: \"badc0f45-9086-46a6-953a-bad76ef54ea1\") " pod="kube-system/kube-proxy-tn46d" Jul 7 05:54:01.164887 kubelet[3653]: I0707 05:54:01.163123 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4dnf\" (UniqueName: \"kubernetes.io/projected/badc0f45-9086-46a6-953a-bad76ef54ea1-kube-api-access-k4dnf\") pod \"kube-proxy-tn46d\" (UID: \"badc0f45-9086-46a6-953a-bad76ef54ea1\") " pod="kube-system/kube-proxy-tn46d" Jul 7 05:54:01.268698 kubelet[3653]: I0707 05:54:01.267647 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cni-path\") pod \"cilium-r49z8\" (UID: 
\"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.268698 kubelet[3653]: I0707 05:54:01.267830 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-bpf-maps\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.268698 kubelet[3653]: I0707 05:54:01.268148 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-cgroup\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.268698 kubelet[3653]: I0707 05:54:01.268205 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-etc-cni-netd\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.269417 kubelet[3653]: I0707 05:54:01.269096 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-net\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.269417 kubelet[3653]: I0707 05:54:01.269202 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-kernel\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.269417 kubelet[3653]: I0707 
05:54:01.269257 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-run\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.269417 kubelet[3653]: I0707 05:54:01.269308 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-config-path\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.270726 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hostproc\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.270874 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4646b\" (UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.270970 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-lib-modules\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.271027 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hubble-tls\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.271078 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-xtables-lock\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.271790 kubelet[3653]: I0707 05:54:01.271125 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-clustermesh-secrets\") pod \"cilium-r49z8\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " pod="kube-system/cilium-r49z8" Jul 7 05:54:01.307785 kubelet[3653]: E0707 05:54:01.306928 3653 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 05:54:01.309912 kubelet[3653]: E0707 05:54:01.309846 3653 projected.go:194] Error preparing data for projected volume kube-api-access-k4dnf for pod kube-system/kube-proxy-tn46d: configmap "kube-root-ca.crt" not found Jul 7 05:54:01.310096 kubelet[3653]: E0707 05:54:01.309999 3653 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/badc0f45-9086-46a6-953a-bad76ef54ea1-kube-api-access-k4dnf podName:badc0f45-9086-46a6-953a-bad76ef54ea1 nodeName:}" failed. No retries permitted until 2025-07-07 05:54:01.809961621 +0000 UTC m=+4.700769474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k4dnf" (UniqueName: "kubernetes.io/projected/badc0f45-9086-46a6-953a-bad76ef54ea1-kube-api-access-k4dnf") pod "kube-proxy-tn46d" (UID: "badc0f45-9086-46a6-953a-bad76ef54ea1") : configmap "kube-root-ca.crt" not found Jul 7 05:54:01.416526 kubelet[3653]: E0707 05:54:01.416406 3653 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 05:54:01.417369 kubelet[3653]: E0707 05:54:01.417235 3653 projected.go:194] Error preparing data for projected volume kube-api-access-4646b for pod kube-system/cilium-r49z8: configmap "kube-root-ca.crt" not found Jul 7 05:54:01.417692 kubelet[3653]: E0707 05:54:01.417478 3653 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b podName:178ac068-0fd7-4c52-ab31-776ba0fc0ea0 nodeName:}" failed. No retries permitted until 2025-07-07 05:54:01.917326882 +0000 UTC m=+4.808134735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4646b" (UniqueName: "kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b") pod "cilium-r49z8" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0") : configmap "kube-root-ca.crt" not found Jul 7 05:54:01.693165 sudo[2478]: pam_unix(sudo:session): session closed for user root Jul 7 05:54:01.723267 sshd[2474]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:01.746263 systemd[1]: sshd@6-172.31.20.83:22-139.178.89.65:56306.service: Deactivated successfully. Jul 7 05:54:01.749876 systemd-logind[2093]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:54:01.770450 systemd[1]: session-7.scope: Deactivated successfully. 
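The two MountVolume.SetUp failures above are retried with exponential backoff: the log shows "No retries permitted until … (durationBeforeRetry 500ms)", and the delay doubles on each subsequent failure. A minimal sketch of that schedule, assuming a 2x growth factor and an illustrative cap of 128s (the kubelet's exact factor and cap are assumptions here, not taken from this log):

```python
# Sketch of the kubelet-style retry backoff seen in the log above:
# first retry after 500 ms, doubling per failure up to an assumed cap.
def backoff_schedule(initial_ms=500, factor=2, cap_ms=128_000, failures=10):
    delays, delay = [], initial_ms
    for _ in range(failures):
        delays.append(delay)
        delay = min(delay * factor, cap_ms)  # clamp growth at the cap
    return delays

print(backoff_schedule(failures=5))  # → [500, 1000, 2000, 4000, 8000]
```

The 500ms first delay matches the "durationBeforeRetry 500ms" in both failed-mount entries; both mounts succeed on retry once kube-root-ca.crt exists.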
Jul 7 05:54:01.775057 kubelet[3653]: I0707 05:54:01.774978 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b37be48-e38c-44a6-8f55-86ba4c7ac492-cilium-config-path\") pod \"cilium-operator-5d85765b45-dzxsj\" (UID: \"9b37be48-e38c-44a6-8f55-86ba4c7ac492\") " pod="kube-system/cilium-operator-5d85765b45-dzxsj" Jul 7 05:54:01.775699 kubelet[3653]: I0707 05:54:01.775096 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8kl\" (UniqueName: \"kubernetes.io/projected/9b37be48-e38c-44a6-8f55-86ba4c7ac492-kube-api-access-qr8kl\") pod \"cilium-operator-5d85765b45-dzxsj\" (UID: \"9b37be48-e38c-44a6-8f55-86ba4c7ac492\") " pod="kube-system/cilium-operator-5d85765b45-dzxsj" Jul 7 05:54:01.784559 systemd-logind[2093]: Removed session 7. Jul 7 05:54:02.017075 containerd[2124]: time="2025-07-07T05:54:02.017002285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dzxsj,Uid:9b37be48-e38c-44a6-8f55-86ba4c7ac492,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:02.061887 containerd[2124]: time="2025-07-07T05:54:02.061582813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:02.061887 containerd[2124]: time="2025-07-07T05:54:02.061684933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:02.061887 containerd[2124]: time="2025-07-07T05:54:02.061784833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.062472 containerd[2124]: time="2025-07-07T05:54:02.061989529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.079450 containerd[2124]: time="2025-07-07T05:54:02.079370473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn46d,Uid:badc0f45-9086-46a6-953a-bad76ef54ea1,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:02.124832 containerd[2124]: time="2025-07-07T05:54:02.124539361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r49z8,Uid:178ac068-0fd7-4c52-ab31-776ba0fc0ea0,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:02.151436 containerd[2124]: time="2025-07-07T05:54:02.144311773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:02.151436 containerd[2124]: time="2025-07-07T05:54:02.144422113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:02.151436 containerd[2124]: time="2025-07-07T05:54:02.144459889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.151436 containerd[2124]: time="2025-07-07T05:54:02.144634213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.188706 containerd[2124]: time="2025-07-07T05:54:02.188140034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dzxsj,Uid:9b37be48-e38c-44a6-8f55-86ba4c7ac492,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\"" Jul 7 05:54:02.197972 containerd[2124]: time="2025-07-07T05:54:02.197555714Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 05:54:02.230417 containerd[2124]: time="2025-07-07T05:54:02.230164190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:02.232995 containerd[2124]: time="2025-07-07T05:54:02.232294574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:02.232995 containerd[2124]: time="2025-07-07T05:54:02.232366226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.232995 containerd[2124]: time="2025-07-07T05:54:02.232543526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:02.264763 containerd[2124]: time="2025-07-07T05:54:02.264673670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn46d,Uid:badc0f45-9086-46a6-953a-bad76ef54ea1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e18eef339a9695d73d3592bb8150654ac6bce59fd3e28133655d530d798bfdab\"" Jul 7 05:54:02.280099 containerd[2124]: time="2025-07-07T05:54:02.279698498Z" level=info msg="CreateContainer within sandbox \"e18eef339a9695d73d3592bb8150654ac6bce59fd3e28133655d530d798bfdab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 05:54:02.316718 containerd[2124]: time="2025-07-07T05:54:02.316556474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r49z8,Uid:178ac068-0fd7-4c52-ab31-776ba0fc0ea0,Namespace:kube-system,Attempt:0,} returns sandbox id \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\"" Jul 7 05:54:02.322707 containerd[2124]: time="2025-07-07T05:54:02.322377062Z" level=info msg="CreateContainer within sandbox \"e18eef339a9695d73d3592bb8150654ac6bce59fd3e28133655d530d798bfdab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7f6bf393f807a70b4235db63cffcf1a1dac1dbc88e79db4370341e1c4afb8ce\"" Jul 7 05:54:02.324426 containerd[2124]: time="2025-07-07T05:54:02.323842658Z" level=info msg="StartContainer for \"f7f6bf393f807a70b4235db63cffcf1a1dac1dbc88e79db4370341e1c4afb8ce\"" Jul 7 05:54:02.453885 containerd[2124]: time="2025-07-07T05:54:02.453824463Z" level=info msg="StartContainer for \"f7f6bf393f807a70b4235db63cffcf1a1dac1dbc88e79db4370341e1c4afb8ce\" returns successfully" Jul 7 05:54:02.588875 kubelet[3653]: I0707 05:54:02.587290 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tn46d" podStartSLOduration=1.587264764 podStartE2EDuration="1.587264764s" podCreationTimestamp="2025-07-07 05:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:02.5819146 +0000 UTC m=+5.472722489" watchObservedRunningTime="2025-07-07 05:54:02.587264764 +0000 UTC m=+5.478072701" Jul 7 05:54:03.478099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062015664.mount: Deactivated successfully. Jul 7 05:54:04.158175 containerd[2124]: time="2025-07-07T05:54:04.158018943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:04.161642 containerd[2124]: time="2025-07-07T05:54:04.161543583Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 7 05:54:04.164207 containerd[2124]: time="2025-07-07T05:54:04.164120127Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:04.167450 containerd[2124]: time="2025-07-07T05:54:04.167215455Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.969580469s" Jul 7 05:54:04.167450 containerd[2124]: time="2025-07-07T05:54:04.167283099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 7 05:54:04.170443 containerd[2124]: 
time="2025-07-07T05:54:04.170311863Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 05:54:04.174093 containerd[2124]: time="2025-07-07T05:54:04.172947423Z" level=info msg="CreateContainer within sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 05:54:04.203518 containerd[2124]: time="2025-07-07T05:54:04.203344756Z" level=info msg="CreateContainer within sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\"" Jul 7 05:54:04.205352 containerd[2124]: time="2025-07-07T05:54:04.205064272Z" level=info msg="StartContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\"" Jul 7 05:54:04.316379 containerd[2124]: time="2025-07-07T05:54:04.315958528Z" level=info msg="StartContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" returns successfully" Jul 7 05:54:07.759421 kubelet[3653]: I0707 05:54:07.757576 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dzxsj" podStartSLOduration=4.780940712 podStartE2EDuration="6.757552581s" podCreationTimestamp="2025-07-07 05:54:01 +0000 UTC" firstStartedPulling="2025-07-07 05:54:02.192402302 +0000 UTC m=+5.083210155" lastFinishedPulling="2025-07-07 05:54:04.169014171 +0000 UTC m=+7.059822024" observedRunningTime="2025-07-07 05:54:04.623347278 +0000 UTC m=+7.514155407" watchObservedRunningTime="2025-07-07 05:54:07.757552581 +0000 UTC m=+10.648360434" Jul 7 05:54:10.528936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3584798460.mount: Deactivated successfully. 
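The pod_startup_latency_tracker entry above reports podStartE2EDuration="6.757552581s" for cilium-operator-5d85765b45-dzxsj; that value is the gap between podCreationTimestamp (05:54:01) and watchObservedRunningTime (05:54:07.757552581). Recomputing it from the logged timestamps (truncated to microseconds, since `strptime`'s `%f` field takes at most six digits):

```python
from datetime import datetime

# Recompute the logged podStartE2EDuration from the two timestamps in the
# pod_startup_latency_tracker entry (fractional seconds truncated to 6 digits).
fmt = "%Y-%m-%d %H:%M:%S.%f"
created = datetime.strptime("2025-07-07 05:54:01.000000", fmt)
observed = datetime.strptime("2025-07-07 05:54:07.757552", fmt)
print((observed - created).total_seconds())  # → 6.757552
```

The same arithmetic reproduces the earlier kube-proxy entry (watchObservedRunningTime 05:54:02.587264764 minus creation at 05:54:01 gives the logged 1.587264764s).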
Jul 7 05:54:13.203172 containerd[2124]: time="2025-07-07T05:54:13.203075100Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:13.205533 containerd[2124]: time="2025-07-07T05:54:13.205108752Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 7 05:54:13.208796 containerd[2124]: time="2025-07-07T05:54:13.207691944Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:54:13.211788 containerd[2124]: time="2025-07-07T05:54:13.211410384Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.041011281s" Jul 7 05:54:13.211788 containerd[2124]: time="2025-07-07T05:54:13.211482024Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 7 05:54:13.219332 containerd[2124]: time="2025-07-07T05:54:13.219247848Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 05:54:13.249523 containerd[2124]: time="2025-07-07T05:54:13.249440557Z" level=info msg="CreateContainer within sandbox 
\"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\"" Jul 7 05:54:13.250405 containerd[2124]: time="2025-07-07T05:54:13.250281085Z" level=info msg="StartContainer for \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\"" Jul 7 05:54:13.354580 containerd[2124]: time="2025-07-07T05:54:13.354488329Z" level=info msg="StartContainer for \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\" returns successfully" Jul 7 05:54:14.235176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447-rootfs.mount: Deactivated successfully. Jul 7 05:54:14.586997 containerd[2124]: time="2025-07-07T05:54:14.586730235Z" level=info msg="shim disconnected" id=2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447 namespace=k8s.io Jul 7 05:54:14.586997 containerd[2124]: time="2025-07-07T05:54:14.586901487Z" level=warning msg="cleaning up after shim disconnected" id=2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447 namespace=k8s.io Jul 7 05:54:14.586997 containerd[2124]: time="2025-07-07T05:54:14.586925475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:14.647850 containerd[2124]: time="2025-07-07T05:54:14.645615843Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 05:54:14.690902 containerd[2124]: time="2025-07-07T05:54:14.689813356Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\"" Jul 7 05:54:14.693130 containerd[2124]: 
time="2025-07-07T05:54:14.692886520Z" level=info msg="StartContainer for \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\"" Jul 7 05:54:14.809512 containerd[2124]: time="2025-07-07T05:54:14.809378884Z" level=info msg="StartContainer for \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\" returns successfully" Jul 7 05:54:14.832548 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:54:14.833314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:14.833433 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:54:14.857095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:54:14.905254 containerd[2124]: time="2025-07-07T05:54:14.905164769Z" level=info msg="shim disconnected" id=dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af namespace=k8s.io Jul 7 05:54:14.905978 containerd[2124]: time="2025-07-07T05:54:14.905604101Z" level=warning msg="cleaning up after shim disconnected" id=dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af namespace=k8s.io Jul 7 05:54:14.906365 containerd[2124]: time="2025-07-07T05:54:14.905841437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:14.914419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:54:15.235275 systemd[1]: run-containerd-runc-k8s.io-dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af-runc.Hze69l.mount: Deactivated successfully. Jul 7 05:54:15.236032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af-rootfs.mount: Deactivated successfully. 
Jul 7 05:54:15.654055 containerd[2124]: time="2025-07-07T05:54:15.653697256Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 05:54:15.697902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514498741.mount: Deactivated successfully. Jul 7 05:54:15.704763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909192769.mount: Deactivated successfully. Jul 7 05:54:15.715706 containerd[2124]: time="2025-07-07T05:54:15.715268813Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\"" Jul 7 05:54:15.718274 containerd[2124]: time="2025-07-07T05:54:15.717947477Z" level=info msg="StartContainer for \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\"" Jul 7 05:54:15.839716 containerd[2124]: time="2025-07-07T05:54:15.839522201Z" level=info msg="StartContainer for \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\" returns successfully" Jul 7 05:54:15.891106 containerd[2124]: time="2025-07-07T05:54:15.890972322Z" level=info msg="shim disconnected" id=dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63 namespace=k8s.io Jul 7 05:54:15.891106 containerd[2124]: time="2025-07-07T05:54:15.891062058Z" level=warning msg="cleaning up after shim disconnected" id=dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63 namespace=k8s.io Jul 7 05:54:15.891106 containerd[2124]: time="2025-07-07T05:54:15.891084294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:16.235640 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63-rootfs.mount: Deactivated successfully. 
Jul 7 05:54:16.660531 containerd[2124]: time="2025-07-07T05:54:16.660461453Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 05:54:16.699973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034123431.mount: Deactivated successfully. Jul 7 05:54:16.702031 containerd[2124]: time="2025-07-07T05:54:16.700518378Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\"" Jul 7 05:54:16.704607 containerd[2124]: time="2025-07-07T05:54:16.704539902Z" level=info msg="StartContainer for \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\"" Jul 7 05:54:16.801436 containerd[2124]: time="2025-07-07T05:54:16.801368898Z" level=info msg="StartContainer for \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\" returns successfully" Jul 7 05:54:16.846773 containerd[2124]: time="2025-07-07T05:54:16.845340066Z" level=info msg="shim disconnected" id=14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56 namespace=k8s.io Jul 7 05:54:16.846773 containerd[2124]: time="2025-07-07T05:54:16.845411406Z" level=warning msg="cleaning up after shim disconnected" id=14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56 namespace=k8s.io Jul 7 05:54:16.846773 containerd[2124]: time="2025-07-07T05:54:16.845431566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:54:17.235213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56-rootfs.mount: Deactivated successfully. 
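The cilium-r49z8 pod's containers above are created strictly one at a time: each step's StartContainer returns, its shim disconnects and the rootfs mount is torn down before the next CreateContainer entry appears. A small sketch recovering that order from the ContainerMetadata fragments (log text abbreviated here to the relevant piece):

```python
import re

# Extract the container Name from CreateContainer metadata fragments,
# in the order they appear in the log above.
fragments = [
    "&ContainerMetadata{Name:mount-cgroup,Attempt:0,}",
    "&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}",
    "&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}",
    "&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}",
    "&ContainerMetadata{Name:cilium-agent,Attempt:0,}",
]
order = [re.search(r"Name:([A-Za-z0-9-]+),", f).group(1) for f in fragments]
print(order)
```

The first four names are cilium's init containers; only after clean-cilium-state exits does the long-running cilium-agent container start.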
Jul 7 05:54:17.667850 containerd[2124]: time="2025-07-07T05:54:17.667036254Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 05:54:17.703205 containerd[2124]: time="2025-07-07T05:54:17.703018867Z" level=info msg="CreateContainer within sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\"" Jul 7 05:54:17.706827 containerd[2124]: time="2025-07-07T05:54:17.706729927Z" level=info msg="StartContainer for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\"" Jul 7 05:54:17.768322 systemd[1]: run-containerd-runc-k8s.io-db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac-runc.46E9Y8.mount: Deactivated successfully. Jul 7 05:54:17.832557 containerd[2124]: time="2025-07-07T05:54:17.830785867Z" level=info msg="StartContainer for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" returns successfully" Jul 7 05:54:18.034784 kubelet[3653]: I0707 05:54:18.031331 3653 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:54:18.211412 kubelet[3653]: I0707 05:54:18.211270 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5x6p\" (UniqueName: \"kubernetes.io/projected/7a3ac8b4-271d-4e2b-a58e-798ad224b60c-kube-api-access-p5x6p\") pod \"coredns-7c65d6cfc9-m9tj5\" (UID: \"7a3ac8b4-271d-4e2b-a58e-798ad224b60c\") " pod="kube-system/coredns-7c65d6cfc9-m9tj5" Jul 7 05:54:18.211412 kubelet[3653]: I0707 05:54:18.211366 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a3ac8b4-271d-4e2b-a58e-798ad224b60c-config-volume\") pod 
\"coredns-7c65d6cfc9-m9tj5\" (UID: \"7a3ac8b4-271d-4e2b-a58e-798ad224b60c\") " pod="kube-system/coredns-7c65d6cfc9-m9tj5" Jul 7 05:54:18.211412 kubelet[3653]: I0707 05:54:18.211411 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0dd083e8-b521-4cc5-aaec-653c08f5f793-config-volume\") pod \"coredns-7c65d6cfc9-4lspd\" (UID: \"0dd083e8-b521-4cc5-aaec-653c08f5f793\") " pod="kube-system/coredns-7c65d6cfc9-4lspd" Jul 7 05:54:18.211699 kubelet[3653]: I0707 05:54:18.211450 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtgzb\" (UniqueName: \"kubernetes.io/projected/0dd083e8-b521-4cc5-aaec-653c08f5f793-kube-api-access-jtgzb\") pod \"coredns-7c65d6cfc9-4lspd\" (UID: \"0dd083e8-b521-4cc5-aaec-653c08f5f793\") " pod="kube-system/coredns-7c65d6cfc9-4lspd" Jul 7 05:54:18.403975 containerd[2124]: time="2025-07-07T05:54:18.403738326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m9tj5,Uid:7a3ac8b4-271d-4e2b-a58e-798ad224b60c,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:18.416352 containerd[2124]: time="2025-07-07T05:54:18.415841358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4lspd,Uid:0dd083e8-b521-4cc5-aaec-653c08f5f793,Namespace:kube-system,Attempt:0,}" Jul 7 05:54:20.794080 systemd-networkd[1690]: cilium_host: Link UP Jul 7 05:54:20.794719 (udev-worker)[4441]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:54:20.795177 systemd-networkd[1690]: cilium_net: Link UP Jul 7 05:54:20.795647 systemd-networkd[1690]: cilium_net: Gained carrier Jul 7 05:54:20.796823 systemd-networkd[1690]: cilium_host: Gained carrier Jul 7 05:54:20.801448 (udev-worker)[4480]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 05:54:20.986587 systemd-networkd[1690]: cilium_vxlan: Link UP Jul 7 05:54:20.986607 systemd-networkd[1690]: cilium_vxlan: Gained carrier Jul 7 05:54:21.227198 systemd-networkd[1690]: cilium_net: Gained IPv6LL Jul 7 05:54:21.554105 kernel: NET: Registered PF_ALG protocol family Jul 7 05:54:21.691144 systemd-networkd[1690]: cilium_host: Gained IPv6LL Jul 7 05:54:22.267964 systemd-networkd[1690]: cilium_vxlan: Gained IPv6LL Jul 7 05:54:22.878322 systemd-networkd[1690]: lxc_health: Link UP Jul 7 05:54:22.891631 (udev-worker)[4501]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:54:22.911362 systemd-networkd[1690]: lxc_health: Gained carrier Jul 7 05:54:23.629734 systemd-networkd[1690]: lxc81792dd81d57: Link UP Jul 7 05:54:23.643079 kernel: eth0: renamed from tmp4bf51 Jul 7 05:54:23.649279 systemd-networkd[1690]: lxc7129546afab3: Link UP Jul 7 05:54:23.670054 (udev-worker)[4499]: Network interface NamePolicy= disabled on kernel command line. Jul 7 05:54:23.670288 systemd-networkd[1690]: lxc81792dd81d57: Gained carrier Jul 7 05:54:23.681781 kernel: eth0: renamed from tmp49564 Jul 7 05:54:23.688864 systemd-networkd[1690]: lxc7129546afab3: Gained carrier Jul 7 05:54:24.170739 kubelet[3653]: I0707 05:54:24.170547 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r49z8" podStartSLOduration=12.275376601 podStartE2EDuration="23.170499443s" podCreationTimestamp="2025-07-07 05:54:01 +0000 UTC" firstStartedPulling="2025-07-07 05:54:02.31868783 +0000 UTC m=+5.209495695" lastFinishedPulling="2025-07-07 05:54:13.213810672 +0000 UTC m=+16.104618537" observedRunningTime="2025-07-07 05:54:18.735519164 +0000 UTC m=+21.626327029" watchObservedRunningTime="2025-07-07 05:54:24.170499443 +0000 UTC m=+27.061307308" Jul 7 05:54:24.187003 systemd-networkd[1690]: lxc_health: Gained IPv6LL Jul 7 05:54:24.827420 systemd-networkd[1690]: lxc7129546afab3: Gained IPv6LL Jul 7 05:54:25.531106 systemd-networkd[1690]: 
lxc81792dd81d57: Gained IPv6LL Jul 7 05:54:27.867456 ntpd[2080]: Listen normally on 6 cilium_host 192.168.0.32:123 Jul 7 05:54:27.867589 ntpd[2080]: Listen normally on 7 cilium_net [fe80::b470:d9ff:fe09:1dc9%4]:123 Jul 7 05:54:27.867701 ntpd[2080]: Listen normally on 8 cilium_host [fe80::5434:ecff:fe77:d31e%5]:123 Jul 7 05:54:27.867826 ntpd[2080]: Listen normally on 9 cilium_vxlan [fe80::ccc8:e1ff:feb0:94c3%6]:123 Jul 7 05:54:27.867915 ntpd[2080]: Listen normally on 10 lxc_health [fe80::3898:a5ff:feb2:f69c%8]:123 Jul 7 05:54:27.867989 ntpd[2080]: Listen normally on 11 lxc81792dd81d57 [fe80::ac28:11ff:fe6e:d131%10]:123 Jul 7 05:54:27.869508 ntpd[2080]: Listen normally on 12 lxc7129546afab3 [fe80::dce3:63ff:fe3e:ed12%12]:123 Jul 7 05:54:32.346156 containerd[2124]: time="2025-07-07T05:54:32.344926531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:32.346156 containerd[2124]: time="2025-07-07T05:54:32.345067795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:32.346156 containerd[2124]: time="2025-07-07T05:54:32.345109315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:32.346156 containerd[2124]: time="2025-07-07T05:54:32.345340255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:32.361860 containerd[2124]: time="2025-07-07T05:54:32.360217195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:54:32.361860 containerd[2124]: time="2025-07-07T05:54:32.360364075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:54:32.361860 containerd[2124]: time="2025-07-07T05:54:32.360396895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:32.361860 containerd[2124]: time="2025-07-07T05:54:32.360614239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:54:32.603781 containerd[2124]: time="2025-07-07T05:54:32.603424005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4lspd,Uid:0dd083e8-b521-4cc5-aaec-653c08f5f793,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bf519675334d3a151cb3907b97abd1b3dbf37e3bded642dba267904988932d3\"" Jul 7 05:54:32.616586 containerd[2124]: time="2025-07-07T05:54:32.616362333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-m9tj5,Uid:7a3ac8b4-271d-4e2b-a58e-798ad224b60c,Namespace:kube-system,Attempt:0,} returns sandbox id \"49564027c146bdeac2ee81b4d3450512621ea8056df50306b585fb0483e40642\"" Jul 7 05:54:32.623471 containerd[2124]: time="2025-07-07T05:54:32.623174817Z" level=info msg="CreateContainer within sandbox \"4bf519675334d3a151cb3907b97abd1b3dbf37e3bded642dba267904988932d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:54:32.626536 containerd[2124]: time="2025-07-07T05:54:32.626345697Z" level=info msg="CreateContainer within sandbox \"49564027c146bdeac2ee81b4d3450512621ea8056df50306b585fb0483e40642\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:54:32.676455 containerd[2124]: time="2025-07-07T05:54:32.676349961Z" level=info msg="CreateContainer within sandbox \"4bf519675334d3a151cb3907b97abd1b3dbf37e3bded642dba267904988932d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d33e498abd3dfd666cec799566b3cccaa511c86da44b4182ab78326f4a8a525e\"" Jul 7 05:54:32.679313 containerd[2124]: time="2025-07-07T05:54:32.677478237Z" level=info msg="StartContainer for \"d33e498abd3dfd666cec799566b3cccaa511c86da44b4182ab78326f4a8a525e\"" Jul 7 05:54:32.684775 containerd[2124]: time="2025-07-07T05:54:32.684417717Z" level=info msg="CreateContainer within sandbox \"49564027c146bdeac2ee81b4d3450512621ea8056df50306b585fb0483e40642\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"eef6e8ba6a92092d49f82a802fe9be5ff1eb823c1bf0909f4852b2f210e2561e\"" Jul 7 05:54:32.686848 containerd[2124]: time="2025-07-07T05:54:32.686010285Z" level=info msg="StartContainer for \"eef6e8ba6a92092d49f82a802fe9be5ff1eb823c1bf0909f4852b2f210e2561e\"" Jul 7 05:54:32.843770 containerd[2124]: time="2025-07-07T05:54:32.843574414Z" level=info msg="StartContainer for \"d33e498abd3dfd666cec799566b3cccaa511c86da44b4182ab78326f4a8a525e\" returns successfully" Jul 7 05:54:32.844270 containerd[2124]: time="2025-07-07T05:54:32.844094662Z" level=info msg="StartContainer for \"eef6e8ba6a92092d49f82a802fe9be5ff1eb823c1bf0909f4852b2f210e2561e\" returns successfully" Jul 7 05:54:33.803022 kubelet[3653]: I0707 05:54:33.802829 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-m9tj5" podStartSLOduration=32.802804559 podStartE2EDuration="32.802804559s" podCreationTimestamp="2025-07-07 05:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:33.797678699 +0000 UTC m=+36.688486576" watchObservedRunningTime="2025-07-07 05:54:33.802804559 +0000 UTC m=+36.693612448" Jul 7 05:54:41.664716 systemd[1]: Started sshd@7-172.31.20.83:22-139.178.89.65:51526.service - OpenSSH per-connection server daemon (139.178.89.65:51526). Jul 7 05:54:41.859510 sshd[5020]: Accepted publickey for core from 139.178.89.65 port 51526 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:41.862390 sshd[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:41.870867 systemd-logind[2093]: New session 8 of user core. Jul 7 05:54:41.881653 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 05:54:42.147540 sshd[5020]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:42.154134 systemd[1]: sshd@7-172.31.20.83:22-139.178.89.65:51526.service: Deactivated successfully. 
Jul 7 05:54:42.167112 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 05:54:42.173326 systemd-logind[2093]: Session 8 logged out. Waiting for processes to exit. Jul 7 05:54:42.175341 systemd-logind[2093]: Removed session 8. Jul 7 05:54:47.179397 systemd[1]: Started sshd@8-172.31.20.83:22-139.178.89.65:51536.service - OpenSSH per-connection server daemon (139.178.89.65:51536). Jul 7 05:54:47.363660 sshd[5035]: Accepted publickey for core from 139.178.89.65 port 51536 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:47.366451 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:47.375186 systemd-logind[2093]: New session 9 of user core. Jul 7 05:54:47.383258 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 05:54:47.624083 sshd[5035]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:47.629502 systemd[1]: sshd@8-172.31.20.83:22-139.178.89.65:51536.service: Deactivated successfully. Jul 7 05:54:47.637024 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 05:54:47.639282 systemd-logind[2093]: Session 9 logged out. Waiting for processes to exit. Jul 7 05:54:47.643017 systemd-logind[2093]: Removed session 9. Jul 7 05:54:52.657288 systemd[1]: Started sshd@9-172.31.20.83:22-139.178.89.65:42024.service - OpenSSH per-connection server daemon (139.178.89.65:42024). Jul 7 05:54:52.841414 sshd[5049]: Accepted publickey for core from 139.178.89.65 port 42024 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:52.843688 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:52.855922 systemd-logind[2093]: New session 10 of user core. Jul 7 05:54:52.868610 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 7 05:54:53.118044 sshd[5049]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:53.126136 systemd[1]: sshd@9-172.31.20.83:22-139.178.89.65:42024.service: Deactivated successfully. Jul 7 05:54:53.134015 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 05:54:53.134099 systemd-logind[2093]: Session 10 logged out. Waiting for processes to exit. Jul 7 05:54:53.138280 systemd-logind[2093]: Removed session 10. Jul 7 05:54:58.150309 systemd[1]: Started sshd@10-172.31.20.83:22-139.178.89.65:42032.service - OpenSSH per-connection server daemon (139.178.89.65:42032). Jul 7 05:54:58.336308 sshd[5068]: Accepted publickey for core from 139.178.89.65 port 42032 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:58.339415 sshd[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:58.349344 systemd-logind[2093]: New session 11 of user core. Jul 7 05:54:58.354435 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 05:54:58.615126 sshd[5068]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:58.621669 systemd[1]: sshd@10-172.31.20.83:22-139.178.89.65:42032.service: Deactivated successfully. Jul 7 05:54:58.623366 systemd-logind[2093]: Session 11 logged out. Waiting for processes to exit. Jul 7 05:54:58.631335 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 05:54:58.633785 systemd-logind[2093]: Removed session 11. Jul 7 05:54:58.645325 systemd[1]: Started sshd@11-172.31.20.83:22-139.178.89.65:42036.service - OpenSSH per-connection server daemon (139.178.89.65:42036). Jul 7 05:54:58.829266 sshd[5083]: Accepted publickey for core from 139.178.89.65 port 42036 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:58.831882 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:58.839922 systemd-logind[2093]: New session 12 of user core. 
Jul 7 05:54:58.849264 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 05:54:59.172885 sshd[5083]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:59.192913 systemd[1]: sshd@11-172.31.20.83:22-139.178.89.65:42036.service: Deactivated successfully. Jul 7 05:54:59.203912 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 05:54:59.207770 systemd-logind[2093]: Session 12 logged out. Waiting for processes to exit. Jul 7 05:54:59.221262 systemd[1]: Started sshd@12-172.31.20.83:22-139.178.89.65:42052.service - OpenSSH per-connection server daemon (139.178.89.65:42052). Jul 7 05:54:59.223410 systemd-logind[2093]: Removed session 12. Jul 7 05:54:59.410227 sshd[5095]: Accepted publickey for core from 139.178.89.65 port 42052 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:54:59.413033 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:54:59.420924 systemd-logind[2093]: New session 13 of user core. Jul 7 05:54:59.438400 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 05:54:59.677109 sshd[5095]: pam_unix(sshd:session): session closed for user core Jul 7 05:54:59.682238 systemd-logind[2093]: Session 13 logged out. Waiting for processes to exit. Jul 7 05:54:59.683731 systemd[1]: sshd@12-172.31.20.83:22-139.178.89.65:42052.service: Deactivated successfully. Jul 7 05:54:59.692494 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 05:54:59.696632 systemd-logind[2093]: Removed session 13. Jul 7 05:55:04.708269 systemd[1]: Started sshd@13-172.31.20.83:22-139.178.89.65:46644.service - OpenSSH per-connection server daemon (139.178.89.65:46644). 
Jul 7 05:55:04.883934 sshd[5111]: Accepted publickey for core from 139.178.89.65 port 46644 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:04.886617 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:04.895793 systemd-logind[2093]: New session 14 of user core. Jul 7 05:55:04.908288 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 05:55:05.146309 sshd[5111]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:05.152551 systemd-logind[2093]: Session 14 logged out. Waiting for processes to exit. Jul 7 05:55:05.156193 systemd[1]: sshd@13-172.31.20.83:22-139.178.89.65:46644.service: Deactivated successfully. Jul 7 05:55:05.163451 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 05:55:05.166450 systemd-logind[2093]: Removed session 14. Jul 7 05:55:10.178940 systemd[1]: Started sshd@14-172.31.20.83:22-139.178.89.65:40608.service - OpenSSH per-connection server daemon (139.178.89.65:40608). Jul 7 05:55:10.364820 sshd[5124]: Accepted publickey for core from 139.178.89.65 port 40608 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:10.367836 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:10.376913 systemd-logind[2093]: New session 15 of user core. Jul 7 05:55:10.384308 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 05:55:10.631101 sshd[5124]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:10.636895 systemd[1]: sshd@14-172.31.20.83:22-139.178.89.65:40608.service: Deactivated successfully. Jul 7 05:55:10.638039 systemd-logind[2093]: Session 15 logged out. Waiting for processes to exit. Jul 7 05:55:10.647546 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 05:55:10.649027 systemd-logind[2093]: Removed session 15. 
Jul 7 05:55:15.661373 systemd[1]: Started sshd@15-172.31.20.83:22-139.178.89.65:40624.service - OpenSSH per-connection server daemon (139.178.89.65:40624). Jul 7 05:55:15.845118 sshd[5139]: Accepted publickey for core from 139.178.89.65 port 40624 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:15.847863 sshd[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:15.859703 systemd-logind[2093]: New session 16 of user core. Jul 7 05:55:15.865265 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 05:55:16.110312 sshd[5139]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:16.119293 systemd[1]: sshd@15-172.31.20.83:22-139.178.89.65:40624.service: Deactivated successfully. Jul 7 05:55:16.126172 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 05:55:16.127638 systemd-logind[2093]: Session 16 logged out. Waiting for processes to exit. Jul 7 05:55:16.129382 systemd-logind[2093]: Removed session 16. Jul 7 05:55:21.143268 systemd[1]: Started sshd@16-172.31.20.83:22-139.178.89.65:35860.service - OpenSSH per-connection server daemon (139.178.89.65:35860). Jul 7 05:55:21.324993 sshd[5152]: Accepted publickey for core from 139.178.89.65 port 35860 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:21.327712 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:21.335507 systemd-logind[2093]: New session 17 of user core. Jul 7 05:55:21.341278 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 05:55:21.593172 sshd[5152]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:21.600936 systemd[1]: sshd@16-172.31.20.83:22-139.178.89.65:35860.service: Deactivated successfully. Jul 7 05:55:21.607901 systemd-logind[2093]: Session 17 logged out. Waiting for processes to exit. Jul 7 05:55:21.609043 systemd[1]: session-17.scope: Deactivated successfully. 
Jul 7 05:55:21.611906 systemd-logind[2093]: Removed session 17. Jul 7 05:55:21.625294 systemd[1]: Started sshd@17-172.31.20.83:22-139.178.89.65:35870.service - OpenSSH per-connection server daemon (139.178.89.65:35870). Jul 7 05:55:21.812554 sshd[5166]: Accepted publickey for core from 139.178.89.65 port 35870 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:21.815264 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:21.822762 systemd-logind[2093]: New session 18 of user core. Jul 7 05:55:21.836292 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 05:55:22.173851 sshd[5166]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:22.180487 systemd[1]: sshd@17-172.31.20.83:22-139.178.89.65:35870.service: Deactivated successfully. Jul 7 05:55:22.189890 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 05:55:22.191663 systemd-logind[2093]: Session 18 logged out. Waiting for processes to exit. Jul 7 05:55:22.196647 systemd-logind[2093]: Removed session 18. Jul 7 05:55:22.203412 systemd[1]: Started sshd@18-172.31.20.83:22-139.178.89.65:35874.service - OpenSSH per-connection server daemon (139.178.89.65:35874). Jul 7 05:55:22.378693 sshd[5178]: Accepted publickey for core from 139.178.89.65 port 35874 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:22.381472 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:22.389502 systemd-logind[2093]: New session 19 of user core. Jul 7 05:55:22.399250 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 05:55:25.109187 sshd[5178]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:25.121381 systemd-logind[2093]: Session 19 logged out. Waiting for processes to exit. Jul 7 05:55:25.122473 systemd[1]: sshd@18-172.31.20.83:22-139.178.89.65:35874.service: Deactivated successfully. 
Jul 7 05:55:25.132678 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 05:55:25.151291 systemd[1]: Started sshd@19-172.31.20.83:22-139.178.89.65:35888.service - OpenSSH per-connection server daemon (139.178.89.65:35888). Jul 7 05:55:25.156291 systemd-logind[2093]: Removed session 19. Jul 7 05:55:25.348547 sshd[5198]: Accepted publickey for core from 139.178.89.65 port 35888 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:25.351301 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:25.359622 systemd-logind[2093]: New session 20 of user core. Jul 7 05:55:25.365465 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 05:55:25.856104 sshd[5198]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:25.863399 systemd[1]: sshd@19-172.31.20.83:22-139.178.89.65:35888.service: Deactivated successfully. Jul 7 05:55:25.870983 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 05:55:25.871294 systemd-logind[2093]: Session 20 logged out. Waiting for processes to exit. Jul 7 05:55:25.874888 systemd-logind[2093]: Removed session 20. Jul 7 05:55:25.886644 systemd[1]: Started sshd@20-172.31.20.83:22-139.178.89.65:35894.service - OpenSSH per-connection server daemon (139.178.89.65:35894). Jul 7 05:55:26.072643 sshd[5210]: Accepted publickey for core from 139.178.89.65 port 35894 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:26.076035 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:26.084261 systemd-logind[2093]: New session 21 of user core. Jul 7 05:55:26.092381 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 05:55:26.334061 sshd[5210]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:26.340923 systemd-logind[2093]: Session 21 logged out. Waiting for processes to exit. 
Jul 7 05:55:26.342182 systemd[1]: sshd@20-172.31.20.83:22-139.178.89.65:35894.service: Deactivated successfully. Jul 7 05:55:26.348970 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 05:55:26.351851 systemd-logind[2093]: Removed session 21. Jul 7 05:55:31.365243 systemd[1]: Started sshd@21-172.31.20.83:22-139.178.89.65:41432.service - OpenSSH per-connection server daemon (139.178.89.65:41432). Jul 7 05:55:31.552400 sshd[5223]: Accepted publickey for core from 139.178.89.65 port 41432 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:31.555238 sshd[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:31.563865 systemd-logind[2093]: New session 22 of user core. Jul 7 05:55:31.572507 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 05:55:31.821565 sshd[5223]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:31.831077 systemd[1]: sshd@21-172.31.20.83:22-139.178.89.65:41432.service: Deactivated successfully. Jul 7 05:55:31.838556 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 05:55:31.842350 systemd-logind[2093]: Session 22 logged out. Waiting for processes to exit. Jul 7 05:55:31.845080 systemd-logind[2093]: Removed session 22. Jul 7 05:55:36.863292 systemd[1]: Started sshd@22-172.31.20.83:22-139.178.89.65:41434.service - OpenSSH per-connection server daemon (139.178.89.65:41434). Jul 7 05:55:37.045386 sshd[5242]: Accepted publickey for core from 139.178.89.65 port 41434 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:37.048376 sshd[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:37.057879 systemd-logind[2093]: New session 23 of user core. Jul 7 05:55:37.064518 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 05:55:37.306921 sshd[5242]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:37.314338 systemd[1]: sshd@22-172.31.20.83:22-139.178.89.65:41434.service: Deactivated successfully. Jul 7 05:55:37.322553 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 05:55:37.325146 systemd-logind[2093]: Session 23 logged out. Waiting for processes to exit. Jul 7 05:55:37.327553 systemd-logind[2093]: Removed session 23. Jul 7 05:55:42.337241 systemd[1]: Started sshd@23-172.31.20.83:22-139.178.89.65:47950.service - OpenSSH per-connection server daemon (139.178.89.65:47950). Jul 7 05:55:42.522623 sshd[5256]: Accepted publickey for core from 139.178.89.65 port 47950 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:42.525467 sshd[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:42.534051 systemd-logind[2093]: New session 24 of user core. Jul 7 05:55:42.539253 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 05:55:42.778183 sshd[5256]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:42.786429 systemd-logind[2093]: Session 24 logged out. Waiting for processes to exit. Jul 7 05:55:42.787836 systemd[1]: sshd@23-172.31.20.83:22-139.178.89.65:47950.service: Deactivated successfully. Jul 7 05:55:42.794952 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 05:55:42.798341 systemd-logind[2093]: Removed session 24. Jul 7 05:55:47.809267 systemd[1]: Started sshd@24-172.31.20.83:22-139.178.89.65:47954.service - OpenSSH per-connection server daemon (139.178.89.65:47954). Jul 7 05:55:47.990897 sshd[5270]: Accepted publickey for core from 139.178.89.65 port 47954 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:47.993593 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:48.001677 systemd-logind[2093]: New session 25 of user core. 
Jul 7 05:55:48.011268 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 05:55:48.248134 sshd[5270]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:48.256327 systemd[1]: sshd@24-172.31.20.83:22-139.178.89.65:47954.service: Deactivated successfully. Jul 7 05:55:48.263383 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 05:55:48.266039 systemd-logind[2093]: Session 25 logged out. Waiting for processes to exit. Jul 7 05:55:48.268022 systemd-logind[2093]: Removed session 25. Jul 7 05:55:48.280240 systemd[1]: Started sshd@25-172.31.20.83:22-139.178.89.65:47962.service - OpenSSH per-connection server daemon (139.178.89.65:47962). Jul 7 05:55:48.468262 sshd[5284]: Accepted publickey for core from 139.178.89.65 port 47962 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:48.470986 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:48.479560 systemd-logind[2093]: New session 26 of user core. Jul 7 05:55:48.486286 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 7 05:55:51.346470 kubelet[3653]: I0707 05:55:51.346369 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4lspd" podStartSLOduration=110.346346436 podStartE2EDuration="1m50.346346436s" podCreationTimestamp="2025-07-07 05:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:54:33.858626267 +0000 UTC m=+36.749434144" watchObservedRunningTime="2025-07-07 05:55:51.346346436 +0000 UTC m=+114.237154277" Jul 7 05:55:51.374616 containerd[2124]: time="2025-07-07T05:55:51.373555176Z" level=info msg="StopContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" with timeout 30 (s)" Jul 7 05:55:51.380727 containerd[2124]: time="2025-07-07T05:55:51.379351152Z" level=info msg="Stop container \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" with signal terminated" Jul 7 05:55:51.413947 systemd[1]: run-containerd-runc-k8s.io-db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac-runc.aSID0H.mount: Deactivated successfully. 
Jul 7 05:55:51.443561 containerd[2124]: time="2025-07-07T05:55:51.443427780Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:55:51.459762 containerd[2124]: time="2025-07-07T05:55:51.459557352Z" level=info msg="StopContainer for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" with timeout 2 (s)" Jul 7 05:55:51.461823 containerd[2124]: time="2025-07-07T05:55:51.460372644Z" level=info msg="Stop container \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" with signal terminated" Jul 7 05:55:51.482287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f-rootfs.mount: Deactivated successfully. Jul 7 05:55:51.490176 systemd-networkd[1690]: lxc_health: Link DOWN Jul 7 05:55:51.492810 systemd-networkd[1690]: lxc_health: Lost carrier Jul 7 05:55:51.514001 containerd[2124]: time="2025-07-07T05:55:51.513804529Z" level=info msg="shim disconnected" id=8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f namespace=k8s.io Jul 7 05:55:51.514001 containerd[2124]: time="2025-07-07T05:55:51.513889909Z" level=warning msg="cleaning up after shim disconnected" id=8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f namespace=k8s.io Jul 7 05:55:51.514001 containerd[2124]: time="2025-07-07T05:55:51.513920449Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:51.544591 containerd[2124]: time="2025-07-07T05:55:51.544281073Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:55:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:55:51.551975 containerd[2124]: 
time="2025-07-07T05:55:51.551922937Z" level=info msg="StopContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" returns successfully" Jul 7 05:55:51.554073 containerd[2124]: time="2025-07-07T05:55:51.553910809Z" level=info msg="StopPodSandbox for \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\"" Jul 7 05:55:51.554073 containerd[2124]: time="2025-07-07T05:55:51.554003053Z" level=info msg="Container to stop \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.560450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f-shm.mount: Deactivated successfully. Jul 7 05:55:51.572057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac-rootfs.mount: Deactivated successfully. Jul 7 05:55:51.578333 containerd[2124]: time="2025-07-07T05:55:51.578081245Z" level=info msg="shim disconnected" id=db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac namespace=k8s.io Jul 7 05:55:51.578333 containerd[2124]: time="2025-07-07T05:55:51.578248753Z" level=warning msg="cleaning up after shim disconnected" id=db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac namespace=k8s.io Jul 7 05:55:51.578333 containerd[2124]: time="2025-07-07T05:55:51.578274025Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:51.611554 containerd[2124]: time="2025-07-07T05:55:51.611317057Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:55:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:55:51.620010 containerd[2124]: time="2025-07-07T05:55:51.619866721Z" level=info msg="StopContainer for 
\"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" returns successfully" Jul 7 05:55:51.620975 containerd[2124]: time="2025-07-07T05:55:51.620911369Z" level=info msg="StopPodSandbox for \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\"" Jul 7 05:55:51.621153 containerd[2124]: time="2025-07-07T05:55:51.621005341Z" level=info msg="Container to stop \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.621153 containerd[2124]: time="2025-07-07T05:55:51.621035041Z" level=info msg="Container to stop \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.621153 containerd[2124]: time="2025-07-07T05:55:51.621058657Z" level=info msg="Container to stop \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.621153 containerd[2124]: time="2025-07-07T05:55:51.621081805Z" level=info msg="Container to stop \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.621153 containerd[2124]: time="2025-07-07T05:55:51.621103921Z" level=info msg="Container to stop \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:55:51.653693 containerd[2124]: time="2025-07-07T05:55:51.653243269Z" level=info msg="shim disconnected" id=41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f namespace=k8s.io Jul 7 05:55:51.653693 containerd[2124]: time="2025-07-07T05:55:51.653681533Z" level=warning msg="cleaning up after shim disconnected" id=41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f namespace=k8s.io Jul 7 05:55:51.654033 
containerd[2124]: time="2025-07-07T05:55:51.653707093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:51.690451 containerd[2124]: time="2025-07-07T05:55:51.690384697Z" level=info msg="TearDown network for sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" successfully" Jul 7 05:55:51.690451 containerd[2124]: time="2025-07-07T05:55:51.690451381Z" level=info msg="StopPodSandbox for \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" returns successfully" Jul 7 05:55:51.695370 containerd[2124]: time="2025-07-07T05:55:51.695128322Z" level=info msg="shim disconnected" id=71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931 namespace=k8s.io Jul 7 05:55:51.695370 containerd[2124]: time="2025-07-07T05:55:51.695361710Z" level=warning msg="cleaning up after shim disconnected" id=71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931 namespace=k8s.io Jul 7 05:55:51.695565 containerd[2124]: time="2025-07-07T05:55:51.695387078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:51.732926 containerd[2124]: time="2025-07-07T05:55:51.732528722Z" level=info msg="TearDown network for sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" successfully" Jul 7 05:55:51.732926 containerd[2124]: time="2025-07-07T05:55:51.732595622Z" level=info msg="StopPodSandbox for \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" returns successfully" Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840475 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-kernel\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840546 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-4646b\" (UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840585 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-xtables-lock\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840604 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840622 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-etc-cni-netd\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.842308 kubelet[3653]: I0707 05:55:51.840654 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-run\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840666 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod 
"178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840691 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-config-path\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840727 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hubble-tls\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840784 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-bpf-maps\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840824 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-clustermesh-secrets\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844179 kubelet[3653]: I0707 05:55:51.840858 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cni-path\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.840891 3653 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-cgroup\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.840922 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hostproc\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.840961 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b37be48-e38c-44a6-8f55-86ba4c7ac492-cilium-config-path\") pod \"9b37be48-e38c-44a6-8f55-86ba4c7ac492\" (UID: \"9b37be48-e38c-44a6-8f55-86ba4c7ac492\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.840997 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-net\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.841029 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-lib-modules\") pod \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\" (UID: \"178ac068-0fd7-4c52-ab31-776ba0fc0ea0\") " Jul 7 05:55:51.844552 kubelet[3653]: I0707 05:55:51.841067 3653 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8kl\" (UniqueName: \"kubernetes.io/projected/9b37be48-e38c-44a6-8f55-86ba4c7ac492-kube-api-access-qr8kl\") pod \"9b37be48-e38c-44a6-8f55-86ba4c7ac492\" (UID: 
\"9b37be48-e38c-44a6-8f55-86ba4c7ac492\") " Jul 7 05:55:51.844917 kubelet[3653]: I0707 05:55:51.841124 3653 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-kernel\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.844917 kubelet[3653]: I0707 05:55:51.841149 3653 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-xtables-lock\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.844917 kubelet[3653]: I0707 05:55:51.844075 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.844917 kubelet[3653]: I0707 05:55:51.844168 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.854461 kubelet[3653]: I0707 05:55:51.853819 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b37be48-e38c-44a6-8f55-86ba4c7ac492-kube-api-access-qr8kl" (OuterVolumeSpecName: "kube-api-access-qr8kl") pod "9b37be48-e38c-44a6-8f55-86ba4c7ac492" (UID: "9b37be48-e38c-44a6-8f55-86ba4c7ac492"). InnerVolumeSpecName "kube-api-access-qr8kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:55:51.855761 kubelet[3653]: I0707 05:55:51.855652 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.855761 kubelet[3653]: I0707 05:55:51.855852 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.855761 kubelet[3653]: I0707 05:55:51.855898 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.856902 kubelet[3653]: I0707 05:55:51.856856 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hostproc" (OuterVolumeSpecName: "hostproc") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.859069 kubelet[3653]: I0707 05:55:51.858980 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cni-path" (OuterVolumeSpecName: "cni-path") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.861494 kubelet[3653]: I0707 05:55:51.860686 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:55:51.861494 kubelet[3653]: I0707 05:55:51.861028 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:55:51.861494 kubelet[3653]: I0707 05:55:51.861176 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:55:51.863137 kubelet[3653]: I0707 05:55:51.862977 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b" (OuterVolumeSpecName: "kube-api-access-4646b") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "kube-api-access-4646b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:55:51.866195 kubelet[3653]: I0707 05:55:51.866138 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b37be48-e38c-44a6-8f55-86ba4c7ac492-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b37be48-e38c-44a6-8f55-86ba4c7ac492" (UID: "9b37be48-e38c-44a6-8f55-86ba4c7ac492"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:55:51.869194 kubelet[3653]: I0707 05:55:51.869144 3653 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "178ac068-0fd7-4c52-ab31-776ba0fc0ea0" (UID: "178ac068-0fd7-4c52-ab31-776ba0fc0ea0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 05:55:51.941645 kubelet[3653]: I0707 05:55:51.941605 3653 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-host-proc-sys-net\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.942302 kubelet[3653]: I0707 05:55:51.942267 3653 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-lib-modules\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.942973 kubelet[3653]: I0707 05:55:51.942418 3653 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr8kl\" (UniqueName: \"kubernetes.io/projected/9b37be48-e38c-44a6-8f55-86ba4c7ac492-kube-api-access-qr8kl\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.942973 kubelet[3653]: I0707 05:55:51.942445 3653 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4646b\" (UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-kube-api-access-4646b\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.942973 kubelet[3653]: I0707 05:55:51.942672 3653 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-etc-cni-netd\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.942973 kubelet[3653]: I0707 05:55:51.942721 3653 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-run\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943384 kubelet[3653]: I0707 05:55:51.943324 3653 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-config-path\") on node \"ip-172-31-20-83\" 
DevicePath \"\"" Jul 7 05:55:51.943456 kubelet[3653]: I0707 05:55:51.943394 3653 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hubble-tls\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943456 kubelet[3653]: I0707 05:55:51.943420 3653 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cni-path\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943456 kubelet[3653]: I0707 05:55:51.943441 3653 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-bpf-maps\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943659 kubelet[3653]: I0707 05:55:51.943463 3653 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-clustermesh-secrets\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943659 kubelet[3653]: I0707 05:55:51.943489 3653 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-cilium-cgroup\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943659 kubelet[3653]: I0707 05:55:51.943510 3653 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/178ac068-0fd7-4c52-ab31-776ba0fc0ea0-hostproc\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:51.943659 kubelet[3653]: I0707 05:55:51.943530 3653 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b37be48-e38c-44a6-8f55-86ba4c7ac492-cilium-config-path\") on node \"ip-172-31-20-83\" DevicePath \"\"" Jul 7 05:55:52.005322 kubelet[3653]: I0707 05:55:52.005231 3653 scope.go:117] 
"RemoveContainer" containerID="db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac" Jul 7 05:55:52.015781 containerd[2124]: time="2025-07-07T05:55:52.015438587Z" level=info msg="RemoveContainer for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\"" Jul 7 05:55:52.029398 containerd[2124]: time="2025-07-07T05:55:52.029346743Z" level=info msg="RemoveContainer for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" returns successfully" Jul 7 05:55:52.030053 kubelet[3653]: I0707 05:55:52.029994 3653 scope.go:117] "RemoveContainer" containerID="14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56" Jul 7 05:55:52.037102 containerd[2124]: time="2025-07-07T05:55:52.035090627Z" level=info msg="RemoveContainer for \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\"" Jul 7 05:55:52.041617 containerd[2124]: time="2025-07-07T05:55:52.041562119Z" level=info msg="RemoveContainer for \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\" returns successfully" Jul 7 05:55:52.044227 kubelet[3653]: I0707 05:55:52.043791 3653 scope.go:117] "RemoveContainer" containerID="dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63" Jul 7 05:55:52.053321 containerd[2124]: time="2025-07-07T05:55:52.053269043Z" level=info msg="RemoveContainer for \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\"" Jul 7 05:55:52.061182 containerd[2124]: time="2025-07-07T05:55:52.061046375Z" level=info msg="RemoveContainer for \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\" returns successfully" Jul 7 05:55:52.063872 kubelet[3653]: I0707 05:55:52.062387 3653 scope.go:117] "RemoveContainer" containerID="dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af" Jul 7 05:55:52.066216 containerd[2124]: time="2025-07-07T05:55:52.066166127Z" level=info msg="RemoveContainer for \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\"" Jul 7 05:55:52.072660 
containerd[2124]: time="2025-07-07T05:55:52.072582851Z" level=info msg="RemoveContainer for \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\" returns successfully" Jul 7 05:55:52.073524 kubelet[3653]: I0707 05:55:52.073264 3653 scope.go:117] "RemoveContainer" containerID="2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447" Jul 7 05:55:52.076379 containerd[2124]: time="2025-07-07T05:55:52.075998399Z" level=info msg="RemoveContainer for \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\"" Jul 7 05:55:52.082116 containerd[2124]: time="2025-07-07T05:55:52.082066799Z" level=info msg="RemoveContainer for \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\" returns successfully" Jul 7 05:55:52.082906 kubelet[3653]: I0707 05:55:52.082719 3653 scope.go:117] "RemoveContainer" containerID="db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac" Jul 7 05:55:52.083844 containerd[2124]: time="2025-07-07T05:55:52.083563043Z" level=error msg="ContainerStatus for \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\": not found" Jul 7 05:55:52.084010 kubelet[3653]: E0707 05:55:52.083916 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\": not found" containerID="db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac" Jul 7 05:55:52.084296 kubelet[3653]: I0707 05:55:52.084112 3653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac"} err="failed to get container status \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"db3965f7d2785f50c469b0b57de6aaf84b85d8b5fc66bc7b7e4e9ab9550065ac\": not found" Jul 7 05:55:52.084394 kubelet[3653]: I0707 05:55:52.084329 3653 scope.go:117] "RemoveContainer" containerID="14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56" Jul 7 05:55:52.085037 containerd[2124]: time="2025-07-07T05:55:52.084950051Z" level=error msg="ContainerStatus for \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\": not found" Jul 7 05:55:52.085308 kubelet[3653]: E0707 05:55:52.085267 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\": not found" containerID="14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56" Jul 7 05:55:52.085386 kubelet[3653]: I0707 05:55:52.085343 3653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56"} err="failed to get container status \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\": rpc error: code = NotFound desc = an error occurred when try to find container \"14fbe345f6e1775633426770945471324915c65db667d4103cd1d62dac845a56\": not found" Jul 7 05:55:52.085450 kubelet[3653]: I0707 05:55:52.085381 3653 scope.go:117] "RemoveContainer" containerID="dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63" Jul 7 05:55:52.085920 containerd[2124]: time="2025-07-07T05:55:52.085783139Z" level=error msg="ContainerStatus for \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\": not found" Jul 7 05:55:52.086128 kubelet[3653]: E0707 05:55:52.086081 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\": not found" containerID="dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63" Jul 7 05:55:52.086198 kubelet[3653]: I0707 05:55:52.086151 3653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63"} err="failed to get container status \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfe7721872a2b0f1a5a5fc3fd797ba0e1c93335e3302ec0aa723751008b73d63\": not found" Jul 7 05:55:52.086198 kubelet[3653]: I0707 05:55:52.086185 3653 scope.go:117] "RemoveContainer" containerID="dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af" Jul 7 05:55:52.086527 containerd[2124]: time="2025-07-07T05:55:52.086471255Z" level=error msg="ContainerStatus for \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\": not found" Jul 7 05:55:52.087055 kubelet[3653]: E0707 05:55:52.086854 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\": not found" containerID="dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af" Jul 7 05:55:52.087055 kubelet[3653]: I0707 05:55:52.086901 3653 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af"} err="failed to get container status \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc852d1d8ea2155a6b6ea8991f6d435171850772b94fd185b223432d382a63af\": not found" Jul 7 05:55:52.087055 kubelet[3653]: I0707 05:55:52.086932 3653 scope.go:117] "RemoveContainer" containerID="2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447" Jul 7 05:55:52.087427 containerd[2124]: time="2025-07-07T05:55:52.087321131Z" level=error msg="ContainerStatus for \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\": not found" Jul 7 05:55:52.087635 kubelet[3653]: E0707 05:55:52.087593 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\": not found" containerID="2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447" Jul 7 05:55:52.087725 kubelet[3653]: I0707 05:55:52.087644 3653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447"} err="failed to get container status \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e809519cfa830b0e65894647869ad0a5dac49220dae0afb77610b927ccfb447\": not found" Jul 7 05:55:52.087725 kubelet[3653]: I0707 05:55:52.087677 3653 scope.go:117] "RemoveContainer" containerID="8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f" Jul 7 05:55:52.089846 containerd[2124]: 
time="2025-07-07T05:55:52.089639219Z" level=info msg="RemoveContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\"" Jul 7 05:55:52.095869 containerd[2124]: time="2025-07-07T05:55:52.095809871Z" level=info msg="RemoveContainer for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" returns successfully" Jul 7 05:55:52.096258 kubelet[3653]: I0707 05:55:52.096186 3653 scope.go:117] "RemoveContainer" containerID="8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f" Jul 7 05:55:52.096916 containerd[2124]: time="2025-07-07T05:55:52.096821112Z" level=error msg="ContainerStatus for \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\": not found" Jul 7 05:55:52.097244 kubelet[3653]: E0707 05:55:52.097203 3653 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\": not found" containerID="8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f" Jul 7 05:55:52.097329 kubelet[3653]: I0707 05:55:52.097258 3653 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f"} err="failed to get container status \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\": rpc error: code = NotFound desc = an error occurred when try to find container \"8701d41684da1012114e7c734a8c7994230bf7234b50a9f61e953f8201a2954f\": not found" Jul 7 05:55:52.393939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931-rootfs.mount: Deactivated successfully. 
Jul 7 05:55:52.394216 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931-shm.mount: Deactivated successfully. Jul 7 05:55:52.394435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f-rootfs.mount: Deactivated successfully. Jul 7 05:55:52.394646 systemd[1]: var-lib-kubelet-pods-178ac068\x2d0fd7\x2d4c52\x2dab31\x2d776ba0fc0ea0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4646b.mount: Deactivated successfully. Jul 7 05:55:52.395455 systemd[1]: var-lib-kubelet-pods-9b37be48\x2de38c\x2d44a6\x2d8f55\x2d86ba4c7ac492-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqr8kl.mount: Deactivated successfully. Jul 7 05:55:52.395852 systemd[1]: var-lib-kubelet-pods-178ac068\x2d0fd7\x2d4c52\x2dab31\x2d776ba0fc0ea0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 05:55:52.396221 systemd[1]: var-lib-kubelet-pods-178ac068\x2d0fd7\x2d4c52\x2dab31\x2d776ba0fc0ea0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 05:55:52.699101 kubelet[3653]: E0707 05:55:52.698731 3653 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 05:55:53.310320 sshd[5284]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:53.319013 systemd[1]: sshd@25-172.31.20.83:22-139.178.89.65:47962.service: Deactivated successfully. Jul 7 05:55:53.324907 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 05:55:53.325360 systemd-logind[2093]: Session 26 logged out. Waiting for processes to exit. Jul 7 05:55:53.328596 systemd-logind[2093]: Removed session 26. 
Jul 7 05:55:53.346257 systemd[1]: Started sshd@26-172.31.20.83:22-139.178.89.65:58122.service - OpenSSH per-connection server daemon (139.178.89.65:58122). Jul 7 05:55:53.466509 kubelet[3653]: I0707 05:55:53.466379 3653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" path="/var/lib/kubelet/pods/178ac068-0fd7-4c52-ab31-776ba0fc0ea0/volumes" Jul 7 05:55:53.467901 kubelet[3653]: I0707 05:55:53.467857 3653 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b37be48-e38c-44a6-8f55-86ba4c7ac492" path="/var/lib/kubelet/pods/9b37be48-e38c-44a6-8f55-86ba4c7ac492/volumes" Jul 7 05:55:53.517812 sshd[5454]: Accepted publickey for core from 139.178.89.65 port 58122 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:53.520520 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:53.529922 systemd-logind[2093]: New session 27 of user core. Jul 7 05:55:53.540258 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 05:55:53.867373 ntpd[2080]: Deleting interface #10 lxc_health, fe80::3898:a5ff:feb2:f69c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jul 7 05:55:53.868301 ntpd[2080]: 7 Jul 05:55:53 ntpd[2080]: Deleting interface #10 lxc_health, fe80::3898:a5ff:feb2:f69c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Jul 7 05:55:54.462783 kubelet[3653]: E0707 05:55:54.460889 3653 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-4lspd" podUID="0dd083e8-b521-4cc5-aaec-653c08f5f793" Jul 7 05:55:54.992365 sshd[5454]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005850 3653 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="mount-bpf-fs" Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005901 3653 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="mount-cgroup" Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005919 3653 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b37be48-e38c-44a6-8f55-86ba4c7ac492" containerName="cilium-operator" Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005948 3653 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="apply-sysctl-overwrites" Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005968 3653 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="clean-cilium-state" Jul 7 05:55:55.010612 kubelet[3653]: E0707 05:55:55.005984 3653 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="cilium-agent" Jul 7 05:55:55.010612 kubelet[3653]: I0707 05:55:55.006028 3653 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b37be48-e38c-44a6-8f55-86ba4c7ac492" containerName="cilium-operator" Jul 7 05:55:55.010612 kubelet[3653]: I0707 05:55:55.006046 3653 memory_manager.go:354] "RemoveStaleState removing state" podUID="178ac068-0fd7-4c52-ab31-776ba0fc0ea0" containerName="cilium-agent" Jul 7 05:55:55.007364 systemd[1]: sshd@26-172.31.20.83:22-139.178.89.65:58122.service: Deactivated successfully. Jul 7 05:55:55.031867 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 05:55:55.040889 systemd-logind[2093]: Session 27 logged out. Waiting for processes to exit. Jul 7 05:55:55.060366 systemd[1]: Started sshd@27-172.31.20.83:22-139.178.89.65:58124.service - OpenSSH per-connection server daemon (139.178.89.65:58124). Jul 7 05:55:55.063779 systemd-logind[2093]: Removed session 27. Jul 7 05:55:55.072066 kubelet[3653]: I0707 05:55:55.072018 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-etc-cni-netd\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.072330 kubelet[3653]: I0707 05:55:55.072298 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09246d02-3c49-4a34-a3c4-252d87ade9f1-cilium-config-path\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.073961 kubelet[3653]: I0707 05:55:55.073901 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-cni-path\") pod 
\"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074089 kubelet[3653]: I0707 05:55:55.073971 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-host-proc-sys-net\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074089 kubelet[3653]: I0707 05:55:55.074023 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2fl\" (UniqueName: \"kubernetes.io/projected/09246d02-3c49-4a34-a3c4-252d87ade9f1-kube-api-access-5h2fl\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074089 kubelet[3653]: I0707 05:55:55.074066 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09246d02-3c49-4a34-a3c4-252d87ade9f1-clustermesh-secrets\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074269 kubelet[3653]: I0707 05:55:55.074156 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-bpf-maps\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074269 kubelet[3653]: I0707 05:55:55.074195 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-cilium-run\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074375 
kubelet[3653]: I0707 05:55:55.074267 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-hostproc\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074375 kubelet[3653]: I0707 05:55:55.074310 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-cilium-cgroup\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074484 kubelet[3653]: I0707 05:55:55.074374 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-lib-modules\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074484 kubelet[3653]: I0707 05:55:55.074413 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09246d02-3c49-4a34-a3c4-252d87ade9f1-hubble-tls\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074484 kubelet[3653]: I0707 05:55:55.074460 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-xtables-lock\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074645 kubelet[3653]: I0707 05:55:55.074495 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/09246d02-3c49-4a34-a3c4-252d87ade9f1-cilium-ipsec-secrets\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.074645 kubelet[3653]: I0707 05:55:55.074529 3653 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09246d02-3c49-4a34-a3c4-252d87ade9f1-host-proc-sys-kernel\") pod \"cilium-fbfqt\" (UID: \"09246d02-3c49-4a34-a3c4-252d87ade9f1\") " pod="kube-system/cilium-fbfqt" Jul 7 05:55:55.310000 sshd[5468]: Accepted publickey for core from 139.178.89.65 port 58124 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:55.312778 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:55.320916 systemd-logind[2093]: New session 28 of user core. Jul 7 05:55:55.328348 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 05:55:55.372908 containerd[2124]: time="2025-07-07T05:55:55.372412756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbfqt,Uid:09246d02-3c49-4a34-a3c4-252d87ade9f1,Namespace:kube-system,Attempt:0,}" Jul 7 05:55:55.413462 containerd[2124]: time="2025-07-07T05:55:55.412977520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:55:55.413462 containerd[2124]: time="2025-07-07T05:55:55.413180848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:55:55.413462 containerd[2124]: time="2025-07-07T05:55:55.413223340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:55.413920 containerd[2124]: time="2025-07-07T05:55:55.413410744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:55:55.456445 sshd[5468]: pam_unix(sshd:session): session closed for user core Jul 7 05:55:55.473085 systemd[1]: sshd@27-172.31.20.83:22-139.178.89.65:58124.service: Deactivated successfully. Jul 7 05:55:55.482406 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 05:55:55.488324 systemd-logind[2093]: Session 28 logged out. Waiting for processes to exit. Jul 7 05:55:55.503125 systemd[1]: Started sshd@28-172.31.20.83:22-139.178.89.65:58140.service - OpenSSH per-connection server daemon (139.178.89.65:58140). Jul 7 05:55:55.508797 systemd-logind[2093]: Removed session 28. Jul 7 05:55:55.519570 containerd[2124]: time="2025-07-07T05:55:55.519469913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbfqt,Uid:09246d02-3c49-4a34-a3c4-252d87ade9f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\"" Jul 7 05:55:55.532086 containerd[2124]: time="2025-07-07T05:55:55.531803885Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 05:55:55.555179 containerd[2124]: time="2025-07-07T05:55:55.555112025Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"339ecf7b63c80d2f7594f52e155e5d1ea3114ef502f84b380fe0531cb2d2b955\"" Jul 7 05:55:55.557708 containerd[2124]: time="2025-07-07T05:55:55.556813853Z" level=info msg="StartContainer for \"339ecf7b63c80d2f7594f52e155e5d1ea3114ef502f84b380fe0531cb2d2b955\"" Jul 7 05:55:55.662990 containerd[2124]: 
time="2025-07-07T05:55:55.660281885Z" level=info msg="StartContainer for \"339ecf7b63c80d2f7594f52e155e5d1ea3114ef502f84b380fe0531cb2d2b955\" returns successfully" Jul 7 05:55:55.703005 sshd[5516]: Accepted publickey for core from 139.178.89.65 port 58140 ssh2: RSA SHA256:byQh04q5diV0gbNLNbGxy5NKXZJrwSK1WXG9xVxkktU Jul 7 05:55:55.706558 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:55:55.718309 systemd-logind[2093]: New session 29 of user core. Jul 7 05:55:55.721503 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 7 05:55:55.746158 containerd[2124]: time="2025-07-07T05:55:55.745890366Z" level=info msg="shim disconnected" id=339ecf7b63c80d2f7594f52e155e5d1ea3114ef502f84b380fe0531cb2d2b955 namespace=k8s.io Jul 7 05:55:55.746158 containerd[2124]: time="2025-07-07T05:55:55.745964598Z" level=warning msg="cleaning up after shim disconnected" id=339ecf7b63c80d2f7594f52e155e5d1ea3114ef502f84b380fe0531cb2d2b955 namespace=k8s.io Jul 7 05:55:55.746158 containerd[2124]: time="2025-07-07T05:55:55.745985514Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:56.050441 containerd[2124]: time="2025-07-07T05:55:56.050386755Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 05:55:56.074535 containerd[2124]: time="2025-07-07T05:55:56.074458455Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b\"" Jul 7 05:55:56.077403 containerd[2124]: time="2025-07-07T05:55:56.076156527Z" level=info msg="StartContainer for \"a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b\"" Jul 7 05:55:56.163868 containerd[2124]: 
time="2025-07-07T05:55:56.163665940Z" level=info msg="StartContainer for \"a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b\" returns successfully" Jul 7 05:55:56.221504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b-rootfs.mount: Deactivated successfully. Jul 7 05:55:56.224842 containerd[2124]: time="2025-07-07T05:55:56.224027140Z" level=info msg="shim disconnected" id=a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b namespace=k8s.io Jul 7 05:55:56.224842 containerd[2124]: time="2025-07-07T05:55:56.224199496Z" level=warning msg="cleaning up after shim disconnected" id=a1a27229ab076263af2818498bebcb387f76ae651ededb7f48ffe5228a19dd3b namespace=k8s.io Jul 7 05:55:56.224842 containerd[2124]: time="2025-07-07T05:55:56.224225596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:56.460968 kubelet[3653]: E0707 05:55:56.460779 3653 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-4lspd" podUID="0dd083e8-b521-4cc5-aaec-653c08f5f793" Jul 7 05:55:57.058671 containerd[2124]: time="2025-07-07T05:55:57.058551988Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 05:55:57.094064 containerd[2124]: time="2025-07-07T05:55:57.093883864Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3\"" Jul 7 05:55:57.095445 containerd[2124]: time="2025-07-07T05:55:57.095188588Z" level=info msg="StartContainer for 
\"b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3\"" Jul 7 05:55:57.206372 containerd[2124]: time="2025-07-07T05:55:57.206210297Z" level=info msg="StartContainer for \"b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3\" returns successfully" Jul 7 05:55:57.251235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3-rootfs.mount: Deactivated successfully. Jul 7 05:55:57.263183 containerd[2124]: time="2025-07-07T05:55:57.263029265Z" level=info msg="shim disconnected" id=b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3 namespace=k8s.io Jul 7 05:55:57.263480 containerd[2124]: time="2025-07-07T05:55:57.263185373Z" level=warning msg="cleaning up after shim disconnected" id=b17d099bbc46158339bf6a8ceaf4e86893bf424569668aa569e09a2de9de84d3 namespace=k8s.io Jul 7 05:55:57.263480 containerd[2124]: time="2025-07-07T05:55:57.263215133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:57.416583 containerd[2124]: time="2025-07-07T05:55:57.416390742Z" level=info msg="StopPodSandbox for \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\"" Jul 7 05:55:57.416583 containerd[2124]: time="2025-07-07T05:55:57.416558346Z" level=info msg="TearDown network for sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" successfully" Jul 7 05:55:57.416817 containerd[2124]: time="2025-07-07T05:55:57.416587422Z" level=info msg="StopPodSandbox for \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" returns successfully" Jul 7 05:55:57.418068 containerd[2124]: time="2025-07-07T05:55:57.417953082Z" level=info msg="RemovePodSandbox for \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\"" Jul 7 05:55:57.418242 containerd[2124]: time="2025-07-07T05:55:57.418067394Z" level=info msg="Forcibly stopping sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\"" Jul 7 
05:55:57.418242 containerd[2124]: time="2025-07-07T05:55:57.418222926Z" level=info msg="TearDown network for sandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" successfully" Jul 7 05:55:57.425309 containerd[2124]: time="2025-07-07T05:55:57.425236590Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:57.425466 containerd[2124]: time="2025-07-07T05:55:57.425343210Z" level=info msg="RemovePodSandbox \"41ff33801e586fd1e9b55210fe027b631acf09554f05e27244b2f746fc4ed42f\" returns successfully" Jul 7 05:55:57.426260 containerd[2124]: time="2025-07-07T05:55:57.426204906Z" level=info msg="StopPodSandbox for \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\"" Jul 7 05:55:57.426389 containerd[2124]: time="2025-07-07T05:55:57.426348378Z" level=info msg="TearDown network for sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" successfully" Jul 7 05:55:57.426389 containerd[2124]: time="2025-07-07T05:55:57.426373782Z" level=info msg="StopPodSandbox for \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" returns successfully" Jul 7 05:55:57.427318 containerd[2124]: time="2025-07-07T05:55:57.427253358Z" level=info msg="RemovePodSandbox for \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\"" Jul 7 05:55:57.427424 containerd[2124]: time="2025-07-07T05:55:57.427322706Z" level=info msg="Forcibly stopping sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\"" Jul 7 05:55:57.427485 containerd[2124]: time="2025-07-07T05:55:57.427429674Z" level=info msg="TearDown network for sandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" successfully" Jul 7 05:55:57.433623 containerd[2124]: time="2025-07-07T05:55:57.433545474Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 05:55:57.434020 containerd[2124]: time="2025-07-07T05:55:57.433649946Z" level=info msg="RemovePodSandbox \"71f2c49444d7b2ff037fa15fb6e1c499adbd99580fd35eba7ae8f71a17244931\" returns successfully" Jul 7 05:55:57.703021 kubelet[3653]: E0707 05:55:57.702024 3653 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 05:55:58.075378 containerd[2124]: time="2025-07-07T05:55:58.075294725Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 05:55:58.131824 containerd[2124]: time="2025-07-07T05:55:58.123018893Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe\"" Jul 7 05:55:58.131824 containerd[2124]: time="2025-07-07T05:55:58.125717297Z" level=info msg="StartContainer for \"8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe\"" Jul 7 05:55:58.134124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747100983.mount: Deactivated successfully. 
Jul 7 05:55:58.356654 containerd[2124]: time="2025-07-07T05:55:58.355705291Z" level=info msg="StartContainer for \"8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe\" returns successfully" Jul 7 05:55:58.398951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe-rootfs.mount: Deactivated successfully. Jul 7 05:55:58.410150 containerd[2124]: time="2025-07-07T05:55:58.409027639Z" level=info msg="shim disconnected" id=8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe namespace=k8s.io Jul 7 05:55:58.410150 containerd[2124]: time="2025-07-07T05:55:58.409803547Z" level=warning msg="cleaning up after shim disconnected" id=8e35016f38ca8d3cb50cdd2551d9daa0223488a07e7e80fac074f8fa73f096fe namespace=k8s.io Jul 7 05:55:58.410150 containerd[2124]: time="2025-07-07T05:55:58.409837555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:55:58.461087 kubelet[3653]: E0707 05:55:58.461001 3653 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-4lspd" podUID="0dd083e8-b521-4cc5-aaec-653c08f5f793" Jul 7 05:55:59.082037 containerd[2124]: time="2025-07-07T05:55:59.081980526Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 05:55:59.108522 containerd[2124]: time="2025-07-07T05:55:59.108327186Z" level=info msg="CreateContainer within sandbox \"2fe53b83189fa965a9b38a972fbd4ab7cb9e850b161c450672154f3f3b094b69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3bbc8e46ff0442f355dfc3083773d304aac12d4663adc876000d984900af6b66\"" Jul 7 05:55:59.109265 containerd[2124]: 
time="2025-07-07T05:55:59.109212474Z" level=info msg="StartContainer for \"3bbc8e46ff0442f355dfc3083773d304aac12d4663adc876000d984900af6b66\"" Jul 7 05:55:59.213629 containerd[2124]: time="2025-07-07T05:55:59.213453259Z" level=info msg="StartContainer for \"3bbc8e46ff0442f355dfc3083773d304aac12d4663adc876000d984900af6b66\" returns successfully" Jul 7 05:55:59.868066 kubelet[3653]: I0707 05:55:59.867809 3653 setters.go:600] "Node became not ready" node="ip-172-31-20-83" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T05:55:59Z","lastTransitionTime":"2025-07-07T05:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 05:55:59.978846 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 7 05:56:00.129876 kubelet[3653]: I0707 05:56:00.129565 3653 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fbfqt" podStartSLOduration=6.129542335 podStartE2EDuration="6.129542335s" podCreationTimestamp="2025-07-07 05:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:56:00.122719135 +0000 UTC m=+123.013527072" watchObservedRunningTime="2025-07-07 05:56:00.129542335 +0000 UTC m=+123.020350188" Jul 7 05:56:00.461530 kubelet[3653]: E0707 05:56:00.461241 3653 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-4lspd" podUID="0dd083e8-b521-4cc5-aaec-653c08f5f793" Jul 7 05:56:02.462925 kubelet[3653]: E0707 05:56:02.462847 3653 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-4lspd" podUID="0dd083e8-b521-4cc5-aaec-653c08f5f793" Jul 7 05:56:03.005043 update_engine[2095]: I20250707 05:56:03.002843 2095 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 05:56:03.005043 update_engine[2095]: I20250707 05:56:03.002911 2095 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 05:56:03.005043 update_engine[2095]: I20250707 05:56:03.003340 2095 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 05:56:03.008059 update_engine[2095]: I20250707 05:56:03.007275 2095 omaha_request_params.cc:62] Current group set to lts Jul 7 05:56:03.009189 update_engine[2095]: I20250707 05:56:03.008529 2095 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009363 2095 update_attempter.cc:643] Scheduling an action processor start. 
Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009428 2095 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009495 2095 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009620 2095 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009641 2095 omaha_request_action.cc:272] Request: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: Jul 7 05:56:03.010607 update_engine[2095]: I20250707 05:56:03.009658 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 05:56:03.015162 locksmithd[2151]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 05:56:03.022358 update_engine[2095]: I20250707 05:56:03.020912 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 05:56:03.022358 update_engine[2095]: I20250707 05:56:03.021482 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 05:56:03.061518 update_engine[2095]: E20250707 05:56:03.061446 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 05:56:03.061847 update_engine[2095]: I20250707 05:56:03.061798 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 05:56:04.185680 systemd-networkd[1690]: lxc_health: Link UP Jul 7 05:56:04.196321 systemd-networkd[1690]: lxc_health: Gained carrier Jul 7 05:56:04.200923 (udev-worker)[6303]: Network interface NamePolicy= disabled on kernel command line. 
Jul 7 05:56:06.204941 systemd-networkd[1690]: lxc_health: Gained IPv6LL Jul 7 05:56:06.932600 systemd[1]: run-containerd-runc-k8s.io-3bbc8e46ff0442f355dfc3083773d304aac12d4663adc876000d984900af6b66-runc.ExzWs7.mount: Deactivated successfully. Jul 7 05:56:08.867437 ntpd[2080]: Listen normally on 13 lxc_health [fe80::30be:31ff:feb2:608a%14]:123 Jul 7 05:56:08.868045 ntpd[2080]: 7 Jul 05:56:08 ntpd[2080]: Listen normally on 13 lxc_health [fe80::30be:31ff:feb2:608a%14]:123 Jul 7 05:56:11.583267 systemd[1]: run-containerd-runc-k8s.io-3bbc8e46ff0442f355dfc3083773d304aac12d4663adc876000d984900af6b66-runc.q1gqZT.mount: Deactivated successfully. Jul 7 05:56:11.725096 sshd[5516]: pam_unix(sshd:session): session closed for user core Jul 7 05:56:11.737474 systemd[1]: sshd@28-172.31.20.83:22-139.178.89.65:58140.service: Deactivated successfully. Jul 7 05:56:11.748779 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 05:56:11.751550 systemd-logind[2093]: Session 29 logged out. Waiting for processes to exit. Jul 7 05:56:11.756222 systemd-logind[2093]: Removed session 29. Jul 7 05:56:12.999803 update_engine[2095]: I20250707 05:56:12.996798 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 05:56:12.999803 update_engine[2095]: I20250707 05:56:12.997156 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 05:56:12.999803 update_engine[2095]: I20250707 05:56:12.997459 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 7 05:56:13.001031 update_engine[2095]: E20250707 05:56:13.000877 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 05:56:13.001031 update_engine[2095]: I20250707 05:56:13.000985 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 05:56:23.003814 update_engine[2095]: I20250707 05:56:23.003202 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 05:56:23.003814 update_engine[2095]: I20250707 05:56:23.003543 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 05:56:23.004693 update_engine[2095]: I20250707 05:56:23.003859 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 05:56:23.004693 update_engine[2095]: E20250707 05:56:23.004376 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 05:56:23.004693 update_engine[2095]: I20250707 05:56:23.004451 2095 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 05:56:25.634809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a-rootfs.mount: Deactivated successfully. 
Jul 7 05:56:25.679767 containerd[2124]: time="2025-07-07T05:56:25.679615834Z" level=info msg="shim disconnected" id=14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a namespace=k8s.io Jul 7 05:56:25.680539 containerd[2124]: time="2025-07-07T05:56:25.679781182Z" level=warning msg="cleaning up after shim disconnected" id=14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a namespace=k8s.io Jul 7 05:56:25.680539 containerd[2124]: time="2025-07-07T05:56:25.679803826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:56:26.172868 kubelet[3653]: I0707 05:56:26.172632 3653 scope.go:117] "RemoveContainer" containerID="14b57c36c4d7183fb6a01bbf3d5ea238e0c8716ec0f7e786c30b725ace8f0d6a" Jul 7 05:56:26.176793 containerd[2124]: time="2025-07-07T05:56:26.176643969Z" level=info msg="CreateContainer within sandbox \"960e97298d04527447597c77f577f7c49faf247a30df07d3c6a548411f6c115d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 7 05:56:26.201266 containerd[2124]: time="2025-07-07T05:56:26.201131505Z" level=info msg="CreateContainer within sandbox \"960e97298d04527447597c77f577f7c49faf247a30df07d3c6a548411f6c115d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c2f61ea4e0d39ca3e2359a4d4d7f1665cd54df87c2af76ea7ef7b37077048bd4\"" Jul 7 05:56:26.203804 containerd[2124]: time="2025-07-07T05:56:26.201817473Z" level=info msg="StartContainer for \"c2f61ea4e0d39ca3e2359a4d4d7f1665cd54df87c2af76ea7ef7b37077048bd4\"" Jul 7 05:56:26.320995 containerd[2124]: time="2025-07-07T05:56:26.320916405Z" level=info msg="StartContainer for \"c2f61ea4e0d39ca3e2359a4d4d7f1665cd54df87c2af76ea7ef7b37077048bd4\" returns successfully" Jul 7 05:56:30.365942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf-rootfs.mount: Deactivated successfully. 
Jul 7 05:56:30.378244 containerd[2124]: time="2025-07-07T05:56:30.378136358Z" level=info msg="shim disconnected" id=4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf namespace=k8s.io
Jul 7 05:56:30.378244 containerd[2124]: time="2025-07-07T05:56:30.378220970Z" level=warning msg="cleaning up after shim disconnected" id=4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf namespace=k8s.io
Jul 7 05:56:30.378244 containerd[2124]: time="2025-07-07T05:56:30.378244718Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:56:30.682454 kubelet[3653]: E0707 05:56:30.681593 3653 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 7 05:56:31.192252 kubelet[3653]: I0707 05:56:31.191982 3653 scope.go:117] "RemoveContainer" containerID="4f18e1e784916ae7704a46b1299fbbd3acd2f214246b54e2cc0dbd2954681faf"
Jul 7 05:56:31.194775 containerd[2124]: time="2025-07-07T05:56:31.194696882Z" level=info msg="CreateContainer within sandbox \"73b1bb6fca6811ae9ea7d13c25465fc9af4f5d8a043c69b3eeb936add7691d04\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 7 05:56:31.224020 containerd[2124]: time="2025-07-07T05:56:31.223943210Z" level=info msg="CreateContainer within sandbox \"73b1bb6fca6811ae9ea7d13c25465fc9af4f5d8a043c69b3eeb936add7691d04\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"724f36d94f81f76222ae2f0e62a0070de24bba827fcbc219fe91926a2beb3240\""
Jul 7 05:56:31.224718 containerd[2124]: time="2025-07-07T05:56:31.224653310Z" level=info msg="StartContainer for \"724f36d94f81f76222ae2f0e62a0070de24bba827fcbc219fe91926a2beb3240\""
Jul 7 05:56:31.337512 containerd[2124]: time="2025-07-07T05:56:31.337339262Z" level=info msg="StartContainer for \"724f36d94f81f76222ae2f0e62a0070de24bba827fcbc219fe91926a2beb3240\" returns successfully"
Jul 7 05:56:33.005860 update_engine[2095]: I20250707 05:56:33.005771 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 05:56:33.006483 update_engine[2095]: I20250707 05:56:33.006117 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 05:56:33.006483 update_engine[2095]: I20250707 05:56:33.006400 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 05:56:33.009243 update_engine[2095]: E20250707 05:56:33.006929 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007028 2095 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007050 2095 omaha_request_action.cc:617] Omaha request response:
Jul 7 05:56:33.009243 update_engine[2095]: E20250707 05:56:33.007169 2095 omaha_request_action.cc:636] Omaha request network transfer failed.
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007200 2095 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007217 2095 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007233 2095 update_attempter.cc:306] Processing Done.
Jul 7 05:56:33.009243 update_engine[2095]: E20250707 05:56:33.007261 2095 update_attempter.cc:619] Update failed.
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007277 2095 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007292 2095 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007308 2095 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007414 2095 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007468 2095 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 7 05:56:33.009243 update_engine[2095]: I20250707 05:56:33.007489 2095 omaha_request_action.cc:272] Request:
Jul 7 05:56:33.009243 update_engine[2095]:
Jul 7 05:56:33.009243 update_engine[2095]:
Jul 7 05:56:33.010360 locksmithd[2151]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jul 7 05:56:33.010913 update_engine[2095]:
Jul 7 05:56:33.010913 update_engine[2095]:
Jul 7 05:56:33.010913 update_engine[2095]:
Jul 7 05:56:33.010913 update_engine[2095]:
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.007506 2095 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.007782 2095 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008029 2095 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 7 05:56:33.010913 update_engine[2095]: E20250707 05:56:33.008818 2095 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008898 2095 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008915 2095 omaha_request_action.cc:617] Omaha request response:
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008934 2095 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008949 2095 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008966 2095 update_attempter.cc:306] Processing Done.
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.008982 2095 update_attempter.cc:310] Error event sent.
Jul 7 05:56:33.010913 update_engine[2095]: I20250707 05:56:33.009003 2095 update_check_scheduler.cc:74] Next update check in 42m26s
Jul 7 05:56:33.011817 locksmithd[2151]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jul 7 05:56:40.683087 kubelet[3653]: E0707 05:56:40.682562 3653 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-83?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"