Jul 9 23:46:10.098315 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jul 9 23:46:10.098358 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025 Jul 9 23:46:10.098382 kernel: KASLR disabled due to lack of seed Jul 9 23:46:10.098398 kernel: efi: EFI v2.7 by EDK II Jul 9 23:46:10.098413 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598 Jul 9 23:46:10.098427 kernel: secureboot: Secure boot disabled Jul 9 23:46:10.098443 kernel: ACPI: Early table checksum verification disabled Jul 9 23:46:10.098458 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jul 9 23:46:10.098473 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jul 9 23:46:10.098488 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 9 23:46:10.098560 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jul 9 23:46:10.098599 kernel: ACPI: FACS 0x0000000078630000 000040 Jul 9 23:46:10.098615 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 9 23:46:10.098631 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jul 9 23:46:10.098649 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jul 9 23:46:10.098665 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jul 9 23:46:10.098686 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 9 23:46:10.098703 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jul 9 23:46:10.098719 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jul 9 23:46:10.098735 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jul 9 23:46:10.098750 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jul 9 23:46:10.098766 kernel: printk: legacy bootconsole [uart0] enabled Jul 9 23:46:10.098782 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 9 23:46:10.098798 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jul 9 23:46:10.098814 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff] Jul 9 23:46:10.098830 kernel: Zone ranges: Jul 9 23:46:10.098845 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jul 9 23:46:10.098886 kernel: DMA32 empty Jul 9 23:46:10.098902 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jul 9 23:46:10.098918 kernel: Device empty Jul 9 23:46:10.098933 kernel: Movable zone start for each node Jul 9 23:46:10.098948 kernel: Early memory node ranges Jul 9 23:46:10.098965 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jul 9 23:46:10.098980 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jul 9 23:46:10.098996 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jul 9 23:46:10.099011 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jul 9 23:46:10.099026 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jul 9 23:46:10.099042 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jul 9 23:46:10.099058 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jul 9 23:46:10.099078 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Jul 9 23:46:10.099101 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jul 9 23:46:10.099117 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jul 9 23:46:10.099134 kernel: psci: probing for conduit method from ACPI. Jul 9 23:46:10.099150 kernel: psci: PSCIv1.0 detected in firmware. Jul 9 23:46:10.099170 kernel: psci: Using standard PSCI v0.2 function IDs Jul 9 23:46:10.099186 kernel: psci: Trusted OS migration not required Jul 9 23:46:10.099202 kernel: psci: SMC Calling Convention v1.1 Jul 9 23:46:10.099219 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jul 9 23:46:10.099235 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 9 23:46:10.099251 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 9 23:46:10.099268 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 9 23:46:10.099284 kernel: Detected PIPT I-cache on CPU0 Jul 9 23:46:10.099301 kernel: CPU features: detected: GIC system register CPU interface Jul 9 23:46:10.099317 kernel: CPU features: detected: Spectre-v2 Jul 9 23:46:10.099333 kernel: CPU features: detected: Spectre-v3a Jul 9 23:46:10.099353 kernel: CPU features: detected: Spectre-BHB Jul 9 23:46:10.099369 kernel: CPU features: detected: ARM erratum 1742098 Jul 9 23:46:10.099386 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jul 9 23:46:10.099402 kernel: alternatives: applying boot alternatives Jul 9 23:46:10.099421 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116 Jul 9 23:46:10.099438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 9 23:46:10.099455 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 9 23:46:10.099471 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 9 23:46:10.099487 kernel: Fallback order for Node 0: 0 Jul 9 23:46:10.103819 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Jul 9 23:46:10.103855 kernel: Policy zone: Normal Jul 9 23:46:10.103873 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 9 23:46:10.103890 kernel: software IO TLB: area num 2. Jul 9 23:46:10.103908 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jul 9 23:46:10.103925 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 9 23:46:10.103942 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 9 23:46:10.103959 kernel: rcu: RCU event tracing is enabled. Jul 9 23:46:10.103977 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 9 23:46:10.103994 kernel: Trampoline variant of Tasks RCU enabled. Jul 9 23:46:10.104011 kernel: Tracing variant of Tasks RCU enabled. Jul 9 23:46:10.104028 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 9 23:46:10.104044 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 9 23:46:10.104065 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jul 9 23:46:10.104083 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jul 9 23:46:10.104100 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 9 23:46:10.104117 kernel: GICv3: 96 SPIs implemented Jul 9 23:46:10.104134 kernel: GICv3: 0 Extended SPIs implemented Jul 9 23:46:10.104150 kernel: Root IRQ handler: gic_handle_irq Jul 9 23:46:10.104167 kernel: GICv3: GICv3 features: 16 PPIs Jul 9 23:46:10.104184 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 9 23:46:10.104200 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jul 9 23:46:10.104217 kernel: ITS [mem 0x10080000-0x1009ffff] Jul 9 23:46:10.104234 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Jul 9 23:46:10.104251 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Jul 9 23:46:10.104273 kernel: GICv3: using LPI property table @0x0000000400110000 Jul 9 23:46:10.104290 kernel: ITS: Using hypervisor restricted LPI range [128] Jul 9 23:46:10.104307 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Jul 9 23:46:10.104324 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 9 23:46:10.104340 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jul 9 23:46:10.104357 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jul 9 23:46:10.104374 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jul 9 23:46:10.104391 kernel: Console: colour dummy device 80x25 Jul 9 23:46:10.104409 kernel: printk: legacy console [tty1] enabled Jul 9 23:46:10.104426 kernel: ACPI: Core revision 20240827 Jul 9 23:46:10.104447 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jul 9 23:46:10.104464 kernel: pid_max: default: 32768 minimum: 301 Jul 9 23:46:10.104481 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 9 23:46:10.104537 kernel: landlock: Up and running. Jul 9 23:46:10.104562 kernel: SELinux: Initializing. Jul 9 23:46:10.104581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:46:10.104599 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 9 23:46:10.104616 kernel: rcu: Hierarchical SRCU implementation. Jul 9 23:46:10.104634 kernel: rcu: Max phase no-delay instances is 400. Jul 9 23:46:10.104658 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 9 23:46:10.104675 kernel: Remapping and enabling EFI services. Jul 9 23:46:10.104692 kernel: smp: Bringing up secondary CPUs ... Jul 9 23:46:10.104708 kernel: Detected PIPT I-cache on CPU1 Jul 9 23:46:10.104726 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jul 9 23:46:10.104743 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Jul 9 23:46:10.104761 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jul 9 23:46:10.104778 kernel: smp: Brought up 1 node, 2 CPUs Jul 9 23:46:10.104795 kernel: SMP: Total of 2 processors activated. 
Jul 9 23:46:10.104816 kernel: CPU: All CPU(s) started at EL1 Jul 9 23:46:10.104844 kernel: CPU features: detected: 32-bit EL0 Support Jul 9 23:46:10.104861 kernel: CPU features: detected: 32-bit EL1 Support Jul 9 23:46:10.104883 kernel: CPU features: detected: CRC32 instructions Jul 9 23:46:10.104900 kernel: alternatives: applying system-wide alternatives Jul 9 23:46:10.104920 kernel: Memory: 3812964K/4030464K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 212540K reserved, 0K cma-reserved) Jul 9 23:46:10.104937 kernel: devtmpfs: initialized Jul 9 23:46:10.104955 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 9 23:46:10.104978 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 9 23:46:10.104995 kernel: 16928 pages in range for non-PLT usage Jul 9 23:46:10.105013 kernel: 508448 pages in range for PLT usage Jul 9 23:46:10.105030 kernel: pinctrl core: initialized pinctrl subsystem Jul 9 23:46:10.105047 kernel: SMBIOS 3.0.0 present. Jul 9 23:46:10.105065 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jul 9 23:46:10.105082 kernel: DMI: Memory slots populated: 0/0 Jul 9 23:46:10.105099 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 9 23:46:10.105117 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 9 23:46:10.105140 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 9 23:46:10.105158 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 9 23:46:10.105175 kernel: audit: initializing netlink subsys (disabled) Jul 9 23:46:10.105193 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1 Jul 9 23:46:10.105210 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 9 23:46:10.105228 kernel: cpuidle: using governor menu Jul 9 23:46:10.105246 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jul 9 23:46:10.105263 kernel: ASID allocator initialised with 65536 entries Jul 9 23:46:10.105280 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 9 23:46:10.105302 kernel: Serial: AMBA PL011 UART driver Jul 9 23:46:10.105319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 9 23:46:10.105337 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 9 23:46:10.105355 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 9 23:46:10.105372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 9 23:46:10.105390 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 9 23:46:10.105408 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 9 23:46:10.105426 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 9 23:46:10.105443 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 9 23:46:10.105465 kernel: ACPI: Added _OSI(Module Device) Jul 9 23:46:10.105483 kernel: ACPI: Added _OSI(Processor Device) Jul 9 23:46:10.109647 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 9 23:46:10.109688 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 9 23:46:10.109706 kernel: ACPI: Interpreter enabled Jul 9 23:46:10.109724 kernel: ACPI: Using GIC for interrupt routing Jul 9 23:46:10.109742 kernel: ACPI: MCFG table detected, 1 entries Jul 9 23:46:10.109759 kernel: ACPI: CPU0 has been hot-added Jul 9 23:46:10.109777 kernel: ACPI: CPU1 has been hot-added Jul 9 23:46:10.109804 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jul 9 23:46:10.110088 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 9 23:46:10.110275 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 9 23:46:10.110456 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 9 23:46:10.114292 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jul 9 23:46:10.116221 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jul 9 23:46:10.116270 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jul 9 23:46:10.116298 kernel: acpiphp: Slot [1] registered Jul 9 23:46:10.116317 kernel: acpiphp: Slot [2] registered Jul 9 23:46:10.116335 kernel: acpiphp: Slot [3] registered Jul 9 23:46:10.116352 kernel: acpiphp: Slot [4] registered Jul 9 23:46:10.116369 kernel: acpiphp: Slot [5] registered Jul 9 23:46:10.116387 kernel: acpiphp: Slot [6] registered Jul 9 23:46:10.116404 kernel: acpiphp: Slot [7] registered Jul 9 23:46:10.116421 kernel: acpiphp: Slot [8] registered Jul 9 23:46:10.116438 kernel: acpiphp: Slot [9] registered Jul 9 23:46:10.116455 kernel: acpiphp: Slot [10] registered Jul 9 23:46:10.116477 kernel: acpiphp: Slot [11] registered Jul 9 23:46:10.116518 kernel: acpiphp: Slot [12] registered Jul 9 23:46:10.116542 kernel: acpiphp: Slot [13] registered Jul 9 23:46:10.116560 kernel: acpiphp: Slot [14] registered Jul 9 23:46:10.116578 kernel: acpiphp: Slot [15] registered Jul 9 23:46:10.116596 kernel: acpiphp: Slot [16] registered Jul 9 23:46:10.116613 kernel: acpiphp: Slot [17] registered Jul 9 23:46:10.116630 kernel: acpiphp: Slot [18] registered Jul 9 23:46:10.116648 kernel: acpiphp: Slot [19] registered Jul 9 23:46:10.116671 kernel: acpiphp: Slot [20] registered Jul 9 23:46:10.116688 kernel: acpiphp: Slot [21] registered Jul 9 23:46:10.116705 kernel: acpiphp: Slot [22] registered Jul 9 
23:46:10.116722 kernel: acpiphp: Slot [23] registered Jul 9 23:46:10.116740 kernel: acpiphp: Slot [24] registered Jul 9 23:46:10.116757 kernel: acpiphp: Slot [25] registered Jul 9 23:46:10.116774 kernel: acpiphp: Slot [26] registered Jul 9 23:46:10.116791 kernel: acpiphp: Slot [27] registered Jul 9 23:46:10.116808 kernel: acpiphp: Slot [28] registered Jul 9 23:46:10.116830 kernel: acpiphp: Slot [29] registered Jul 9 23:46:10.116848 kernel: acpiphp: Slot [30] registered Jul 9 23:46:10.116865 kernel: acpiphp: Slot [31] registered Jul 9 23:46:10.116882 kernel: PCI host bridge to bus 0000:00 Jul 9 23:46:10.117120 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jul 9 23:46:10.117291 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 9 23:46:10.117457 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jul 9 23:46:10.117657 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jul 9 23:46:10.117881 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Jul 9 23:46:10.118101 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Jul 9 23:46:10.118292 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Jul 9 23:46:10.118491 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Jul 9 23:46:10.118709 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Jul 9 23:46:10.118917 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 9 23:46:10.119134 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Jul 9 23:46:10.119322 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Jul 9 23:46:10.120798 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Jul 9 23:46:10.121010 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Jul 9 23:46:10.121204 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 9 23:46:10.121398 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Jul 9 23:46:10.122710 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Jul 9 23:46:10.122967 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Jul 9 23:46:10.123160 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Jul 9 23:46:10.123352 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Jul 9 23:46:10.124732 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jul 9 23:46:10.124953 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 9 23:46:10.125121 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jul 9 23:46:10.125146 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 9 23:46:10.125175 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 9 23:46:10.125194 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 9 23:46:10.125212 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 9 23:46:10.125230 kernel: iommu: Default domain type: Translated Jul 9 23:46:10.125247 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 9 23:46:10.125265 kernel: efivars: Registered efivars operations Jul 9 23:46:10.125282 kernel: vgaarb: loaded Jul 9 23:46:10.125300 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 9 23:46:10.125317 kernel: VFS: Disk quotas dquot_6.6.0 Jul 9 23:46:10.125339 kernel: VFS: 
Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 9 23:46:10.125356 kernel: pnp: PnP ACPI init Jul 9 23:46:10.130208 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jul 9 23:46:10.130257 kernel: pnp: PnP ACPI: found 1 devices Jul 9 23:46:10.130276 kernel: NET: Registered PF_INET protocol family Jul 9 23:46:10.130294 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 9 23:46:10.130312 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 9 23:46:10.130330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 9 23:46:10.130349 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 9 23:46:10.130376 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 9 23:46:10.130394 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 9 23:46:10.130412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:46:10.130431 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 9 23:46:10.130448 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 9 23:46:10.130467 kernel: PCI: CLS 0 bytes, default 64 Jul 9 23:46:10.130484 kernel: kvm [1]: HYP mode not available Jul 9 23:46:10.130557 kernel: Initialise system trusted keyrings Jul 9 23:46:10.130581 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 9 23:46:10.130606 kernel: Key type asymmetric registered Jul 9 23:46:10.130624 kernel: Asymmetric key parser 'x509' registered Jul 9 23:46:10.130642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 9 23:46:10.130660 kernel: io scheduler mq-deadline registered Jul 9 23:46:10.130678 kernel: io scheduler kyber registered Jul 9 23:46:10.130695 kernel: io scheduler bfq registered Jul 9 23:46:10.130975 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jul 9 23:46:10.131008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 9 23:46:10.131033 kernel: ACPI: button: Power Button [PWRB] Jul 9 23:46:10.131052 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jul 9 23:46:10.131070 kernel: ACPI: button: Sleep Button [SLPB] Jul 9 23:46:10.131088 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 9 23:46:10.131107 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 9 23:46:10.131309 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jul 9 23:46:10.131336 kernel: printk: legacy console [ttyS0] disabled Jul 9 23:46:10.131355 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jul 9 23:46:10.131374 kernel: printk: legacy console [ttyS0] enabled Jul 9 23:46:10.131398 kernel: printk: legacy bootconsole [uart0] disabled Jul 9 23:46:10.131415 kernel: thunder_xcv, ver 1.0 Jul 9 23:46:10.131433 kernel: thunder_bgx, ver 1.0 Jul 9 23:46:10.131450 kernel: nicpf, ver 1.0 Jul 9 23:46:10.131468 kernel: nicvf, ver 1.0 Jul 9 23:46:10.133750 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 9 23:46:10.133944 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:46:09 UTC (1752104769) Jul 9 23:46:10.133969 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 9 23:46:10.133997 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Jul 9 23:46:10.134015 kernel: NET: Registered PF_INET6 protocol family Jul 9 23:46:10.134033 kernel: 
watchdog: NMI not fully supported Jul 9 23:46:10.134051 kernel: watchdog: Hard watchdog permanently disabled Jul 9 23:46:10.134068 kernel: Segment Routing with IPv6 Jul 9 23:46:10.134086 kernel: In-situ OAM (IOAM) with IPv6 Jul 9 23:46:10.134104 kernel: NET: Registered PF_PACKET protocol family Jul 9 23:46:10.134121 kernel: Key type dns_resolver registered Jul 9 23:46:10.134139 kernel: registered taskstats version 1 Jul 9 23:46:10.134160 kernel: Loading compiled-in X.509 certificates Jul 9 23:46:10.134178 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a' Jul 9 23:46:10.134196 kernel: Demotion targets for Node 0: null Jul 9 23:46:10.134213 kernel: Key type .fscrypt registered Jul 9 23:46:10.134230 kernel: Key type fscrypt-provisioning registered Jul 9 23:46:10.134247 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 9 23:46:10.134265 kernel: ima: Allocated hash algorithm: sha1 Jul 9 23:46:10.134282 kernel: ima: No architecture policies found Jul 9 23:46:10.134299 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 9 23:46:10.134320 kernel: clk: Disabling unused clocks Jul 9 23:46:10.134338 kernel: PM: genpd: Disabling unused power domains Jul 9 23:46:10.134356 kernel: Warning: unable to open an initial console. Jul 9 23:46:10.134374 kernel: Freeing unused kernel memory: 39488K Jul 9 23:46:10.134391 kernel: Run /init as init process Jul 9 23:46:10.134408 kernel: with arguments: Jul 9 23:46:10.134425 kernel: /init Jul 9 23:46:10.134442 kernel: with environment: Jul 9 23:46:10.134459 kernel: HOME=/ Jul 9 23:46:10.134481 kernel: TERM=linux Jul 9 23:46:10.134527 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 9 23:46:10.134550 systemd[1]: Successfully made /usr/ read-only. Jul 9 23:46:10.136566 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:46:10.136599 systemd[1]: Detected virtualization amazon. Jul 9 23:46:10.136618 systemd[1]: Detected architecture arm64. Jul 9 23:46:10.136637 systemd[1]: Running in initrd. Jul 9 23:46:10.136656 systemd[1]: No hostname configured, using default hostname. Jul 9 23:46:10.136684 systemd[1]: Hostname set to . Jul 9 23:46:10.136703 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:46:10.136722 systemd[1]: Queued start job for default target initrd.target. Jul 9 23:46:10.136741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:46:10.136761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:46:10.136781 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 9 23:46:10.136801 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:46:10.136821 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 9 23:46:10.136847 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 9 23:46:10.136870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 9 23:46:10.136890 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 9 23:46:10.136910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:46:10.136930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:46:10.136949 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:46:10.136972 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:46:10.136992 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:46:10.137011 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:46:10.137030 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:46:10.137050 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:46:10.137070 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 9 23:46:10.137089 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 9 23:46:10.137109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:46:10.137129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:46:10.137152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:46:10.137172 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:46:10.137191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 9 23:46:10.137211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:46:10.137230 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 9 23:46:10.137251 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 9 23:46:10.137271 systemd[1]: Starting systemd-fsck-usr.service... Jul 9 23:46:10.137290 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:46:10.137314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:46:10.137335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:46:10.137354 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 9 23:46:10.137419 systemd-journald[257]: Collecting audit messages is disabled. Jul 9 23:46:10.137468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:46:10.137490 systemd[1]: Finished systemd-fsck-usr.service. Jul 9 23:46:10.137588 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:46:10.137610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 9 23:46:10.137629 kernel: Bridge firewalling registered Jul 9 23:46:10.137659 systemd-journald[257]: Journal started Jul 9 23:46:10.137711 systemd-journald[257]: Runtime Journal (/run/log/journal/ec2840fb1acf0b3a99f13625a86225a2) is 8M, max 75.3M, 67.3M free. 
Jul 9 23:46:10.088796 systemd-modules-load[259]: Inserted module 'overlay' Jul 9 23:46:10.125466 systemd-modules-load[259]: Inserted module 'br_netfilter' Jul 9 23:46:10.156253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:46:10.156305 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:46:10.152765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:46:10.162613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:46:10.170033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:46:10.189811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 9 23:46:10.200206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:46:10.211574 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:46:10.216872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:46:10.247224 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 9 23:46:10.260867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:46:10.266553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:46:10.276081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:46:10.294904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:46:10.305037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 9 23:46:10.344591 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116 Jul 9 23:46:10.378397 systemd-resolved[294]: Positive Trust Anchors: Jul 9 23:46:10.378436 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:46:10.378533 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:46:10.523537 kernel: SCSI subsystem initialized Jul 9 23:46:10.531537 kernel: Loading iSCSI transport class v2.0-870. Jul 9 23:46:10.543822 kernel: iscsi: registered transport (tcp) Jul 9 23:46:10.565770 kernel: iscsi: registered transport (qla4xxx) Jul 9 23:46:10.565843 kernel: QLogic iSCSI HBA Driver Jul 9 23:46:10.599681 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 9 23:46:10.625417 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:46:10.633361 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:46:10.659537 kernel: random: crng init done Jul 9 23:46:10.660021 systemd-resolved[294]: Defaulting to hostname 'linux'. Jul 9 23:46:10.663373 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:46:10.665580 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:46:10.738070 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 9 23:46:10.744724 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 9 23:46:10.830548 kernel: raid6: neonx8 gen() 6514 MB/s Jul 9 23:46:10.847545 kernel: raid6: neonx4 gen() 6540 MB/s Jul 9 23:46:10.864541 kernel: raid6: neonx2 gen() 5440 MB/s Jul 9 23:46:10.881546 kernel: raid6: neonx1 gen() 3941 MB/s Jul 9 23:46:10.898542 kernel: raid6: int64x8 gen() 3626 MB/s Jul 9 23:46:10.915550 kernel: raid6: int64x4 gen() 3713 MB/s Jul 9 23:46:10.932543 kernel: raid6: int64x2 gen() 3597 MB/s Jul 9 23:46:10.950579 kernel: raid6: int64x1 gen() 2767 MB/s Jul 9 23:46:10.950635 kernel: raid6: using algorithm neonx4 gen() 6540 MB/s Jul 9 23:46:10.969546 kernel: raid6: .... xor() 4861 MB/s, rmw enabled Jul 9 23:46:10.969618 kernel: raid6: using neon recovery algorithm Jul 9 23:46:10.978367 kernel: xor: measuring software checksum speed Jul 9 23:46:10.978428 kernel: 8regs : 12954 MB/sec Jul 9 23:46:10.980876 kernel: 32regs : 12048 MB/sec Jul 9 23:46:10.980919 kernel: arm64_neon : 9066 MB/sec Jul 9 23:46:10.980943 kernel: xor: using function: 8regs (12954 MB/sec) Jul 9 23:46:11.074544 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 9 23:46:11.086724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:46:11.093126 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:46:11.154052 systemd-udevd[506]: Using default interface naming scheme 'v255'. Jul 9 23:46:11.166020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:46:11.174714 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 9 23:46:11.220553 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation Jul 9 23:46:11.266863 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:46:11.273996 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:46:11.401803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:46:11.409392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 9 23:46:11.586668 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 9 23:46:11.586756 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 9 23:46:11.593736 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 9 23:46:11.593812 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 9 23:46:11.600984 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 9 23:46:11.601373 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 9 23:46:11.603000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:46:11.603302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 9 23:46:11.612323 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 9 23:46:11.608556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:46:11.618738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:46:11.631955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 9 23:46:11.631995 kernel: GPT:9289727 != 16777215 Jul 9 23:46:11.632020 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 9 23:46:11.626426 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:46:11.646664 kernel: GPT:9289727 != 16777215 Jul 9 23:46:11.646706 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:46:11.646732 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 9 23:46:11.646758 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a5:75:01:1a:c5 Jul 9 23:46:11.646492 (udev-worker)[557]: Network interface NamePolicy= disabled on kernel command line. Jul 9 23:46:11.680611 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:46:11.698552 kernel: nvme nvme0: using unchecked data buffer Jul 9 23:46:11.857234 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 9 23:46:11.922230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 9 23:46:11.922928 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:46:11.945588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 9 23:46:11.964680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 9 23:46:11.964872 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 9 23:46:11.965600 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:46:11.965697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:46:11.966058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:46:11.970721 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:46:11.979285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:46:12.030764 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 9 23:46:12.036626 disk-uuid[688]: Primary Header is updated. Jul 9 23:46:12.036626 disk-uuid[688]: Secondary Entries is updated. Jul 9 23:46:12.036626 disk-uuid[688]: Secondary Header is updated. Jul 9 23:46:12.036200 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:46:13.079536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 9 23:46:13.081160 disk-uuid[695]: The operation has completed successfully. Jul 9 23:46:13.301331 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:46:13.304602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:46:13.354189 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:46:13.389181 sh[957]: Success Jul 9 23:46:13.417996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 9 23:46:13.418075 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:46:13.420563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 23:46:13.434563 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 23:46:13.551823 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:46:13.566680 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:46:13.580080 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 23:46:13.627210 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 23:46:13.627336 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (980) Jul 9 23:46:13.633077 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b Jul 9 23:46:13.633152 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:46:13.633178 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 23:46:13.787295 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:46:13.790911 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:46:13.795618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:46:13.797121 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:46:13.806977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:46:13.867543 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Jul 9 23:46:13.873645 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:46:13.873722 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:46:13.873759 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 9 23:46:13.899621 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:46:13.903029 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:46:13.912792 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:46:14.010145 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:46:14.018921 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:46:14.099365 systemd-networkd[1152]: lo: Link UP Jul 9 23:46:14.099887 systemd-networkd[1152]: lo: Gained carrier Jul 9 23:46:14.104194 systemd-networkd[1152]: Enumeration completed Jul 9 23:46:14.104571 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:46:14.106015 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:14.106024 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:46:14.112645 systemd[1]: Reached target network.target - Network. 
Jul 9 23:46:14.126139 systemd-networkd[1152]: eth0: Link UP Jul 9 23:46:14.126150 systemd-networkd[1152]: eth0: Gained carrier Jul 9 23:46:14.126173 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:14.151618 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.27.216/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 9 23:46:14.455960 ignition[1080]: Ignition 2.21.0 Jul 9 23:46:14.456579 ignition[1080]: Stage: fetch-offline Jul 9 23:46:14.457612 ignition[1080]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:14.457639 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:14.465199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:46:14.458469 ignition[1080]: Ignition finished successfully Jul 9 23:46:14.475080 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 9 23:46:14.521235 ignition[1162]: Ignition 2.21.0 Jul 9 23:46:14.521597 ignition[1162]: Stage: fetch Jul 9 23:46:14.522199 ignition[1162]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:14.522237 ignition[1162]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:14.522951 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:14.537287 ignition[1162]: PUT result: OK Jul 9 23:46:14.541394 ignition[1162]: parsed url from cmdline: "" Jul 9 23:46:14.541636 ignition[1162]: no config URL provided Jul 9 23:46:14.542916 ignition[1162]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:46:14.542970 ignition[1162]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:46:14.543481 ignition[1162]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:14.551382 ignition[1162]: PUT result: OK Jul 9 23:46:14.551526 ignition[1162]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 9 23:46:14.555755 ignition[1162]: GET result: OK Jul 9 23:46:14.555965 ignition[1162]: parsing config with SHA512: cc86687fe9bac2d65c228567baf8415bb8d49f7d15d5995de3bb57f7351012068a790b343d7ee01d363c80d25694dcc9717e15ba24bad9979ba21002f25b68ee Jul 9 23:46:14.568488 unknown[1162]: fetched base config from "system" Jul 9 23:46:14.568539 unknown[1162]: fetched base config from "system" Jul 9 23:46:14.568553 unknown[1162]: fetched user config from "aws" Jul 9 23:46:14.572345 ignition[1162]: fetch: fetch complete Jul 9 23:46:14.572358 ignition[1162]: fetch: fetch passed Jul 9 23:46:14.572449 ignition[1162]: Ignition finished successfully Jul 9 23:46:14.581648 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 9 23:46:14.587720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 9 23:46:14.647014 ignition[1168]: Ignition 2.21.0 Jul 9 23:46:14.647044 ignition[1168]: Stage: kargs Jul 9 23:46:14.648852 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:14.648879 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:14.650193 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:14.655082 ignition[1168]: PUT result: OK Jul 9 23:46:14.666093 ignition[1168]: kargs: kargs passed Jul 9 23:46:14.666386 ignition[1168]: Ignition finished successfully Jul 9 23:46:14.673556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:46:14.677827 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 9 23:46:14.736280 ignition[1175]: Ignition 2.21.0 Jul 9 23:46:14.736313 ignition[1175]: Stage: disks Jul 9 23:46:14.738020 ignition[1175]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:14.738050 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:14.740773 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:14.749756 ignition[1175]: PUT result: OK Jul 9 23:46:14.755472 ignition[1175]: disks: disks passed Jul 9 23:46:14.755612 ignition[1175]: Ignition finished successfully Jul 9 23:46:14.761172 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:46:14.766244 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:46:14.771155 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:46:14.776686 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:46:14.779271 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:46:14.785834 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:46:14.791466 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:46:14.866605 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 23:46:14.873834 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:46:14.881370 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 23:46:15.009527 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none. Jul 9 23:46:15.010904 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:46:15.014900 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:46:15.021592 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:46:15.036909 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:46:15.042491 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:46:15.042593 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:46:15.042644 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:46:15.074536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202) Jul 9 23:46:15.075414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:46:15.082961 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:46:15.083018 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:46:15.083045 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 9 23:46:15.086946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:46:15.102626 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:46:15.225705 systemd-networkd[1152]: eth0: Gained IPv6LL Jul 9 23:46:15.549092 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:46:15.559388 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:46:15.569109 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:46:15.578418 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:46:15.870657 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:46:15.875880 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:46:15.887707 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:46:15.912672 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:46:15.916265 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:46:15.952635 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:46:15.971563 ignition[1315]: INFO : Ignition 2.21.0 Jul 9 23:46:15.973724 ignition[1315]: INFO : Stage: mount Jul 9 23:46:15.976043 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:15.978670 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:15.978670 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:15.986706 ignition[1315]: INFO : PUT result: OK Jul 9 23:46:15.992748 ignition[1315]: INFO : mount: mount passed Jul 9 23:46:15.994687 ignition[1315]: INFO : Ignition finished successfully Jul 9 23:46:15.999814 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:46:16.005999 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:46:16.037642 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:46:16.088532 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327) Jul 9 23:46:16.093716 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:46:16.093793 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:46:16.093821 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 9 23:46:16.105006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:46:16.166475 ignition[1344]: INFO : Ignition 2.21.0 Jul 9 23:46:16.166475 ignition[1344]: INFO : Stage: files Jul 9 23:46:16.171109 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:16.171109 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:16.171109 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:16.185102 ignition[1344]: INFO : PUT result: OK Jul 9 23:46:16.185102 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:46:16.198774 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:46:16.198774 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:46:16.209988 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:46:16.213271 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:46:16.216923 unknown[1344]: wrote ssh authorized keys file for user: core Jul 9 23:46:16.219712 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:46:16.222914 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 9 23:46:16.222914 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 9 23:46:16.335224 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 23:46:16.495603 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 9 23:46:16.495603 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:46:16.504006 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 9 23:46:16.955894 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 23:46:17.097557 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:46:17.097557 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:46:17.097557 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:46:17.097557 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:46:17.113550 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:46:17.113550 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:46:17.113550 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:46:17.113550 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:46:17.113550 
ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:46:17.133753 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:46:17.133753 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:46:17.133753 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:46:17.147668 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:46:17.147668 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:46:17.157865 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 9 23:46:17.821802 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 23:46:18.170237 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 9 23:46:18.170237 ignition[1344]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 9 23:46:18.178057 ignition[1344]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:46:18.185539 ignition[1344]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:46:18.185539 ignition[1344]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 9 23:46:18.185539 ignition[1344]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:46:18.196954 ignition[1344]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:46:18.196954 ignition[1344]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:46:18.196954 ignition[1344]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:46:18.196954 ignition[1344]: INFO : files: files passed Jul 9 23:46:18.196954 ignition[1344]: INFO : Ignition finished successfully Jul 9 23:46:18.200413 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:46:18.206278 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 23:46:18.231295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:46:18.258184 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:46:18.262593 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 9 23:46:18.276102 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:46:18.276102 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:46:18.283410 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:46:18.289295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:46:18.295860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:46:18.301999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:46:18.375786 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:46:18.376257 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:46:18.384700 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:46:18.389690 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:46:18.392181 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:46:18.394319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:46:18.451596 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:46:18.459322 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 23:46:18.499142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:46:18.499592 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:46:18.507383 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:46:18.510225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:46:18.510670 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:46:18.522293 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:46:18.522833 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:46:18.531855 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:46:18.534759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:46:18.542410 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:46:18.545579 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:46:18.550094 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:46:18.557069 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:46:18.560392 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:46:18.569240 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:46:18.573702 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:46:18.577149 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:46:18.579376 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:46:18.584602 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:46:18.587201 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:46:18.594629 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 9 23:46:18.598924 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:46:18.602078 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:46:18.602319 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:46:18.611172 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:46:18.611622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:46:18.619535 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:46:18.619964 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:46:18.627323 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:46:18.631120 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:46:18.636700 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:46:18.637018 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:46:18.639941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:46:18.640192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:46:18.661040 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:46:18.665950 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:46:18.696848 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:46:18.707656 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:46:18.710095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:46:18.718543 ignition[1398]: INFO : Ignition 2.21.0 Jul 9 23:46:18.718543 ignition[1398]: INFO : Stage: umount Jul 9 23:46:18.722197 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:46:18.722197 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 9 23:46:18.722197 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 9 23:46:18.732807 ignition[1398]: INFO : PUT result: OK Jul 9 23:46:18.742223 ignition[1398]: INFO : umount: umount passed Jul 9 23:46:18.742223 ignition[1398]: INFO : Ignition finished successfully Jul 9 23:46:18.746283 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:46:18.746516 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:46:18.752454 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:46:18.752574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:46:18.758604 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:46:18.758847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:46:18.765412 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 23:46:18.765533 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 23:46:18.771751 systemd[1]: Stopped target network.target - Network. Jul 9 23:46:18.773897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:46:18.773985 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:46:18.780267 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:46:18.782355 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:46:18.789245 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jul 9 23:46:18.792237 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:46:18.794320 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:46:18.798685 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:46:18.798872 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:46:18.805987 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:46:18.806054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:46:18.808898 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:46:18.808993 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 23:46:18.816407 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:46:18.816486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:46:18.817035 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:46:18.817117 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:46:18.817685 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:46:18.817862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:46:18.861467 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:46:18.861733 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:46:18.874119 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:46:18.874750 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:46:18.876566 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:46:18.889378 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:46:18.890921 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 23:46:18.897273 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:46:18.897520 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:46:18.906049 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:46:18.910577 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:46:18.910691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:46:18.913845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:46:18.913964 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:46:18.932669 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 23:46:18.932767 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 23:46:18.935421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 23:46:18.935529 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:46:18.942546 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:46:18.956992 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 23:46:18.961143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:46:18.980671 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 23:46:18.988811 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:46:18.997466 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 9 23:46:18.997672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 23:46:19.000855 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 23:46:19.000929 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:46:19.010995 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 23:46:19.011113 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 23:46:19.019699 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 9 23:46:19.019829 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 23:46:19.026701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 23:46:19.027222 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 23:46:19.039251 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 23:46:19.043032 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 23:46:19.043174 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:46:19.046342 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 23:46:19.046466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:46:19.051207 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 23:46:19.051317 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:46:19.054871 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:46:19.054978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:46:19.058713 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:46:19.058850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:46:19.077048 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 23:46:19.077187 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 9 23:46:19.077274 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 23:46:19.077367 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 23:46:19.079307 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 23:46:19.079548 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 23:46:19.089489 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 23:46:19.089965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 23:46:19.096604 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 23:46:19.105968 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 23:46:19.155898 systemd[1]: Switching root. Jul 9 23:46:19.219061 systemd-journald[257]: Journal stopped Jul 9 23:46:21.935280 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). 
Jul 9 23:46:21.935406 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 23:46:21.935456 kernel: SELinux: policy capability open_perms=1 Jul 9 23:46:21.935486 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 23:46:21.936209 kernel: SELinux: policy capability always_check_network=0 Jul 9 23:46:21.936248 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 23:46:21.936277 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 23:46:21.936305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 23:46:21.936337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 23:46:21.936374 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 23:46:21.936411 kernel: audit: type=1403 audit(1752104779.828:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 23:46:21.936442 systemd[1]: Successfully loaded SELinux policy in 88.414ms. Jul 9 23:46:21.937578 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 28.513ms. Jul 9 23:46:21.937630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 23:46:21.937663 systemd[1]: Detected virtualization amazon. Jul 9 23:46:21.937691 systemd[1]: Detected architecture arm64. Jul 9 23:46:21.937719 systemd[1]: Detected first boot. Jul 9 23:46:21.937755 systemd[1]: Initializing machine ID from VM UUID. Jul 9 23:46:21.937786 zram_generator::config[1441]: No configuration found. Jul 9 23:46:21.937818 kernel: NET: Registered PF_VSOCK protocol family Jul 9 23:46:21.937847 systemd[1]: Populated /etc with preset unit settings. Jul 9 23:46:21.937880 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 23:46:21.937910 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 23:46:21.937940 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 23:46:21.937971 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 23:46:21.938006 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 23:46:21.938036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 23:46:21.938071 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 23:46:21.938101 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 9 23:46:21.938130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 23:46:21.938158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 23:46:21.938188 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 23:46:21.938218 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 23:46:21.938248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:46:21.938283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:46:21.938311 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 23:46:21.938339 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jul 9 23:46:21.938367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 23:46:21.938399 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 23:46:21.938428 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 9 23:46:21.938458 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:46:21.938536 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:46:21.938576 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 23:46:21.938612 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 23:46:21.938641 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 23:46:21.938674 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 23:46:21.938707 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:46:21.938738 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:46:21.938768 systemd[1]: Reached target slices.target - Slice Units. Jul 9 23:46:21.938819 systemd[1]: Reached target swap.target - Swaps. Jul 9 23:46:21.938851 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 23:46:21.938885 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 23:46:21.938914 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 23:46:21.938945 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:46:21.938975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 23:46:21.939015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 23:46:21.939045 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 23:46:21.939079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 23:46:21.939108 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 23:46:21.939138 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 23:46:21.939175 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 23:46:21.939204 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 23:46:21.939232 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 23:46:21.939264 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 23:46:21.939296 systemd[1]: Reached target machines.target - Containers. Jul 9 23:46:21.939327 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 23:46:21.939357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:46:21.939387 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 23:46:21.939419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 23:46:21.939450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:46:21.939481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 9 23:46:21.939588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:46:21.939621 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 23:46:21.939649 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:46:21.939679 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 23:46:21.939708 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 23:46:21.939736 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 23:46:21.939771 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 23:46:21.939799 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 23:46:21.939830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:46:21.939860 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 23:46:21.939888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 9 23:46:21.939916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 23:46:21.939947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 23:46:21.939978 kernel: loop: module loaded Jul 9 23:46:21.940010 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 23:46:21.940040 kernel: fuse: init (API version 7.41) Jul 9 23:46:21.940068 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 23:46:21.940107 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 23:46:21.940138 systemd[1]: Stopped verity-setup.service. Jul 9 23:46:21.940166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 23:46:21.940195 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 23:46:21.940224 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 23:46:21.940252 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 23:46:21.940280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 23:46:21.940309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 23:46:21.940341 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 23:46:21.940371 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 23:46:21.940399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 23:46:21.940439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:46:21.940470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:46:21.949361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:46:21.949428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:46:21.949467 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 23:46:21.949568 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 23:46:21.949605 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:46:21.949636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 9 23:46:21.949670 kernel: ACPI: bus type drm_connector registered Jul 9 23:46:21.949700 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:46:21.949734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:46:21.949767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 23:46:21.949797 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 23:46:21.949827 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 23:46:21.949861 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 23:46:21.949893 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 23:46:21.949922 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 23:46:21.949951 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:46:21.949979 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 23:46:21.950011 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 23:46:21.950046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:46:21.950120 systemd-journald[1520]: Collecting audit messages is disabled. Jul 9 23:46:21.950173 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 23:46:21.950203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:46:21.950236 systemd-journald[1520]: Journal started Jul 9 23:46:21.950288 systemd-journald[1520]: Runtime Journal (/run/log/journal/ec2840fb1acf0b3a99f13625a86225a2) is 8M, max 75.3M, 67.3M free. Jul 9 23:46:21.953621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 23:46:21.199238 systemd[1]: Queued start job for default target multi-user.target. Jul 9 23:46:21.222414 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 9 23:46:21.967659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:46:21.223242 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 9 23:46:21.985922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:46:21.997557 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 23:46:22.008561 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 23:46:22.008688 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 23:46:22.019382 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 23:46:22.026652 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 9 23:46:22.029872 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 23:46:22.067962 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 23:46:22.103637 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jul 9 23:46:22.125624 kernel: loop0: detected capacity change from 0 to 203944 Jul 9 23:46:22.129269 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 23:46:22.132051 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 23:46:22.141772 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 23:46:22.154417 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 23:46:22.200111 systemd-journald[1520]: Time spent on flushing to /var/log/journal/ec2840fb1acf0b3a99f13625a86225a2 is 104.633ms for 937 entries. Jul 9 23:46:22.200111 systemd-journald[1520]: System Journal (/var/log/journal/ec2840fb1acf0b3a99f13625a86225a2) is 8M, max 195.6M, 187.6M free. Jul 9 23:46:22.322604 systemd-journald[1520]: Received client request to flush runtime journal. Jul 9 23:46:22.227452 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 23:46:22.231617 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 23:46:22.247643 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:46:22.268701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:46:22.281258 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jul 9 23:46:22.281282 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jul 9 23:46:22.298628 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 23:46:22.305904 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 23:46:22.327876 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 23:46:22.407343 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 9 23:46:22.415796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 23:46:22.442655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 23:46:22.485571 kernel: loop1: detected capacity change from 0 to 107312 Jul 9 23:46:22.485432 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 9 23:46:22.485476 systemd-tmpfiles[1593]: ACLs are not supported, ignoring. Jul 9 23:46:22.499078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 23:46:22.604649 kernel: loop2: detected capacity change from 0 to 61240 Jul 9 23:46:22.759549 kernel: loop3: detected capacity change from 0 to 138376 Jul 9 23:46:22.880574 kernel: loop4: detected capacity change from 0 to 203944 Jul 9 23:46:22.921553 kernel: loop5: detected capacity change from 0 to 107312 Jul 9 23:46:22.938601 kernel: loop6: detected capacity change from 0 to 61240 Jul 9 23:46:22.963595 kernel: loop7: detected capacity change from 0 to 138376 Jul 9 23:46:22.981973 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 9 23:46:22.984075 (sd-merge)[1602]: Merged extensions into '/usr'. Jul 9 23:46:22.996901 systemd[1]: Reload requested from client PID 1556 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 23:46:22.996931 systemd[1]: Reloading... Jul 9 23:46:23.202215 zram_generator::config[1630]: No configuration found. 
Jul 9 23:46:23.442707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:23.645606 systemd[1]: Reloading finished in 647 ms. Jul 9 23:46:23.646730 ldconfig[1549]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 23:46:23.670287 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 23:46:23.673713 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 23:46:23.677353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 9 23:46:23.695016 systemd[1]: Starting ensure-sysext.service... Jul 9 23:46:23.705770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 23:46:23.714866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 23:46:23.752191 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jul 9 23:46:23.752226 systemd[1]: Reloading... Jul 9 23:46:23.808074 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 23:46:23.808152 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 23:46:23.808778 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 23:46:23.809366 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 23:46:23.811759 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 23:46:23.812559 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 9 23:46:23.812742 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jul 9 23:46:23.829926 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:46:23.829961 systemd-tmpfiles[1682]: Skipping /boot Jul 9 23:46:23.835382 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jul 9 23:46:23.889981 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 23:46:23.890022 systemd-tmpfiles[1682]: Skipping /boot Jul 9 23:46:23.929714 zram_generator::config[1710]: No configuration found. Jul 9 23:46:24.287751 (udev-worker)[1724]: Network interface NamePolicy= disabled on kernel command line. Jul 9 23:46:24.419223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:24.679777 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 9 23:46:24.680841 systemd[1]: Reloading finished in 927 ms. Jul 9 23:46:24.739385 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 23:46:24.898754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 23:46:24.985058 systemd[1]: Finished ensure-sysext.service. Jul 9 23:46:25.019268 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jul 9 23:46:25.027103 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 23:46:25.030202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 23:46:25.034064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 23:46:25.039000 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 23:46:25.044877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 23:46:25.052069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 23:46:25.054987 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 23:46:25.055298 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 23:46:25.060100 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 23:46:25.069105 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:46:25.080106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 23:46:25.082839 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 23:46:25.093154 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 23:46:25.106350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:46:25.210121 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 23:46:25.249839 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 23:46:25.252623 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 23:46:25.272966 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 23:46:25.276735 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 23:46:25.277140 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 23:46:25.280362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 23:46:25.282020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 23:46:25.285756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 23:46:25.286151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 23:46:25.353901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 23:46:25.354322 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 23:46:25.424466 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 23:46:25.434187 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 23:46:25.454822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 9 23:46:25.475106 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 23:46:25.505472 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jul 9 23:46:25.524636 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:46:25.537645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 23:46:25.541210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 23:46:25.552221 augenrules[1945]: No rules Jul 9 23:46:25.555849 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:46:25.556519 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:46:25.568451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 23:46:25.589157 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 23:46:25.738360 systemd-networkd[1888]: lo: Link UP Jul 9 23:46:25.738386 systemd-networkd[1888]: lo: Gained carrier Jul 9 23:46:25.742054 systemd-networkd[1888]: Enumeration completed Jul 9 23:46:25.742816 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:46:25.744402 systemd-resolved[1889]: Positive Trust Anchors: Jul 9 23:46:25.744639 systemd-resolved[1889]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 23:46:25.744707 systemd-resolved[1889]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 23:46:25.745746 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:25.745756 systemd-networkd[1888]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:46:25.748055 systemd-networkd[1888]: eth0: Link UP Jul 9 23:46:25.748612 systemd-networkd[1888]: eth0: Gained carrier Jul 9 23:46:25.748652 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:46:25.752230 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 23:46:25.764004 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 23:46:25.771782 systemd-resolved[1889]: Defaulting to hostname 'linux'. Jul 9 23:46:25.772630 systemd-networkd[1888]: eth0: DHCPv4 address 172.31.27.216/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 9 23:46:25.777778 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 23:46:25.783619 systemd[1]: Reached target network.target - Network. Jul 9 23:46:25.788270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:46:25.791027 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:46:25.793595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jul 9 23:46:25.798294 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 23:46:25.801553 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 23:46:25.804377 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 23:46:25.807620 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 23:46:25.810695 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 23:46:25.810760 systemd[1]: Reached target paths.target - Path Units. Jul 9 23:46:25.812856 systemd[1]: Reached target timers.target - Timer Units. Jul 9 23:46:25.817401 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 23:46:25.823979 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 23:46:25.833066 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 23:46:25.836330 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 23:46:25.839538 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 23:46:25.846486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 23:46:25.849731 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 23:46:25.856365 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 23:46:25.859532 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 23:46:25.861906 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:46:25.864169 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:46:25.864231 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 23:46:25.874170 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 23:46:25.883943 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 9 23:46:25.889793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 23:46:25.897894 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 23:46:25.908094 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 23:46:25.920171 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 23:46:25.922590 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 23:46:25.927130 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 23:46:25.936220 systemd[1]: Started ntpd.service - Network Time Service. Jul 9 23:46:25.942041 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 23:46:25.953295 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 9 23:46:25.961351 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 23:46:25.972315 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 23:46:25.989157 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 9 23:46:25.993874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 23:46:26.001133 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 23:46:26.018618 jq[1968]: false Jul 9 23:46:26.012026 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 23:46:26.022891 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 23:46:26.031607 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 9 23:46:26.037019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 23:46:26.041406 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 23:46:26.041965 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 23:46:26.091388 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 23:46:26.093263 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 23:46:26.167757 jq[1980]: true Jul 9 23:46:26.172177 (ntainerd)[1986]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 23:46:26.173975 tar[1985]: linux-arm64/helm Jul 9 23:46:26.206871 extend-filesystems[1969]: Found /dev/nvme0n1p6 Jul 9 23:46:26.240861 dbus-daemon[1966]: [system] SELinux support is enabled Jul 9 23:46:26.243969 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 23:46:26.254186 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 23:46:26.255685 extend-filesystems[1969]: Found /dev/nvme0n1p9 Jul 9 23:46:26.262997 coreos-metadata[1965]: Jul 09 23:46:26.257 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 9 23:46:26.262997 coreos-metadata[1965]: Jul 09 23:46:26.262 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 9 23:46:26.267086 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 23:46:26.270647 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 23:46:26.270708 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 23:46:26.274555 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 23:46:26.274599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 9 23:46:26.287276 update_engine[1979]: I20250709 23:46:26.281644 1979 main.cc:92] Flatcar Update Engine starting Jul 9 23:46:26.287990 coreos-metadata[1965]: Jul 09 23:46:26.282 INFO Fetch successful Jul 9 23:46:26.287990 coreos-metadata[1965]: Jul 09 23:46:26.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 9 23:46:26.287461 dbus-daemon[1966]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1888 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 9 23:46:26.288316 coreos-metadata[1965]: Jul 09 23:46:26.288 INFO Fetch successful Jul 9 23:46:26.288316 coreos-metadata[1965]: Jul 09 23:46:26.288 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 9 23:46:26.295440 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 9 23:46:26.297996 coreos-metadata[1965]: Jul 09 23:46:26.297 INFO Fetch successful Jul 9 23:46:26.297996 coreos-metadata[1965]: Jul 09 23:46:26.297 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 9 23:46:26.300153 extend-filesystems[1969]: Checking size of /dev/nvme0n1p9 Jul 9 23:46:26.309434 coreos-metadata[1965]: Jul 09 23:46:26.309 INFO Fetch successful Jul 9 23:46:26.309434 coreos-metadata[1965]: Jul 09 23:46:26.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 9 23:46:26.311976 coreos-metadata[1965]: Jul 09 23:46:26.311 INFO Fetch failed with 404: resource not found Jul 9 23:46:26.311976 coreos-metadata[1965]: Jul 09 23:46:26.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 9 23:46:26.319730 coreos-metadata[1965]: Jul 09 23:46:26.318 INFO Fetch successful Jul 9 23:46:26.319730 coreos-metadata[1965]: Jul 09 23:46:26.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 9 23:46:26.322784 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 9 23:46:26.325576 coreos-metadata[1965]: Jul 09 23:46:26.323 INFO Fetch successful Jul 9 23:46:26.325576 coreos-metadata[1965]: Jul 09 23:46:26.323 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 9 23:46:26.325576 coreos-metadata[1965]: Jul 09 23:46:26.325 INFO Fetch successful Jul 9 23:46:26.325576 coreos-metadata[1965]: Jul 09 23:46:26.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 9 23:46:26.332794 coreos-metadata[1965]: Jul 09 23:46:26.328 INFO Fetch successful Jul 9 23:46:26.332794 coreos-metadata[1965]: Jul 09 23:46:26.328 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 9 23:46:26.332794 coreos-metadata[1965]: Jul 09 23:46:26.329 INFO Fetch successful Jul 9 23:46:26.337540 jq[2010]: true Jul 9 23:46:26.339771 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 9 23:46:26.344984 systemd[1]: Started update-engine.service - Update Engine. Jul 9 23:46:26.351969 update_engine[1979]: I20250709 23:46:26.348805 1979 update_check_scheduler.cc:74] Next update check in 3m21s Jul 9 23:46:26.352794 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 9 23:46:26.385078 extend-filesystems[1969]: Resized partition /dev/nvme0n1p9 Jul 9 23:46:26.394834 extend-filesystems[2028]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 23:46:26.425707 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 9 23:46:26.439669 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:34:42 UTC 2025 (1): Starting Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: ntpd 4.2.8p17@1.4004-o Wed Jul 9 21:34:42 UTC 2025 (1): Starting Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: ---------------------------------------------------- Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: corporation. Support and training for ntp-4 are Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: available at https://www.nwtime.org/support Jul 9 23:46:26.443633 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: ---------------------------------------------------- Jul 9 23:46:26.442990 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 9 23:46:26.455635 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: proto: precision = 0.096 usec (-23) Jul 9 23:46:26.455635 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: basedate set to 2025-06-27 Jul 9 23:46:26.455635 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: gps base set to 2025-06-29 (week 2373) Jul 9 23:46:26.443012 ntpd[1971]: ---------------------------------------------------- Jul 9 23:46:26.443030 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Jul 9 23:46:26.443047 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 9 23:46:26.443065 ntpd[1971]: corporation. 
Support and training for ntp-4 are Jul 9 23:46:26.443081 ntpd[1971]: available at https://www.nwtime.org/support Jul 9 23:46:26.443098 ntpd[1971]: ---------------------------------------------------- Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listen normally on 3 eth0 172.31.27.216:123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listen normally on 4 lo [::1]:123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: bind(21) AF_INET6 fe80::4a5:75ff:fe01:1ac5%2#123 flags 0x11 failed: Cannot assign requested address Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: unable to create socket on eth0 (5) for fe80::4a5:75ff:fe01:1ac5%2#123 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: failed to init interface for address fe80::4a5:75ff:fe01:1ac5%2 Jul 9 23:46:26.457551 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: Listening on routing socket on fd #21 for interface updates Jul 9 23:46:26.446334 ntpd[1971]: proto: precision = 0.096 usec (-23) Jul 9 23:46:26.449645 ntpd[1971]: basedate set to 2025-06-27 Jul 9 23:46:26.449712 ntpd[1971]: gps base set to 2025-06-29 (week 2373) Jul 9 23:46:26.456474 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Jul 9 23:46:26.456602 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 9 23:46:26.456908 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Jul 9 23:46:26.456983 ntpd[1971]: Listen normally on 3 eth0 172.31.27.216:123 Jul 9 23:46:26.457051 ntpd[1971]: Listen normally on 4 lo [::1]:123 Jul 9 23:46:26.457132 ntpd[1971]: bind(21) AF_INET6 fe80::4a5:75ff:fe01:1ac5%2#123 flags 0x11 failed: Cannot assign requested address Jul 9 23:46:26.457181 ntpd[1971]: unable to create socket on eth0 (5) for fe80::4a5:75ff:fe01:1ac5%2#123 Jul 9 23:46:26.457207 ntpd[1971]: failed to init interface for address fe80::4a5:75ff:fe01:1ac5%2 Jul 9 23:46:26.457265 ntpd[1971]: Listening on routing socket on fd #21 for interface updates Jul 9 23:46:26.481611 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 9 23:46:26.489071 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 9 23:46:26.489071 ntpd[1971]: 9 Jul 23:46:26 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 9 23:46:26.481682 ntpd[1971]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 9 23:46:26.496205 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 23:46:26.591531 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 9 23:46:26.598664 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 9 23:46:26.601742 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 23:46:26.617144 extend-filesystems[2028]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 9 23:46:26.617144 extend-filesystems[2028]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 23:46:26.617144 extend-filesystems[2028]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jul 9 23:46:26.635041 extend-filesystems[1969]: Resized filesystem in /dev/nvme0n1p9 Jul 9 23:46:26.623780 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 23:46:26.625630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 23:46:26.710632 bash[2057]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:46:26.719383 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 23:46:26.732109 systemd[1]: Starting sshkeys.service... Jul 9 23:46:26.796271 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 23:46:26.796334 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 9 23:46:26.800409 systemd-logind[1977]: New seat seat0. Jul 9 23:46:26.805464 systemd[1]: Started systemd-logind.service - User Login Management. Jul 9 23:46:26.970318 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 9 23:46:26.979371 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 9 23:46:27.155041 containerd[1986]: time="2025-07-09T23:46:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 23:46:27.156353 containerd[1986]: time="2025-07-09T23:46:27.156263351Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 9 23:46:27.165125 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 9 23:46:27.174302 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 9 23:46:27.177741 dbus-daemon[1966]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 9 23:46:27.191337 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 9 23:46:27.219252 containerd[1986]: time="2025-07-09T23:46:27.218315580Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.308µs" Jul 9 23:46:27.219252 containerd[1986]: time="2025-07-09T23:46:27.218394000Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 23:46:27.219252 containerd[1986]: time="2025-07-09T23:46:27.218440512Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 23:46:27.224183 containerd[1986]: time="2025-07-09T23:46:27.221857188Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 23:46:27.224183 containerd[1986]: time="2025-07-09T23:46:27.221934840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 23:46:27.224183 containerd[1986]: time="2025-07-09T23:46:27.222000900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224183 containerd[1986]: time="2025-07-09T23:46:27.222138348Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224183 containerd[1986]: time="2025-07-09T23:46:27.222170088Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224769 containerd[1986]: time="2025-07-09T23:46:27.224557080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224769 containerd[1986]: time="2025-07-09T23:46:27.224613168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224769 containerd[1986]: time="2025-07-09T23:46:27.224646048Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224769 containerd[1986]: time="2025-07-09T23:46:27.224669844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 23:46:27.224952 containerd[1986]: time="2025-07-09T23:46:27.224896752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 23:46:27.225727 containerd[1986]: time="2025-07-09T23:46:27.225337908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:46:27.225727 containerd[1986]: time="2025-07-09T23:46:27.225445056Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 23:46:27.225874 containerd[1986]: time="2025-07-09T23:46:27.225477324Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 23:46:27.228039 containerd[1986]: time="2025-07-09T23:46:27.225958908Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 23:46:27.228039 containerd[1986]: 
time="2025-07-09T23:46:27.226641744Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 23:46:27.228039 containerd[1986]: time="2025-07-09T23:46:27.226878228Z" level=info msg="metadata content store policy set" policy=shared Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.240980688Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241113036Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241154520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241191936Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241222668Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241252848Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241283556Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241312872Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241348680Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241377168Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241402560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 23:46:27.241651 containerd[1986]: time="2025-07-09T23:46:27.241433688Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241734408Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241794000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241836204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241864668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241893744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241922412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241950720Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.241979604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.242008632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.242036508Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 23:46:27.242219 containerd[1986]: time="2025-07-09T23:46:27.242071704Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 23:46:27.245855 containerd[1986]: time="2025-07-09T23:46:27.242473032Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 23:46:27.245855 containerd[1986]: time="2025-07-09T23:46:27.244874172Z" level=info msg="Start snapshots syncer" Jul 9 23:46:27.245855 containerd[1986]: time="2025-07-09T23:46:27.244973016Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 23:46:27.250201 containerd[1986]: time="2025-07-09T23:46:27.248858508Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 9 23:46:27.253853 containerd[1986]: time="2025-07-09T23:46:27.250254648Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 9 23:46:27.254137 containerd[1986]: time="2025-07-09T23:46:27.253898928Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 9 23:46:27.257120 containerd[1986]: 
time="2025-07-09T23:46:27.254328552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 9 23:46:27.257120 containerd[1986]: time="2025-07-09T23:46:27.254431188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 9 23:46:27.257120 containerd[1986]: time="2025-07-09T23:46:27.254488704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 9 23:46:27.257120 containerd[1986]: time="2025-07-09T23:46:27.255700236Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 9 23:46:27.257120 containerd[1986]: time="2025-07-09T23:46:27.255778080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 9 23:46:27.258544 containerd[1986]: time="2025-07-09T23:46:27.257582724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 9 23:46:27.258544 containerd[1986]: time="2025-07-09T23:46:27.257693292Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.257792508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260274588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260350260Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260520456Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260573292Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260634408Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260664228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260716188Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260745948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.260800452Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.261010728Z" level=info msg="runtime interface created" Jul 9 23:46:27.261538 containerd[1986]: time="2025-07-09T23:46:27.261034656Z" level=info msg="created NRI interface" Jul 9 23:46:27.262131 containerd[1986]: time="2025-07-09T23:46:27.261058212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 9 23:46:27.262131 containerd[1986]: 
time="2025-07-09T23:46:27.261614448Z" level=info msg="Connect containerd service" Jul 9 23:46:27.263986 containerd[1986]: time="2025-07-09T23:46:27.263647608Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 23:46:27.270726 containerd[1986]: time="2025-07-09T23:46:27.270113196Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:46:27.323669 systemd-networkd[1888]: eth0: Gained IPv6LL Jul 9 23:46:27.362296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 23:46:27.366676 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 23:46:27.376833 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 9 23:46:27.392847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:27.405612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 23:46:27.503604 coreos-metadata[2081]: Jul 09 23:46:27.502 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 9 23:46:27.511286 coreos-metadata[2081]: Jul 09 23:46:27.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 9 23:46:27.515534 coreos-metadata[2081]: Jul 09 23:46:27.513 INFO Fetch successful Jul 9 23:46:27.515534 coreos-metadata[2081]: Jul 09 23:46:27.514 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 9 23:46:27.518312 coreos-metadata[2081]: Jul 09 23:46:27.518 INFO Fetch successful Jul 9 23:46:27.526750 unknown[2081]: wrote ssh authorized keys file for user: core Jul 9 23:46:27.639615 update-ssh-keys[2165]: Updated "/home/core/.ssh/authorized_keys" Jul 9 23:46:27.644627 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 9 23:46:27.666795 systemd[1]: Finished sshkeys.service. Jul 9 23:46:27.834224 locksmithd[2021]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 23:46:27.855865 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 23:46:27.970187 amazon-ssm-agent[2133]: Initializing new seelog logger Jul 9 23:46:27.970187 amazon-ssm-agent[2133]: New Seelog Logger Creation Complete Jul 9 23:46:27.970187 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.970187 amazon-ssm-agent[2133]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.970187 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 processing appconfig overrides Jul 9 23:46:27.972552 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.972552 amazon-ssm-agent[2133]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.972552 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 processing appconfig overrides Jul 9 23:46:27.973650 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.973650 amazon-ssm-agent[2133]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 9 23:46:27.973650 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 processing appconfig overrides Jul 9 23:46:27.976012 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9711 INFO Proxy environment variables: Jul 9 23:46:27.988691 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.988691 amazon-ssm-agent[2133]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:27.988691 amazon-ssm-agent[2133]: 2025/07/09 23:46:27 processing appconfig overrides Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.014587607Z" level=info msg="Start subscribing containerd event" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.014750963Z" level=info msg="Start recovering state" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.014944595Z" level=info msg="Start event monitor" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.014977571Z" level=info msg="Start cni network conf syncer for default" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.014998043Z" level=info msg="Start streaming server" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.015020207Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.015038291Z" level=info msg="runtime interface starting up..." Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.015053735Z" level=info msg="starting plugins..." Jul 9 23:46:28.017134 containerd[1986]: time="2025-07-09T23:46:28.015084875Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 9 23:46:28.023540 containerd[1986]: time="2025-07-09T23:46:28.023436180Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 23:46:28.026137 containerd[1986]: time="2025-07-09T23:46:28.025718328Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 23:46:28.050629 containerd[1986]: time="2025-07-09T23:46:28.046678740Z" level=info msg="containerd successfully booted in 0.894718s" Jul 9 23:46:28.047762 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 23:46:28.085547 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9711 INFO https_proxy: Jul 9 23:46:28.157353 polkitd[2106]: Started polkitd version 126 Jul 9 23:46:28.182941 polkitd[2106]: Loading rules from directory /etc/polkit-1/rules.d Jul 9 23:46:28.185784 polkitd[2106]: Loading rules from directory /run/polkit-1/rules.d Jul 9 23:46:28.185891 polkitd[2106]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 9 23:46:28.186960 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9711 INFO http_proxy: Jul 9 23:46:28.187157 polkitd[2106]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 9 23:46:28.187398 polkitd[2106]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 9 23:46:28.187626 polkitd[2106]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 9 23:46:28.190407 polkitd[2106]: Finished loading, compiling and executing 2 rules Jul 9 23:46:28.191711 systemd[1]: Started polkit.service - Authorization Manager. 
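The earlier containerd error about no network config in /etc/cni/net.d persists until some CNI configuration is installed; normally a pod-network add-on drops its own. A minimal illustrative bridge config, with placeholder file name and subnet, would look roughly like this:
sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF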
Jul 9 23:46:28.191842 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 9 23:46:28.194839 polkitd[2106]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 9 23:46:28.258120 systemd-hostnamed[2019]: Hostname set to (transient) Jul 9 23:46:28.258388 systemd-resolved[1889]: System hostname changed to 'ip-172-31-27-216'. Jul 9 23:46:28.287677 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9711 INFO no_proxy: Jul 9 23:46:28.386527 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9714 INFO Checking if agent identity type OnPrem can be assumed Jul 9 23:46:28.488582 amazon-ssm-agent[2133]: 2025-07-09 23:46:27.9727 INFO Checking if agent identity type EC2 can be assumed Jul 9 23:46:28.586312 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2614 INFO Agent will take identity from EC2 Jul 9 23:46:28.685826 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2643 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jul 9 23:46:28.784935 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2643 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 9 23:46:28.832538 amazon-ssm-agent[2133]: 2025/07/09 23:46:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:28.832796 amazon-ssm-agent[2133]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 9 23:46:28.834816 amazon-ssm-agent[2133]: 2025/07/09 23:46:28 processing appconfig overrides Jul 9 23:46:28.862806 tar[1985]: linux-arm64/LICENSE Jul 9 23:46:28.863329 tar[1985]: linux-arm64/README.md Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2643 INFO [amazon-ssm-agent] Starting Core Agent Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2643 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2644 INFO [Registrar] Starting registrar module Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2706 INFO [EC2Identity] Checking disk for registration info Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2707 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.2708 INFO [EC2Identity] Generating registration keypair Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.7946 INFO [EC2Identity] Checking write access before registering Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.7953 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8322 INFO [EC2Identity] EC2 registration was successful. Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8322 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
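Hypothetical follow-up checks for the SSM registration and the transient hostname change recorded above (tool availability assumed):
systemctl status amazon-ssm-agent --no-pager
hostnamectl --transient          # should print ip-172-31-27-216 per the log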
Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8323 INFO [CredentialRefresher] credentialRefresher has started Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8324 INFO [CredentialRefresher] Starting credentials refresher loop Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8807 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 9 23:46:28.881806 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8810 INFO [CredentialRefresher] Credentials ready Jul 9 23:46:28.885463 amazon-ssm-agent[2133]: 2025-07-09 23:46:28.8813 INFO [CredentialRefresher] Next credential rotation will be in 29.9999898403 minutes Jul 9 23:46:28.903619 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 23:46:29.444239 ntpd[1971]: Listen normally on 6 eth0 [fe80::4a5:75ff:fe01:1ac5%2]:123 Jul 9 23:46:29.445919 ntpd[1971]: 9 Jul 23:46:29 ntpd[1971]: Listen normally on 6 eth0 [fe80::4a5:75ff:fe01:1ac5%2]:123 Jul 9 23:46:29.796764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:29.818202 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:29.936524 amazon-ssm-agent[2133]: 2025-07-09 23:46:29.9361 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 9 23:46:30.038637 amazon-ssm-agent[2133]: 2025-07-09 23:46:29.9453 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2222) started Jul 9 23:46:30.140062 amazon-ssm-agent[2133]: 2025-07-09 23:46:29.9453 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 9 23:46:30.994308 kubelet[2219]: E0709 23:46:30.994223 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:30.999802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:31.000884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:31.003630 systemd[1]: kubelet.service: Consumed 1.572s CPU time, 257.2M memory peak. Jul 9 23:46:31.025594 sshd_keygen[2005]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 23:46:31.066102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 23:46:31.074834 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 23:46:31.080071 systemd[1]: Started sshd@0-172.31.27.216:22-139.178.89.65:57718.service - OpenSSH per-connection server daemon (139.178.89.65:57718). Jul 9 23:46:31.111472 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 23:46:31.114667 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 23:46:31.120985 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 23:46:31.166147 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 23:46:31.174028 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 23:46:31.182704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 9 23:46:31.185627 systemd[1]: Reached target getty.target - Login Prompts. 
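The kubelet failure above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml only appears once the node is initialized or joined (for example by kubeadm), so the unit exits until then. Some illustrative checks, not part of the distro's own tooling:
ls -l /var/lib/kubelet/config.yaml || echo "node not bootstrapped yet"
systemctl cat kubelet.service | grep -i -- --config    # where the unit expects the config file
journalctl -u kubelet.service -n 20 --no-pager         # the same error as in the journal above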
Jul 9 23:46:31.188041 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 23:46:31.193137 systemd[1]: Startup finished in 3.772s (kernel) + 10.116s (initrd) + 11.453s (userspace) = 25.342s. Jul 9 23:46:31.352060 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 57718 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:31.355923 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:31.384589 systemd-logind[1977]: New session 1 of user core. Jul 9 23:46:31.387054 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 23:46:31.389584 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 23:46:31.440865 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 23:46:31.446159 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 23:46:31.473098 (systemd)[2263]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 23:46:31.479150 systemd-logind[1977]: New session c1 of user core. Jul 9 23:46:31.802487 systemd[2263]: Queued start job for default target default.target. Jul 9 23:46:31.824980 systemd[2263]: Created slice app.slice - User Application Slice. Jul 9 23:46:31.825231 systemd[2263]: Reached target paths.target - Paths. Jul 9 23:46:31.825406 systemd[2263]: Reached target timers.target - Timers. Jul 9 23:46:31.828220 systemd[2263]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 23:46:31.863849 systemd[2263]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 23:46:31.864278 systemd[2263]: Reached target sockets.target - Sockets. Jul 9 23:46:31.864559 systemd[2263]: Reached target basic.target - Basic System. Jul 9 23:46:31.864834 systemd[2263]: Reached target default.target - Main User Target. Jul 9 23:46:31.864886 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 23:46:31.865086 systemd[2263]: Startup finished in 369ms. Jul 9 23:46:31.875790 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 23:46:32.030080 systemd[1]: Started sshd@1-172.31.27.216:22-139.178.89.65:55462.service - OpenSSH per-connection server daemon (139.178.89.65:55462). Jul 9 23:46:32.229704 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 55462 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:32.233064 sshd-session[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:32.242612 systemd-logind[1977]: New session 2 of user core. Jul 9 23:46:32.249849 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 9 23:46:32.385682 sshd[2276]: Connection closed by 139.178.89.65 port 55462 Jul 9 23:46:32.386606 sshd-session[2274]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:32.394228 systemd[1]: sshd@1-172.31.27.216:22-139.178.89.65:55462.service: Deactivated successfully. Jul 9 23:46:32.397774 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 23:46:32.399661 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit. Jul 9 23:46:32.403370 systemd-logind[1977]: Removed session 2. Jul 9 23:46:32.426261 systemd[1]: Started sshd@2-172.31.27.216:22-139.178.89.65:55468.service - OpenSSH per-connection server daemon (139.178.89.65:55468). 
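A hypothetical way to cross-check the publickey logins for core recorded above against the key that update-ssh-keys installed:
ssh-keygen -lf /home/core/.ssh/authorized_keys    # fingerprints should include the SHA256:s7oSF... key seen in the sshd entries
sudo sshd -T | grep -Ei 'authorizedkeysfile|passwordauthentication'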
Jul 9 23:46:32.628237 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 55468 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:32.631253 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:32.641618 systemd-logind[1977]: New session 3 of user core. Jul 9 23:46:32.650811 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 23:46:32.771558 sshd[2284]: Connection closed by 139.178.89.65 port 55468 Jul 9 23:46:32.772362 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:32.778929 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit. Jul 9 23:46:32.779130 systemd[1]: sshd@2-172.31.27.216:22-139.178.89.65:55468.service: Deactivated successfully. Jul 9 23:46:32.783952 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 23:46:32.789546 systemd-logind[1977]: Removed session 3. Jul 9 23:46:32.809975 systemd[1]: Started sshd@3-172.31.27.216:22-139.178.89.65:55482.service - OpenSSH per-connection server daemon (139.178.89.65:55482). Jul 9 23:46:33.012571 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 55482 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:33.016050 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:33.027074 systemd-logind[1977]: New session 4 of user core. Jul 9 23:46:33.034911 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 23:46:33.166009 sshd[2292]: Connection closed by 139.178.89.65 port 55482 Jul 9 23:46:33.165115 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:33.172050 systemd[1]: sshd@3-172.31.27.216:22-139.178.89.65:55482.service: Deactivated successfully. Jul 9 23:46:33.175812 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 23:46:33.179848 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit. Jul 9 23:46:33.182865 systemd-logind[1977]: Removed session 4. Jul 9 23:46:33.199648 systemd[1]: Started sshd@4-172.31.27.216:22-139.178.89.65:55484.service - OpenSSH per-connection server daemon (139.178.89.65:55484). Jul 9 23:46:33.390100 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 55484 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:33.393181 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:33.403614 systemd-logind[1977]: New session 5 of user core. Jul 9 23:46:33.410830 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 9 23:46:33.561852 sudo[2301]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 23:46:33.562486 sudo[2301]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:33.582107 sudo[2301]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:33.605590 sshd[2300]: Connection closed by 139.178.89.65 port 55484 Jul 9 23:46:33.606629 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:33.612978 systemd[1]: sshd@4-172.31.27.216:22-139.178.89.65:55484.service: Deactivated successfully. Jul 9 23:46:33.615739 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 23:46:33.620234 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit. Jul 9 23:46:33.623278 systemd-logind[1977]: Removed session 5. 
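The sudo invocation above runs setenforce 1, switching SELinux to enforcing mode; illustrative status checks would be:
getenforce      # expected: Enforcing after the setenforce 1 above
sestatus        # fuller view of loaded policy and current mode, if the tool is installed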
Jul 9 23:46:33.638522 systemd[1]: Started sshd@5-172.31.27.216:22-139.178.89.65:55488.service - OpenSSH per-connection server daemon (139.178.89.65:55488). Jul 9 23:46:33.832103 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 55488 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:33.835126 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:33.845061 systemd-logind[1977]: New session 6 of user core. Jul 9 23:46:33.855872 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 9 23:46:33.961338 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 23:46:33.962489 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:33.970625 sudo[2311]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:33.980781 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 23:46:33.981402 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:34.000133 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 23:46:34.061884 augenrules[2333]: No rules Jul 9 23:46:34.064822 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 23:46:34.065803 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 23:46:34.069579 sudo[2310]: pam_unix(sudo:session): session closed for user root Jul 9 23:46:34.093168 sshd[2309]: Connection closed by 139.178.89.65 port 55488 Jul 9 23:46:34.094009 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Jul 9 23:46:34.100947 systemd[1]: sshd@5-172.31.27.216:22-139.178.89.65:55488.service: Deactivated successfully. Jul 9 23:46:34.104476 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 23:46:34.106380 systemd-logind[1977]: Session 6 logged out. Waiting for processes to exit. Jul 9 23:46:34.109576 systemd-logind[1977]: Removed session 6. Jul 9 23:46:34.131812 systemd[1]: Started sshd@6-172.31.27.216:22-139.178.89.65:55490.service - OpenSSH per-connection server daemon (139.178.89.65:55490). Jul 9 23:46:34.335534 sshd[2342]: Accepted publickey for core from 139.178.89.65 port 55490 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:46:34.337984 sshd-session[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:46:34.346937 systemd-logind[1977]: New session 7 of user core. Jul 9 23:46:34.366810 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 23:46:34.472389 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 23:46:34.473146 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 23:46:35.555751 systemd[1]: Starting docker.service - Docker Application Container Engine... 
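augenrules reported "No rules" above because the default rules.d files were removed just before the reload. As a purely hypothetical example (path and key are illustrative), a watch rule could be added back like this:
sudo tee /etc/audit/rules.d/10-example.rules >/dev/null <<'EOF'
-w /etc/kubernetes/ -p wa -k kube-config
EOF
sudo augenrules --load     # regenerate /etc/audit/audit.rules and load it
sudo auditctl -l           # list the rules now active in the kernel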
Jul 9 23:46:35.572121 (dockerd)[2362]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 23:46:36.151167 dockerd[2362]: time="2025-07-09T23:46:36.151070427Z" level=info msg="Starting up" Jul 9 23:46:36.153536 dockerd[2362]: time="2025-07-09T23:46:36.153430439Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 9 23:46:36.327182 dockerd[2362]: time="2025-07-09T23:46:36.326976087Z" level=info msg="Loading containers: start." Jul 9 23:46:36.351536 kernel: Initializing XFRM netlink socket Jul 9 23:46:36.703951 (udev-worker)[2384]: Network interface NamePolicy= disabled on kernel command line. Jul 9 23:46:36.781764 systemd-networkd[1888]: docker0: Link UP Jul 9 23:46:36.786952 dockerd[2362]: time="2025-07-09T23:46:36.786793113Z" level=info msg="Loading containers: done." Jul 9 23:46:36.812950 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1964277822-merged.mount: Deactivated successfully. Jul 9 23:46:36.821534 dockerd[2362]: time="2025-07-09T23:46:36.821428570Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 23:46:36.821747 dockerd[2362]: time="2025-07-09T23:46:36.821588630Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 9 23:46:36.821805 dockerd[2362]: time="2025-07-09T23:46:36.821783197Z" level=info msg="Initializing buildkit" Jul 9 23:46:36.863715 dockerd[2362]: time="2025-07-09T23:46:36.863609358Z" level=info msg="Completed buildkit initialization" Jul 9 23:46:36.876540 dockerd[2362]: time="2025-07-09T23:46:36.876104587Z" level=info msg="Daemon has completed initialization" Jul 9 23:46:36.876540 dockerd[2362]: time="2025-07-09T23:46:36.876205157Z" level=info msg="API listen on /run/docker.sock" Jul 9 23:46:36.877100 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 9 23:46:38.040403 containerd[1986]: time="2025-07-09T23:46:38.040312911Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 9 23:46:38.621062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276259453.mount: Deactivated successfully. 
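Illustrative checks matching the dockerd startup details logged above (version, storage driver, API socket); the exact outputs are assumptions:
sudo docker version --format '{{.Server.Version}}'    # 28.0.1 per the daemon log
sudo docker info --format '{{.Driver}}'               # overlay2 storage driver
ls -l /run/docker.sock                                # the socket from "API listen on /run/docker.sock"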
Jul 9 23:46:39.913577 containerd[1986]: time="2025-07-09T23:46:39.913467435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.915535 containerd[1986]: time="2025-07-09T23:46:39.915463332Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 9 23:46:39.916614 containerd[1986]: time="2025-07-09T23:46:39.916559800Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.924389 containerd[1986]: time="2025-07-09T23:46:39.924318695Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.883932081s" Jul 9 23:46:39.924389 containerd[1986]: time="2025-07-09T23:46:39.924384829Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 9 23:46:39.924602 containerd[1986]: time="2025-07-09T23:46:39.924520853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:39.928583 containerd[1986]: time="2025-07-09T23:46:39.928521332Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 9 23:46:41.250819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 9 23:46:41.257031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 9 23:46:41.450602 containerd[1986]: time="2025-07-09T23:46:41.450202540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:41.451541 containerd[1986]: time="2025-07-09T23:46:41.451291235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 9 23:46:41.460548 containerd[1986]: time="2025-07-09T23:46:41.458918004Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:41.467122 containerd[1986]: time="2025-07-09T23:46:41.467056688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:41.469361 containerd[1986]: time="2025-07-09T23:46:41.469270625Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.540683181s" Jul 9 23:46:41.469361 containerd[1986]: time="2025-07-09T23:46:41.469344964Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 9 23:46:41.470272 containerd[1986]: time="2025-07-09T23:46:41.470202583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 9 23:46:41.672444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:41.687260 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:41.760946 kubelet[2633]: E0709 23:46:41.760874 2633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:41.768647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:41.768956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:41.769912 systemd[1]: kubelet.service: Consumed 354ms CPU time, 107.4M memory peak. 
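systemd keeps restarting the kubelet on a timer while the control-plane images are still being pulled; a hypothetical look at that restart bookkeeping:
systemctl show kubelet.service -p Restart,RestartUSec,NRestarts
systemctl status kubelet.service --no-pager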
Jul 9 23:46:42.687658 containerd[1986]: time="2025-07-09T23:46:42.687585910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.689556 containerd[1986]: time="2025-07-09T23:46:42.689459085Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 9 23:46:42.690632 containerd[1986]: time="2025-07-09T23:46:42.690462431Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.702046 containerd[1986]: time="2025-07-09T23:46:42.701901637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:42.705689 containerd[1986]: time="2025-07-09T23:46:42.704628676Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.234106238s" Jul 9 23:46:42.705689 containerd[1986]: time="2025-07-09T23:46:42.704699417Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 9 23:46:42.706210 containerd[1986]: time="2025-07-09T23:46:42.706153582Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 9 23:46:44.005706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244326888.mount: Deactivated successfully. 
Jul 9 23:46:44.535102 containerd[1986]: time="2025-07-09T23:46:44.535009586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.536756 containerd[1986]: time="2025-07-09T23:46:44.536674569Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 9 23:46:44.538099 containerd[1986]: time="2025-07-09T23:46:44.538005039Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.541584 containerd[1986]: time="2025-07-09T23:46:44.541459467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:44.543401 containerd[1986]: time="2025-07-09T23:46:44.542925350Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.836541675s" Jul 9 23:46:44.543401 containerd[1986]: time="2025-07-09T23:46:44.542995946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 9 23:46:44.544538 containerd[1986]: time="2025-07-09T23:46:44.544143568Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 23:46:45.048030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82136834.mount: Deactivated successfully. 
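A hypothetical way to list the images containerd has pulled so far; crictl needs its endpoint pointed at the containerd socket shown earlier in the log:
sudo ctr -n k8s.io images ls -q
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images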
Jul 9 23:46:46.229694 containerd[1986]: time="2025-07-09T23:46:46.229629893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:46.232318 containerd[1986]: time="2025-07-09T23:46:46.231806552Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 9 23:46:46.234587 containerd[1986]: time="2025-07-09T23:46:46.234482845Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:46.243859 containerd[1986]: time="2025-07-09T23:46:46.243798525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:46.246399 containerd[1986]: time="2025-07-09T23:46:46.246183136Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.701981445s" Jul 9 23:46:46.246399 containerd[1986]: time="2025-07-09T23:46:46.246251298Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 9 23:46:46.247676 containerd[1986]: time="2025-07-09T23:46:46.247272455Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 23:46:46.699939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205614271.mount: Deactivated successfully. 
Jul 9 23:46:46.707667 containerd[1986]: time="2025-07-09T23:46:46.707614254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:46.709066 containerd[1986]: time="2025-07-09T23:46:46.709024137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 9 23:46:46.709740 containerd[1986]: time="2025-07-09T23:46:46.709702254Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:46.713079 containerd[1986]: time="2025-07-09T23:46:46.713029677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 23:46:46.714563 containerd[1986]: time="2025-07-09T23:46:46.714488400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 467.158481ms" Jul 9 23:46:46.714679 containerd[1986]: time="2025-07-09T23:46:46.714565245Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 9 23:46:46.715349 containerd[1986]: time="2025-07-09T23:46:46.715289203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 9 23:46:47.207100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100151824.mount: Deactivated successfully. 
Jul 9 23:46:49.236646 containerd[1986]: time="2025-07-09T23:46:49.236563854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:49.239359 containerd[1986]: time="2025-07-09T23:46:49.239280220Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 9 23:46:49.241962 containerd[1986]: time="2025-07-09T23:46:49.241835313Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:49.250927 containerd[1986]: time="2025-07-09T23:46:49.250813542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:46:49.253396 containerd[1986]: time="2025-07-09T23:46:49.253141086Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.537791421s" Jul 9 23:46:49.253396 containerd[1986]: time="2025-07-09T23:46:49.253213637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 9 23:46:52.019685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 9 23:46:52.025868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:52.388737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:52.401122 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 23:46:52.480529 kubelet[2788]: E0709 23:46:52.480447 2788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 23:46:52.485906 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 23:46:52.486428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 23:46:52.487462 systemd[1]: kubelet.service: Consumed 300ms CPU time, 105.6M memory peak. Jul 9 23:46:55.935937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:55.936557 systemd[1]: kubelet.service: Consumed 300ms CPU time, 105.6M memory peak. Jul 9 23:46:55.940316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:55.989775 systemd[1]: Reload requested from client PID 2803 ('systemctl') (unit session-7.scope)... Jul 9 23:46:55.989807 systemd[1]: Reloading... Jul 9 23:46:56.249562 zram_generator::config[2851]: No configuration found. Jul 9 23:46:56.457395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:46:56.719244 systemd[1]: Reloading finished in 728 ms. 
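The reload above warns that docker.socket still references the legacy /var/run/docker.sock path; a sketch of how to see, and via a unit edit silence, that warning:
systemctl cat docker.socket | grep -n ListenStream
# updating the unit (or a drop-in) to ListenStream=/run/docker.sock removes the warning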
Jul 9 23:46:56.810397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 9 23:46:56.810605 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 9 23:46:56.811098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:56.811210 systemd[1]: kubelet.service: Consumed 223ms CPU time, 94.8M memory peak. Jul 9 23:46:56.814240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:46:57.147410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:46:57.164024 (kubelet)[2911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:46:57.237629 kubelet[2911]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:57.237629 kubelet[2911]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:46:57.237629 kubelet[2911]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:46:57.238304 kubelet[2911]: I0709 23:46:57.238223 2911 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:46:58.271579 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 9 23:46:58.729732 kubelet[2911]: I0709 23:46:58.729279 2911 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:46:58.729732 kubelet[2911]: I0709 23:46:58.729331 2911 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:46:58.730670 kubelet[2911]: I0709 23:46:58.729801 2911 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:46:58.793962 kubelet[2911]: E0709 23:46:58.793588 2911 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.216:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:58.803286 kubelet[2911]: I0709 23:46:58.803159 2911 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:46:58.819629 kubelet[2911]: I0709 23:46:58.819586 2911 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:46:58.827376 kubelet[2911]: I0709 23:46:58.827263 2911 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 23:46:58.828062 kubelet[2911]: I0709 23:46:58.828016 2911 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:46:58.828443 kubelet[2911]: I0709 23:46:58.828380 2911 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:46:58.828759 kubelet[2911]: I0709 23:46:58.828433 2911 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:46:58.828936 kubelet[2911]: I0709 23:46:58.828888 2911 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:46:58.828936 kubelet[2911]: I0709 23:46:58.828909 2911 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 23:46:58.829265 kubelet[2911]: I0709 23:46:58.829220 2911 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:58.834282 kubelet[2911]: I0709 23:46:58.834007 2911 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:46:58.834282 kubelet[2911]: I0709 23:46:58.834057 2911 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:46:58.834282 kubelet[2911]: I0709 23:46:58.834092 2911 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:46:58.834282 kubelet[2911]: I0709 23:46:58.834253 2911 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:46:58.838016 kubelet[2911]: W0709 23:46:58.837939 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-216&limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:58.838577 kubelet[2911]: E0709 23:46:58.838230 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.27.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-216&limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:58.842106 kubelet[2911]: W0709 23:46:58.842021 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:58.842279 kubelet[2911]: E0709 23:46:58.842117 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:58.842572 kubelet[2911]: I0709 23:46:58.842532 2911 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:46:58.843734 kubelet[2911]: I0709 23:46:58.843693 2911 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:46:58.843947 kubelet[2911]: W0709 23:46:58.843914 2911 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 9 23:46:58.847878 kubelet[2911]: I0709 23:46:58.847834 2911 server.go:1274] "Started kubelet" Jul 9 23:46:58.864206 kubelet[2911]: I0709 23:46:58.863219 2911 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:46:58.865973 kubelet[2911]: E0709 23:46:58.863847 2911 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.216:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.216:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-216.1850ba0cb8ac4cab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-216,UID:ip-172-31-27-216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-216,},FirstTimestamp:2025-07-09 23:46:58.847796395 +0000 UTC m=+1.677959323,LastTimestamp:2025-07-09 23:46:58.847796395 +0000 UTC m=+1.677959323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-216,}" Jul 9 23:46:58.868435 kubelet[2911]: I0709 23:46:58.868279 2911 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:46:58.871810 kubelet[2911]: I0709 23:46:58.871762 2911 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:46:58.872312 kubelet[2911]: E0709 23:46:58.872262 2911 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-216\" not found" Jul 9 23:46:58.873192 kubelet[2911]: I0709 23:46:58.873133 2911 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:46:58.873324 kubelet[2911]: I0709 23:46:58.873239 2911 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:46:58.875621 kubelet[2911]: W0709 23:46:58.875482 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.27.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:58.875777 kubelet[2911]: E0709 23:46:58.875634 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:58.875777 kubelet[2911]: E0709 23:46:58.875756 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-216?timeout=10s\": dial tcp 172.31.27.216:6443: connect: connection refused" interval="200ms" Jul 9 23:46:58.876285 kubelet[2911]: I0709 23:46:58.876231 2911 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:46:58.876447 kubelet[2911]: I0709 23:46:58.876404 2911 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:46:58.879406 kubelet[2911]: I0709 23:46:58.879347 2911 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:46:58.881470 kubelet[2911]: I0709 23:46:58.881412 2911 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:46:58.884576 kubelet[2911]: I0709 23:46:58.883518 2911 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:46:58.885439 kubelet[2911]: I0709 23:46:58.885364 2911 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:46:58.885926 kubelet[2911]: I0709 23:46:58.885899 2911 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:46:58.904791 kubelet[2911]: I0709 23:46:58.904712 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:46:58.908710 kubelet[2911]: I0709 23:46:58.908656 2911 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 23:46:58.908850 kubelet[2911]: I0709 23:46:58.908732 2911 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:46:58.908850 kubelet[2911]: I0709 23:46:58.908768 2911 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:46:58.908947 kubelet[2911]: E0709 23:46:58.908867 2911 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:46:58.918447 kubelet[2911]: E0709 23:46:58.918362 2911 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:46:58.919118 kubelet[2911]: W0709 23:46:58.919036 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:58.919250 kubelet[2911]: E0709 23:46:58.919131 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.216:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:58.928223 kubelet[2911]: I0709 23:46:58.928191 2911 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:46:58.928477 kubelet[2911]: I0709 23:46:58.928430 2911 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:46:58.928792 kubelet[2911]: I0709 23:46:58.928708 2911 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:46:58.932650 kubelet[2911]: I0709 23:46:58.932616 2911 policy_none.go:49] "None policy: Start" Jul 9 23:46:58.934850 kubelet[2911]: I0709 23:46:58.934770 2911 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:46:58.935111 kubelet[2911]: I0709 23:46:58.935078 2911 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:46:58.947166 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 23:46:58.964138 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 23:46:58.973408 kubelet[2911]: E0709 23:46:58.972334 2911 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-216\" not found" Jul 9 23:46:58.972961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 23:46:58.985666 kubelet[2911]: I0709 23:46:58.984010 2911 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:46:58.986156 kubelet[2911]: I0709 23:46:58.986124 2911 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:46:58.986942 kubelet[2911]: I0709 23:46:58.986484 2911 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:46:58.989981 kubelet[2911]: I0709 23:46:58.989903 2911 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:46:58.993572 kubelet[2911]: E0709 23:46:58.993522 2911 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-216\" not found" Jul 9 23:46:59.032059 systemd[1]: Created slice kubepods-burstable-podadef679db5de8a7bf4348b43c0336f60.slice - libcontainer container kubepods-burstable-podadef679db5de8a7bf4348b43c0336f60.slice. Jul 9 23:46:59.071673 systemd[1]: Created slice kubepods-burstable-podb5ddb53b8fb0ea437e07dab8649fdfd5.slice - libcontainer container kubepods-burstable-podb5ddb53b8fb0ea437e07dab8649fdfd5.slice. 
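The NodeConfig dump a few entries back carries the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%), which the eviction manager starting here will enforce once stats are available. A small sketch of how those mixed quantity/percentage signals resolve into absolute byte cut-offs; the sample capacities are assumptions for illustration, not values from this host:

```go
package main

import "fmt"

// threshold mirrors a HardEvictionThresholds entry from the NodeConfig log line:
// eviction triggers below either a fixed quantity of bytes or a percentage of capacity.
type threshold struct {
    quantity int64   // absolute bytes; 0 if the percentage form is used
    percent  float64 // fraction of capacity; 0 if the quantity form is used
}

func cutoff(t threshold, capacity int64) int64 {
    if t.quantity > 0 {
        return t.quantity
    }
    return int64(float64(capacity) * t.percent)
}

func main() {
    const gi = int64(1) << 30
    // Assumed capacities, for illustration only.
    memCapacity, rootfsCapacity := 16*gi, 100*gi

    fmt.Println("memory.available  evicts below", cutoff(threshold{quantity: 100 << 20}, memCapacity), "bytes") // 100Mi
    fmt.Println("nodefs.available  evicts below", cutoff(threshold{percent: 0.10}, rootfsCapacity), "bytes")    // 10%
    fmt.Println("imagefs.available evicts below", cutoff(threshold{percent: 0.15}, rootfsCapacity), "bytes")    // 15%
}
```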
Jul 9 23:46:59.074258 kubelet[2911]: I0709 23:46:59.073736 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adef679db5de8a7bf4348b43c0336f60-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-216\" (UID: \"adef679db5de8a7bf4348b43c0336f60\") " pod="kube-system/kube-scheduler-ip-172-31-27-216" Jul 9 23:46:59.075918 kubelet[2911]: I0709 23:46:59.075860 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:46:59.076036 kubelet[2911]: I0709 23:46:59.075948 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:46:59.076036 kubelet[2911]: I0709 23:46:59.075992 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:46:59.076194 kubelet[2911]: I0709 23:46:59.076058 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:46:59.076194 kubelet[2911]: I0709 23:46:59.076098 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:46:59.076194 kubelet[2911]: I0709 23:46:59.076170 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-ca-certs\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:46:59.076349 kubelet[2911]: I0709 23:46:59.076243 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:46:59.076349 kubelet[2911]: I0709 23:46:59.076318 2911 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:46:59.077750 kubelet[2911]: E0709 23:46:59.077663 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-216?timeout=10s\": dial tcp 172.31.27.216:6443: connect: connection refused" interval="400ms" Jul 9 23:46:59.093283 kubelet[2911]: I0709 23:46:59.093225 2911 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-216" Jul 9 23:46:59.095134 systemd[1]: Created slice kubepods-burstable-podb48f4020f7a781c71fea03f4fae7645e.slice - libcontainer container kubepods-burstable-podb48f4020f7a781c71fea03f4fae7645e.slice. Jul 9 23:46:59.096781 kubelet[2911]: E0709 23:46:59.095604 2911 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.216:6443/api/v1/nodes\": dial tcp 172.31.27.216:6443: connect: connection refused" node="ip-172-31-27-216" Jul 9 23:46:59.300876 kubelet[2911]: I0709 23:46:59.300731 2911 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-216" Jul 9 23:46:59.301634 kubelet[2911]: E0709 23:46:59.301344 2911 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.216:6443/api/v1/nodes\": dial tcp 172.31.27.216:6443: connect: connection refused" node="ip-172-31-27-216" Jul 9 23:46:59.364796 containerd[1986]: time="2025-07-09T23:46:59.364711601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-216,Uid:adef679db5de8a7bf4348b43c0336f60,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:59.382583 containerd[1986]: time="2025-07-09T23:46:59.382459747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-216,Uid:b5ddb53b8fb0ea437e07dab8649fdfd5,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:59.408802 containerd[1986]: time="2025-07-09T23:46:59.408672922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-216,Uid:b48f4020f7a781c71fea03f4fae7645e,Namespace:kube-system,Attempt:0,}" Jul 9 23:46:59.415570 containerd[1986]: time="2025-07-09T23:46:59.415124854Z" level=info msg="connecting to shim 473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae" address="unix:///run/containerd/s/0770a05ed00797342dd83b82e1efc2d487c472a70882cf22b26e55cd40a1ca96" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:59.480150 kubelet[2911]: E0709 23:46:59.480066 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-216?timeout=10s\": dial tcp 172.31.27.216:6443: connect: connection refused" interval="800ms" Jul 9 23:46:59.486108 containerd[1986]: time="2025-07-09T23:46:59.483726061Z" level=info msg="connecting to shim b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd" address="unix:///run/containerd/s/88fdacff5d82d5f6212b050789a284b110acb19794b4ed51845f60ef7510598d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:59.486137 systemd[1]: Started cri-containerd-473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae.scope - libcontainer container 473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae. 
Jul 9 23:46:59.533960 containerd[1986]: time="2025-07-09T23:46:59.533881131Z" level=info msg="connecting to shim 2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27" address="unix:///run/containerd/s/e6d4ed4f52aeb768e32948f465a9d7523be72571f399f076d687d0f9752da6a1" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:46:59.572842 systemd[1]: Started cri-containerd-b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd.scope - libcontainer container b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd. Jul 9 23:46:59.610917 systemd[1]: Started cri-containerd-2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27.scope - libcontainer container 2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27. Jul 9 23:46:59.708940 kubelet[2911]: I0709 23:46:59.708904 2911 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-216" Jul 9 23:46:59.712685 kubelet[2911]: E0709 23:46:59.712595 2911 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.216:6443/api/v1/nodes\": dial tcp 172.31.27.216:6443: connect: connection refused" node="ip-172-31-27-216" Jul 9 23:46:59.731190 containerd[1986]: time="2025-07-09T23:46:59.731123361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-216,Uid:adef679db5de8a7bf4348b43c0336f60,Namespace:kube-system,Attempt:0,} returns sandbox id \"473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae\"" Jul 9 23:46:59.745362 containerd[1986]: time="2025-07-09T23:46:59.745282086Z" level=info msg="CreateContainer within sandbox \"473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 23:46:59.749734 containerd[1986]: time="2025-07-09T23:46:59.749604460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-216,Uid:b5ddb53b8fb0ea437e07dab8649fdfd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd\"" Jul 9 23:46:59.762592 containerd[1986]: time="2025-07-09T23:46:59.761680787Z" level=info msg="CreateContainer within sandbox \"b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 23:46:59.768723 containerd[1986]: time="2025-07-09T23:46:59.768659268Z" level=info msg="Container 9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:59.783959 containerd[1986]: time="2025-07-09T23:46:59.783911462Z" level=info msg="Container 80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:59.791701 containerd[1986]: time="2025-07-09T23:46:59.791473703Z" level=info msg="CreateContainer within sandbox \"473af44ba1ca3e98b9734df56696079d2800096946b32e4474714bcaa3ee24ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841\"" Jul 9 23:46:59.793282 containerd[1986]: time="2025-07-09T23:46:59.793170626Z" level=info msg="StartContainer for \"9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841\"" Jul 9 23:46:59.797139 containerd[1986]: time="2025-07-09T23:46:59.796963441Z" level=info msg="connecting to shim 9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841" 
address="unix:///run/containerd/s/0770a05ed00797342dd83b82e1efc2d487c472a70882cf22b26e55cd40a1ca96" protocol=ttrpc version=3 Jul 9 23:46:59.809619 containerd[1986]: time="2025-07-09T23:46:59.809549944Z" level=info msg="CreateContainer within sandbox \"b4fb400ca83ff0fff10f8423fd48225d8c4a03e9dc44e540a2dcd3f92d7913cd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7\"" Jul 9 23:46:59.812431 containerd[1986]: time="2025-07-09T23:46:59.812036360Z" level=info msg="StartContainer for \"80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7\"" Jul 9 23:46:59.820258 containerd[1986]: time="2025-07-09T23:46:59.820122727Z" level=info msg="connecting to shim 80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7" address="unix:///run/containerd/s/88fdacff5d82d5f6212b050789a284b110acb19794b4ed51845f60ef7510598d" protocol=ttrpc version=3 Jul 9 23:46:59.831551 containerd[1986]: time="2025-07-09T23:46:59.831335354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-216,Uid:b48f4020f7a781c71fea03f4fae7645e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27\"" Jul 9 23:46:59.840852 systemd[1]: Started cri-containerd-9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841.scope - libcontainer container 9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841. Jul 9 23:46:59.843289 kubelet[2911]: W0709 23:46:59.843217 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:59.844711 kubelet[2911]: E0709 23:46:59.843912 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.216:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:59.846888 containerd[1986]: time="2025-07-09T23:46:59.846756556Z" level=info msg="CreateContainer within sandbox \"2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 23:46:59.882244 containerd[1986]: time="2025-07-09T23:46:59.882055737Z" level=info msg="Container 821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:46:59.903768 containerd[1986]: time="2025-07-09T23:46:59.903675726Z" level=info msg="CreateContainer within sandbox \"2e01aa5d8cfaec315ca3be00f36f464f436fe621dfc30a013f6f090963547e27\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3\"" Jul 9 23:46:59.904119 systemd[1]: Started cri-containerd-80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7.scope - libcontainer container 80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7. 
Jul 9 23:46:59.906910 containerd[1986]: time="2025-07-09T23:46:59.906705986Z" level=info msg="StartContainer for \"821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3\"" Jul 9 23:46:59.916710 containerd[1986]: time="2025-07-09T23:46:59.916590781Z" level=info msg="connecting to shim 821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3" address="unix:///run/containerd/s/e6d4ed4f52aeb768e32948f465a9d7523be72571f399f076d687d0f9752da6a1" protocol=ttrpc version=3 Jul 9 23:46:59.970571 kubelet[2911]: W0709 23:46:59.970256 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:46:59.973704 kubelet[2911]: E0709 23:46:59.973581 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.216:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:46:59.994151 systemd[1]: Started cri-containerd-821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3.scope - libcontainer container 821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3. Jul 9 23:47:00.023414 containerd[1986]: time="2025-07-09T23:47:00.023311763Z" level=info msg="StartContainer for \"9f47948821a7d0f624b95e3fb478b8d4c62700a1a03d06ae45a2e71f0db41841\" returns successfully" Jul 9 23:47:00.110228 containerd[1986]: time="2025-07-09T23:47:00.109730104Z" level=info msg="StartContainer for \"80ff3c6681502cc05c2eff89939da3e8313ea6616a856825d30e2002907a9fa7\" returns successfully" Jul 9 23:47:00.180286 kubelet[2911]: W0709 23:47:00.180195 2911 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-216&limit=500&resourceVersion=0": dial tcp 172.31.27.216:6443: connect: connection refused Jul 9 23:47:00.180600 kubelet[2911]: E0709 23:47:00.180564 2911 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.216:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-216&limit=500&resourceVersion=0\": dial tcp 172.31.27.216:6443: connect: connection refused" logger="UnhandledError" Jul 9 23:47:00.201254 containerd[1986]: time="2025-07-09T23:47:00.201190795Z" level=info msg="StartContainer for \"821718b5ad02b50bc55bf34f912d9a66a7e5feb79996aa83911e3d3d322e99d3\" returns successfully" Jul 9 23:47:00.281118 kubelet[2911]: E0709 23:47:00.281031 2911 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.216:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-216?timeout=10s\": dial tcp 172.31.27.216:6443: connect: connection refused" interval="1.6s" Jul 9 23:47:00.518201 kubelet[2911]: I0709 23:47:00.517704 2911 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-216" Jul 9 23:47:03.841987 kubelet[2911]: I0709 23:47:03.841641 2911 apiserver.go:52] "Watching apiserver" Jul 9 23:47:03.974433 kubelet[2911]: I0709 23:47:03.974352 2911 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:47:03.995360 kubelet[2911]: E0709 23:47:03.995291 2911 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-216\" not found" node="ip-172-31-27-216" Jul 9 23:47:04.014613 kubelet[2911]: I0709 23:47:04.014552 2911 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-216" Jul 9 23:47:04.014771 kubelet[2911]: E0709 23:47:04.014636 2911 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-27-216\": node \"ip-172-31-27-216\" not found" Jul 9 23:47:04.091787 kubelet[2911]: E0709 23:47:04.091613 2911 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-216.1850ba0cb8ac4cab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-216,UID:ip-172-31-27-216,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-216,},FirstTimestamp:2025-07-09 23:46:58.847796395 +0000 UTC m=+1.677959323,LastTimestamp:2025-07-09 23:46:58.847796395 +0000 UTC m=+1.677959323,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-216,}" Jul 9 23:47:05.653953 systemd[1]: Reload requested from client PID 3183 ('systemctl') (unit session-7.scope)... Jul 9 23:47:05.653988 systemd[1]: Reloading... Jul 9 23:47:05.986594 zram_generator::config[3227]: No configuration found. Jul 9 23:47:06.220325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 23:47:06.545230 systemd[1]: Reloading finished in 890 ms. Jul 9 23:47:06.599985 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:47:06.615316 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 23:47:06.615845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:47:06.615938 systemd[1]: kubelet.service: Consumed 2.441s CPU time, 128.9M memory peak. Jul 9 23:47:06.620428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 23:47:07.007635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 23:47:07.025294 (kubelet)[3288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 23:47:07.131620 kubelet[3288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 23:47:07.131620 kubelet[3288]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 9 23:47:07.131620 kubelet[3288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 23:47:07.132147 kubelet[3288]: I0709 23:47:07.131716 3288 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 23:47:07.147086 kubelet[3288]: I0709 23:47:07.147005 3288 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 9 23:47:07.147086 kubelet[3288]: I0709 23:47:07.147065 3288 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 23:47:07.147623 kubelet[3288]: I0709 23:47:07.147568 3288 server.go:934] "Client rotation is on, will bootstrap in background" Jul 9 23:47:07.163175 kubelet[3288]: I0709 23:47:07.161591 3288 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 23:47:07.172551 kubelet[3288]: I0709 23:47:07.171897 3288 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 23:47:07.186402 kubelet[3288]: I0709 23:47:07.186369 3288 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 23:47:07.198342 kubelet[3288]: I0709 23:47:07.198303 3288 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 23:47:07.198755 kubelet[3288]: I0709 23:47:07.198730 3288 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 9 23:47:07.199251 kubelet[3288]: I0709 23:47:07.199183 3288 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 23:47:07.201546 kubelet[3288]: I0709 23:47:07.199431 3288 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-216","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 23:47:07.201917 kubelet[3288]: I0709 23:47:07.201884 3288 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 23:47:07.202073 kubelet[3288]: I0709 23:47:07.202052 3288 container_manager_linux.go:300] "Creating device plugin manager" Jul 9 
23:47:07.202243 kubelet[3288]: I0709 23:47:07.202223 3288 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:47:07.202611 kubelet[3288]: I0709 23:47:07.202573 3288 kubelet.go:408] "Attempting to sync node with API server" Jul 9 23:47:07.203483 kubelet[3288]: I0709 23:47:07.203447 3288 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 23:47:07.206020 kubelet[3288]: I0709 23:47:07.203680 3288 kubelet.go:314] "Adding apiserver pod source" Jul 9 23:47:07.206020 kubelet[3288]: I0709 23:47:07.203733 3288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 23:47:07.206471 kubelet[3288]: I0709 23:47:07.206436 3288 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 9 23:47:07.207442 kubelet[3288]: I0709 23:47:07.207394 3288 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 23:47:07.211545 kubelet[3288]: I0709 23:47:07.210313 3288 server.go:1274] "Started kubelet" Jul 9 23:47:07.223405 sudo[3303]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:47:07.225191 sudo[3303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:47:07.228814 kubelet[3288]: I0709 23:47:07.228212 3288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 23:47:07.233430 kubelet[3288]: I0709 23:47:07.233112 3288 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:47:07.241559 kubelet[3288]: I0709 23:47:07.239732 3288 server.go:449] "Adding debug handlers to kubelet server" Jul 9 23:47:07.252643 kubelet[3288]: I0709 23:47:07.251114 3288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:47:07.252643 kubelet[3288]: I0709 23:47:07.251752 3288 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:47:07.256301 kubelet[3288]: I0709 23:47:07.254469 3288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 23:47:07.331933 kubelet[3288]: I0709 23:47:07.330362 3288 factory.go:221] Registration of the systemd container factory successfully Jul 9 23:47:07.332129 kubelet[3288]: I0709 23:47:07.332078 3288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:47:07.335528 kubelet[3288]: I0709 23:47:07.259217 3288 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 9 23:47:07.342226 kubelet[3288]: I0709 23:47:07.259237 3288 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 9 23:47:07.342928 kubelet[3288]: E0709 23:47:07.259516 3288 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-216\" not found" Jul 9 23:47:07.346274 kubelet[3288]: I0709 23:47:07.346239 3288 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:47:07.357612 kubelet[3288]: I0709 23:47:07.357320 3288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 23:47:07.361280 kubelet[3288]: I0709 23:47:07.361021 3288 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 23:47:07.361807 kubelet[3288]: I0709 23:47:07.361773 3288 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 9 23:47:07.363590 kubelet[3288]: I0709 23:47:07.362026 3288 kubelet.go:2321] "Starting kubelet main sync loop" Jul 9 23:47:07.363948 kubelet[3288]: E0709 23:47:07.363909 3288 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:47:07.372790 kubelet[3288]: I0709 23:47:07.372735 3288 factory.go:221] Registration of the containerd container factory successfully Jul 9 23:47:07.394474 kubelet[3288]: E0709 23:47:07.394266 3288 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 23:47:07.464183 kubelet[3288]: E0709 23:47:07.464120 3288 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.532891 3288 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.532925 3288 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.532959 3288 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.533246 3288 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.533274 3288 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:47:07.534524 kubelet[3288]: I0709 23:47:07.533309 3288 policy_none.go:49] "None policy: Start" Jul 9 23:47:07.535818 kubelet[3288]: I0709 23:47:07.535775 3288 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 9 23:47:07.535946 kubelet[3288]: I0709 23:47:07.535829 3288 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:47:07.536125 kubelet[3288]: I0709 23:47:07.536089 3288 state_mem.go:75] "Updated machine memory state" Jul 9 23:47:07.551937 kubelet[3288]: I0709 23:47:07.551900 3288 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 23:47:07.553455 kubelet[3288]: I0709 23:47:07.552813 3288 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:47:07.554926 kubelet[3288]: I0709 23:47:07.554358 3288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:47:07.557442 kubelet[3288]: I0709 23:47:07.556588 3288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 23:47:07.682344 kubelet[3288]: I0709 23:47:07.682266 3288 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-216" Jul 9 23:47:07.683758 kubelet[3288]: E0709 23:47:07.683724 3288 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-216\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:47:07.703183 kubelet[3288]: I0709 23:47:07.703141 3288 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-27-216" Jul 9 23:47:07.703670 kubelet[3288]: I0709 23:47:07.703572 3288 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-216" Jul 9 23:47:07.749072 kubelet[3288]: I0709 23:47:07.749002 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-ca-certs\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:47:07.749540 kubelet[3288]: I0709 23:47:07.749312 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:47:07.751558 kubelet[3288]: I0709 23:47:07.749764 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:47:07.751558 kubelet[3288]: I0709 23:47:07.750097 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/adef679db5de8a7bf4348b43c0336f60-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-216\" (UID: \"adef679db5de8a7bf4348b43c0336f60\") " pod="kube-system/kube-scheduler-ip-172-31-27-216" Jul 9 23:47:07.751558 kubelet[3288]: I0709 23:47:07.750368 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:47:07.751558 kubelet[3288]: I0709 23:47:07.750539 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5ddb53b8fb0ea437e07dab8649fdfd5-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-216\" (UID: \"b5ddb53b8fb0ea437e07dab8649fdfd5\") " pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:47:07.751558 kubelet[3288]: I0709 23:47:07.750583 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:47:07.751986 kubelet[3288]: I0709 23:47:07.750841 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:47:07.751986 kubelet[3288]: I0709 23:47:07.751000 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b48f4020f7a781c71fea03f4fae7645e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-216\" (UID: \"b48f4020f7a781c71fea03f4fae7645e\") " pod="kube-system/kube-controller-manager-ip-172-31-27-216" Jul 9 23:47:08.206034 
kubelet[3288]: I0709 23:47:08.205231 3288 apiserver.go:52] "Watching apiserver" Jul 9 23:47:08.243372 kubelet[3288]: I0709 23:47:08.243192 3288 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 9 23:47:08.253015 sudo[3303]: pam_unix(sudo:session): session closed for user root Jul 9 23:47:08.502040 kubelet[3288]: E0709 23:47:08.501364 3288 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-216\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-216" Jul 9 23:47:08.528694 kubelet[3288]: I0709 23:47:08.528579 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-216" podStartSLOduration=1.528558501 podStartE2EDuration="1.528558501s" podCreationTimestamp="2025-07-09 23:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:08.528211946 +0000 UTC m=+1.495526864" watchObservedRunningTime="2025-07-09 23:47:08.528558501 +0000 UTC m=+1.495873395" Jul 9 23:47:08.555042 kubelet[3288]: I0709 23:47:08.554767 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-216" podStartSLOduration=4.554741319 podStartE2EDuration="4.554741319s" podCreationTimestamp="2025-07-09 23:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:08.554223598 +0000 UTC m=+1.521538516" watchObservedRunningTime="2025-07-09 23:47:08.554741319 +0000 UTC m=+1.522056237" Jul 9 23:47:08.571978 kubelet[3288]: I0709 23:47:08.571871 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-216" podStartSLOduration=1.571850352 podStartE2EDuration="1.571850352s" podCreationTimestamp="2025-07-09 23:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:08.571696541 +0000 UTC m=+1.539011483" watchObservedRunningTime="2025-07-09 23:47:08.571850352 +0000 UTC m=+1.539165258" Jul 9 23:47:10.826404 sudo[2345]: pam_unix(sudo:session): session closed for user root Jul 9 23:47:10.850104 sshd[2344]: Connection closed by 139.178.89.65 port 55490 Jul 9 23:47:10.851685 sshd-session[2342]: pam_unix(sshd:session): session closed for user core Jul 9 23:47:10.862942 systemd[1]: sshd@6-172.31.27.216:22-139.178.89.65:55490.service: Deactivated successfully. Jul 9 23:47:10.869870 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:47:10.871272 systemd[1]: session-7.scope: Consumed 10.340s CPU time, 271.1M memory peak. Jul 9 23:47:10.876105 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:47:10.881418 systemd-logind[1977]: Removed session 7. Jul 9 23:47:11.851112 update_engine[1979]: I20250709 23:47:11.850953 1979 update_attempter.cc:509] Updating boot flags... Jul 9 23:47:12.424282 kubelet[3288]: I0709 23:47:12.424213 3288 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:47:12.427084 containerd[1986]: time="2025-07-09T23:47:12.426956926Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
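The pod_startup_latency_tracker entries above report podStartSLOduration for the static control-plane pods; for kube-scheduler the logged 1.528558501s is exactly watchObservedRunningTime (23:47:08.528558501) minus podCreationTimestamp (23:47:07), with the image-pull timestamps left at the zero time because nothing was pulled. A stdlib check of that arithmetic using the timestamps copied from the log entry:

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    // Timestamps copied from the kube-scheduler startup-latency entry above.
    created, err := time.Parse(layout, "2025-07-09 23:47:07 +0000 UTC")
    if err != nil {
        panic(err)
    }
    observed, err := time.Parse(layout, "2025-07-09 23:47:08.528558501 +0000 UTC")
    if err != nil {
        panic(err)
    }

    // Prints 1.528558501s, matching podStartSLOduration in the log.
    fmt.Println("podStartSLOduration:", observed.Sub(created))
}
```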
Jul 9 23:47:12.428289 kubelet[3288]: I0709 23:47:12.428228 3288 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:47:12.858645 systemd[1]: Created slice kubepods-besteffort-pod9a0ed309_2256_49d0_89e9_89dc8f91196a.slice - libcontainer container kubepods-besteffort-pod9a0ed309_2256_49d0_89e9_89dc8f91196a.slice. Jul 9 23:47:12.866355 kubelet[3288]: W0709 23:47:12.865335 3288 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-216" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-216' and this object Jul 9 23:47:12.866355 kubelet[3288]: E0709 23:47:12.865425 3288 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-27-216\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-27-216' and this object" logger="UnhandledError" Jul 9 23:47:12.881140 kubelet[3288]: I0709 23:47:12.880877 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-proxy\") pod \"kube-proxy-8m6sh\" (UID: \"9a0ed309-2256-49d0-89e9-89dc8f91196a\") " pod="kube-system/kube-proxy-8m6sh" Jul 9 23:47:12.881829 kubelet[3288]: I0709 23:47:12.881695 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a0ed309-2256-49d0-89e9-89dc8f91196a-xtables-lock\") pod \"kube-proxy-8m6sh\" (UID: \"9a0ed309-2256-49d0-89e9-89dc8f91196a\") " pod="kube-system/kube-proxy-8m6sh" Jul 9 23:47:12.881984 kubelet[3288]: I0709 23:47:12.881871 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a0ed309-2256-49d0-89e9-89dc8f91196a-lib-modules\") pod \"kube-proxy-8m6sh\" (UID: \"9a0ed309-2256-49d0-89e9-89dc8f91196a\") " pod="kube-system/kube-proxy-8m6sh" Jul 9 23:47:12.883167 kubelet[3288]: I0709 23:47:12.882970 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw74r\" (UniqueName: \"kubernetes.io/projected/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-api-access-nw74r\") pod \"kube-proxy-8m6sh\" (UID: \"9a0ed309-2256-49d0-89e9-89dc8f91196a\") " pod="kube-system/kube-proxy-8m6sh" Jul 9 23:47:12.920031 systemd[1]: Created slice kubepods-burstable-pod169680eb_0ab6_4f2b_92d3_ed15f994deed.slice - libcontainer container kubepods-burstable-pod169680eb_0ab6_4f2b_92d3_ed15f994deed.slice. 
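The "no relationship found between node 'ip-172-31-27-216' and this object" denial above comes from the Node authorizer: the kubelet may read the kube-proxy ConfigMap only once a pod that mounts it is bound to this node, and the kube-proxy-8m6sh volumes being attached in these entries are what establish that relationship. A hedged sketch of probing that permission with a SelfSubjectAccessReview; the kubeconfig path is an assumption, and reproducing the exact denial would require the kubelet's own node credentials:

```go
package main

import (
    "context"
    "fmt"

    authorizationv1 "k8s.io/api/authorization/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed kubeconfig path for illustration.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Ask the API server whether the current identity may list the kube-proxy
    // ConfigMap, the access the reflector was just denied in the log.
    review := &authorizationv1.SelfSubjectAccessReview{
        Spec: authorizationv1.SelfSubjectAccessReviewSpec{
            ResourceAttributes: &authorizationv1.ResourceAttributes{
                Namespace: "kube-system",
                Verb:      "list",
                Resource:  "configmaps",
                Name:      "kube-proxy",
            },
        },
    }
    resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```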
Jul 9 23:47:12.984190 kubelet[3288]: I0709 23:47:12.983988 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-etc-cni-netd\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984190 kubelet[3288]: I0709 23:47:12.984072 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-lib-modules\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984545 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cni-path\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984643 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-cgroup\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984721 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-xtables-lock\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984758 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-net\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984793 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-kernel\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.984900 kubelet[3288]: I0709 23:47:12.984828 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-config-path\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.985355 kubelet[3288]: I0709 23:47:12.984901 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-hubble-tls\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.985701 kubelet[3288]: I0709 23:47:12.985658 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-run\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.986469 kubelet[3288]: I0709 23:47:12.986142 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-hostproc\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.986469 kubelet[3288]: I0709 23:47:12.986253 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpkn\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.986469 kubelet[3288]: I0709 23:47:12.986301 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-bpf-maps\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.986469 kubelet[3288]: I0709 23:47:12.986339 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169680eb-0ab6-4f2b-92d3-ed15f994deed-clustermesh-secrets\") pod \"cilium-jlcgn\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " pod="kube-system/cilium-jlcgn" Jul 9 23:47:12.995798 kubelet[3288]: E0709 23:47:12.995569 3288 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 23:47:12.995798 kubelet[3288]: E0709 23:47:12.995621 3288 projected.go:194] Error preparing data for projected volume kube-api-access-nw74r for pod kube-system/kube-proxy-8m6sh: configmap "kube-root-ca.crt" not found Jul 9 23:47:12.995798 kubelet[3288]: E0709 23:47:12.995752 3288 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-api-access-nw74r podName:9a0ed309-2256-49d0-89e9-89dc8f91196a nodeName:}" failed. No retries permitted until 2025-07-09 23:47:13.495698099 +0000 UTC m=+6.463012993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nw74r" (UniqueName: "kubernetes.io/projected/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-api-access-nw74r") pod "kube-proxy-8m6sh" (UID: "9a0ed309-2256-49d0-89e9-89dc8f91196a") : configmap "kube-root-ca.crt" not found Jul 9 23:47:13.126328 kubelet[3288]: E0709 23:47:13.126066 3288 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 9 23:47:13.126328 kubelet[3288]: E0709 23:47:13.126120 3288 projected.go:194] Error preparing data for projected volume kube-api-access-vvpkn for pod kube-system/cilium-jlcgn: configmap "kube-root-ca.crt" not found Jul 9 23:47:13.126328 kubelet[3288]: E0709 23:47:13.126223 3288 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn podName:169680eb-0ab6-4f2b-92d3-ed15f994deed nodeName:}" failed. No retries permitted until 2025-07-09 23:47:13.626189161 +0000 UTC m=+6.593504079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vvpkn" (UniqueName: "kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn") pod "cilium-jlcgn" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed") : configmap "kube-root-ca.crt" not found Jul 9 23:47:13.522546 systemd[1]: Created slice kubepods-besteffort-pod2bcd70c9_43d5_4e77_b127_0ea4b49f865c.slice - libcontainer container kubepods-besteffort-pod2bcd70c9_43d5_4e77_b127_0ea4b49f865c.slice. Jul 9 23:47:13.591332 kubelet[3288]: I0709 23:47:13.590546 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-cilium-config-path\") pod \"cilium-operator-5d85765b45-g487r\" (UID: \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\") " pod="kube-system/cilium-operator-5d85765b45-g487r" Jul 9 23:47:13.591332 kubelet[3288]: I0709 23:47:13.590624 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppsg\" (UniqueName: \"kubernetes.io/projected/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-kube-api-access-rppsg\") pod \"cilium-operator-5d85765b45-g487r\" (UID: \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\") " pod="kube-system/cilium-operator-5d85765b45-g487r" Jul 9 23:47:13.832815 containerd[1986]: time="2025-07-09T23:47:13.832661587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlcgn,Uid:169680eb-0ab6-4f2b-92d3-ed15f994deed,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:13.835261 containerd[1986]: time="2025-07-09T23:47:13.835165838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-g487r,Uid:2bcd70c9-43d5-4e77-b127-0ea4b49f865c,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:13.896622 containerd[1986]: time="2025-07-09T23:47:13.896531484Z" level=info msg="connecting to shim 3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:13.899436 containerd[1986]: time="2025-07-09T23:47:13.899309054Z" level=info msg="connecting to shim c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704" address="unix:///run/containerd/s/a98f79c0d09035732382b5b2fd7847c1c71fcc5784442ea7a8d86f26944e8414" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:13.961892 systemd[1]: Started cri-containerd-3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c.scope - libcontainer container 3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c. Jul 9 23:47:13.965832 systemd[1]: Started cri-containerd-c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704.scope - libcontainer container c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704. Jul 9 23:47:13.987596 kubelet[3288]: E0709 23:47:13.986812 3288 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 9 23:47:13.987596 kubelet[3288]: E0709 23:47:13.986959 3288 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-proxy podName:9a0ed309-2256-49d0-89e9-89dc8f91196a nodeName:}" failed. No retries permitted until 2025-07-09 23:47:14.486928451 +0000 UTC m=+7.454243357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9a0ed309-2256-49d0-89e9-89dc8f91196a-kube-proxy") pod "kube-proxy-8m6sh" (UID: "9a0ed309-2256-49d0-89e9-89dc8f91196a") : failed to sync configmap cache: timed out waiting for the condition Jul 9 23:47:14.050990 containerd[1986]: time="2025-07-09T23:47:14.050815212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jlcgn,Uid:169680eb-0ab6-4f2b-92d3-ed15f994deed,Namespace:kube-system,Attempt:0,} returns sandbox id \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\"" Jul 9 23:47:14.056038 containerd[1986]: time="2025-07-09T23:47:14.055469675Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 23:47:14.107735 containerd[1986]: time="2025-07-09T23:47:14.105878874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-g487r,Uid:2bcd70c9-43d5-4e77-b127-0ea4b49f865c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\"" Jul 9 23:47:14.679377 containerd[1986]: time="2025-07-09T23:47:14.679305264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8m6sh,Uid:9a0ed309-2256-49d0-89e9-89dc8f91196a,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:14.732748 containerd[1986]: time="2025-07-09T23:47:14.732639271Z" level=info msg="connecting to shim 016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed" address="unix:///run/containerd/s/61447c2e220cf027cf086280d0dd30b5a76e254cc7560abc0c3ec2cec72f524a" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:14.779867 systemd[1]: Started cri-containerd-016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed.scope - libcontainer container 016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed. Jul 9 23:47:14.842535 containerd[1986]: time="2025-07-09T23:47:14.842343976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8m6sh,Uid:9a0ed309-2256-49d0-89e9-89dc8f91196a,Namespace:kube-system,Attempt:0,} returns sandbox id \"016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed\"" Jul 9 23:47:14.853109 containerd[1986]: time="2025-07-09T23:47:14.852659152Z" level=info msg="CreateContainer within sandbox \"016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:47:14.877806 containerd[1986]: time="2025-07-09T23:47:14.877743140Z" level=info msg="Container d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:14.888196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4246839914.mount: Deactivated successfully. 
Jul 9 23:47:14.906584 containerd[1986]: time="2025-07-09T23:47:14.906451162Z" level=info msg="CreateContainer within sandbox \"016e7d75f573843c4a41b41bae51f0d591b2541d1077937346a8ca41d38b70ed\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9\"" Jul 9 23:47:14.907584 containerd[1986]: time="2025-07-09T23:47:14.907474682Z" level=info msg="StartContainer for \"d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9\"" Jul 9 23:47:14.911063 containerd[1986]: time="2025-07-09T23:47:14.910929674Z" level=info msg="connecting to shim d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9" address="unix:///run/containerd/s/61447c2e220cf027cf086280d0dd30b5a76e254cc7560abc0c3ec2cec72f524a" protocol=ttrpc version=3 Jul 9 23:47:14.949879 systemd[1]: Started cri-containerd-d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9.scope - libcontainer container d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9. Jul 9 23:47:15.051441 containerd[1986]: time="2025-07-09T23:47:15.050821370Z" level=info msg="StartContainer for \"d87884d5a1a00f79836f4925461ac9f8f9e9a976b993fd72af1a02ce46390ed9\" returns successfully" Jul 9 23:47:15.584086 kubelet[3288]: I0709 23:47:15.583978 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8m6sh" podStartSLOduration=3.583953268 podStartE2EDuration="3.583953268s" podCreationTimestamp="2025-07-09 23:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:15.549615849 +0000 UTC m=+8.516930839" watchObservedRunningTime="2025-07-09 23:47:15.583953268 +0000 UTC m=+8.551268174" Jul 9 23:47:19.985190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60278922.mount: Deactivated successfully. 
Jul 9 23:47:22.918330 containerd[1986]: time="2025-07-09T23:47:22.918261825Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:22.920620 containerd[1986]: time="2025-07-09T23:47:22.920562322Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 9 23:47:22.922930 containerd[1986]: time="2025-07-09T23:47:22.922824115Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:22.926535 containerd[1986]: time="2025-07-09T23:47:22.926314920Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.87025887s" Jul 9 23:47:22.926535 containerd[1986]: time="2025-07-09T23:47:22.926395748Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 9 23:47:22.930837 containerd[1986]: time="2025-07-09T23:47:22.929353612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 23:47:22.938218 containerd[1986]: time="2025-07-09T23:47:22.936308381Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:47:22.965900 containerd[1986]: time="2025-07-09T23:47:22.965822005Z" level=info msg="Container 5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:22.980544 containerd[1986]: time="2025-07-09T23:47:22.980443471Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\"" Jul 9 23:47:22.981394 containerd[1986]: time="2025-07-09T23:47:22.981336604Z" level=info msg="StartContainer for \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\"" Jul 9 23:47:22.987196 containerd[1986]: time="2025-07-09T23:47:22.986959703Z" level=info msg="connecting to shim 5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" protocol=ttrpc version=3 Jul 9 23:47:23.032840 systemd[1]: Started cri-containerd-5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2.scope - libcontainer container 5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2. 
Jul 9 23:47:23.101307 containerd[1986]: time="2025-07-09T23:47:23.101202601Z" level=info msg="StartContainer for \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" returns successfully" Jul 9 23:47:23.133144 systemd[1]: cri-containerd-5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2.scope: Deactivated successfully. Jul 9 23:47:23.140144 containerd[1986]: time="2025-07-09T23:47:23.139976337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" id:\"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" pid:3808 exited_at:{seconds:1752104843 nanos:138846598}" Jul 9 23:47:23.140144 containerd[1986]: time="2025-07-09T23:47:23.140072985Z" level=info msg="received exit event container_id:\"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" id:\"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" pid:3808 exited_at:{seconds:1752104843 nanos:138846598}" Jul 9 23:47:23.191286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2-rootfs.mount: Deactivated successfully. Jul 9 23:47:24.570398 containerd[1986]: time="2025-07-09T23:47:24.570262460Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:47:24.597912 containerd[1986]: time="2025-07-09T23:47:24.595962868Z" level=info msg="Container 56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:24.615538 containerd[1986]: time="2025-07-09T23:47:24.615440932Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\"" Jul 9 23:47:24.617032 containerd[1986]: time="2025-07-09T23:47:24.616963582Z" level=info msg="StartContainer for \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\"" Jul 9 23:47:24.620231 containerd[1986]: time="2025-07-09T23:47:24.620135863Z" level=info msg="connecting to shim 56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" protocol=ttrpc version=3 Jul 9 23:47:24.667851 systemd[1]: Started cri-containerd-56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc.scope - libcontainer container 56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc. Jul 9 23:47:24.735649 containerd[1986]: time="2025-07-09T23:47:24.735579913Z" level=info msg="StartContainer for \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" returns successfully" Jul 9 23:47:24.762452 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:47:24.763751 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:47:24.764490 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:47:24.768656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:47:24.776820 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 9 23:47:24.786029 systemd[1]: cri-containerd-56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc.scope: Deactivated successfully. Jul 9 23:47:24.789616 containerd[1986]: time="2025-07-09T23:47:24.789318394Z" level=info msg="received exit event container_id:\"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" id:\"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" pid:3851 exited_at:{seconds:1752104844 nanos:785114869}" Jul 9 23:47:24.790444 containerd[1986]: time="2025-07-09T23:47:24.790363455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" id:\"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" pid:3851 exited_at:{seconds:1752104844 nanos:785114869}" Jul 9 23:47:24.825829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:47:25.586626 containerd[1986]: time="2025-07-09T23:47:25.586237634Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:47:25.595981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc-rootfs.mount: Deactivated successfully. Jul 9 23:47:25.648227 containerd[1986]: time="2025-07-09T23:47:25.646082716Z" level=info msg="Container 43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:25.646999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056282901.mount: Deactivated successfully. Jul 9 23:47:25.684754 containerd[1986]: time="2025-07-09T23:47:25.684668134Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\"" Jul 9 23:47:25.686953 containerd[1986]: time="2025-07-09T23:47:25.686881915Z" level=info msg="StartContainer for \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\"" Jul 9 23:47:25.692644 containerd[1986]: time="2025-07-09T23:47:25.692277967Z" level=info msg="connecting to shim 43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" protocol=ttrpc version=3 Jul 9 23:47:25.766360 systemd[1]: Started cri-containerd-43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d.scope - libcontainer container 43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d. Jul 9 23:47:25.900869 containerd[1986]: time="2025-07-09T23:47:25.900725820Z" level=info msg="StartContainer for \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" returns successfully" Jul 9 23:47:25.907478 systemd[1]: cri-containerd-43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d.scope: Deactivated successfully. 
Jul 9 23:47:25.921189 containerd[1986]: time="2025-07-09T23:47:25.920921305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" id:\"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" pid:3911 exited_at:{seconds:1752104845 nanos:920309803}" Jul 9 23:47:25.921189 containerd[1986]: time="2025-07-09T23:47:25.921066900Z" level=info msg="received exit event container_id:\"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" id:\"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" pid:3911 exited_at:{seconds:1752104845 nanos:920309803}" Jul 9 23:47:26.014823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d-rootfs.mount: Deactivated successfully. Jul 9 23:47:26.222949 containerd[1986]: time="2025-07-09T23:47:26.222064127Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:26.223740 containerd[1986]: time="2025-07-09T23:47:26.223667641Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 9 23:47:26.224887 containerd[1986]: time="2025-07-09T23:47:26.224755197Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:47:26.227946 containerd[1986]: time="2025-07-09T23:47:26.227702135Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.296539227s" Jul 9 23:47:26.227946 containerd[1986]: time="2025-07-09T23:47:26.227774675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 9 23:47:26.235709 containerd[1986]: time="2025-07-09T23:47:26.235579290Z" level=info msg="CreateContainer within sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 23:47:26.249561 containerd[1986]: time="2025-07-09T23:47:26.248909830Z" level=info msg="Container 94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:26.267211 containerd[1986]: time="2025-07-09T23:47:26.267019690Z" level=info msg="CreateContainer within sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\"" Jul 9 23:47:26.268555 containerd[1986]: time="2025-07-09T23:47:26.268140386Z" level=info msg="StartContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\"" Jul 9 23:47:26.270421 containerd[1986]: time="2025-07-09T23:47:26.270337807Z" level=info 
msg="connecting to shim 94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f" address="unix:///run/containerd/s/a98f79c0d09035732382b5b2fd7847c1c71fcc5784442ea7a8d86f26944e8414" protocol=ttrpc version=3 Jul 9 23:47:26.315838 systemd[1]: Started cri-containerd-94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f.scope - libcontainer container 94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f. Jul 9 23:47:26.381420 containerd[1986]: time="2025-07-09T23:47:26.381210151Z" level=info msg="StartContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" returns successfully" Jul 9 23:47:26.633849 containerd[1986]: time="2025-07-09T23:47:26.631467840Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:47:26.663436 containerd[1986]: time="2025-07-09T23:47:26.662084192Z" level=info msg="Container 1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:26.670662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832826037.mount: Deactivated successfully. Jul 9 23:47:26.681285 kubelet[3288]: I0709 23:47:26.681160 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-g487r" podStartSLOduration=1.564788315 podStartE2EDuration="13.681137441s" podCreationTimestamp="2025-07-09 23:47:13 +0000 UTC" firstStartedPulling="2025-07-09 23:47:14.113718645 +0000 UTC m=+7.081033551" lastFinishedPulling="2025-07-09 23:47:26.230067771 +0000 UTC m=+19.197382677" observedRunningTime="2025-07-09 23:47:26.679279535 +0000 UTC m=+19.646594477" watchObservedRunningTime="2025-07-09 23:47:26.681137441 +0000 UTC m=+19.648452347" Jul 9 23:47:26.689201 containerd[1986]: time="2025-07-09T23:47:26.688996102Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\"" Jul 9 23:47:26.691598 containerd[1986]: time="2025-07-09T23:47:26.691420833Z" level=info msg="StartContainer for \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\"" Jul 9 23:47:26.695218 containerd[1986]: time="2025-07-09T23:47:26.695089714Z" level=info msg="connecting to shim 1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" protocol=ttrpc version=3 Jul 9 23:47:26.776153 systemd[1]: Started cri-containerd-1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1.scope - libcontainer container 1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1. Jul 9 23:47:26.868777 systemd[1]: cri-containerd-1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1.scope: Deactivated successfully. 
Jul 9 23:47:26.870035 containerd[1986]: time="2025-07-09T23:47:26.869974528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" id:\"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" pid:3984 exited_at:{seconds:1752104846 nanos:869407751}" Jul 9 23:47:26.878108 containerd[1986]: time="2025-07-09T23:47:26.878013159Z" level=info msg="received exit event container_id:\"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" id:\"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" pid:3984 exited_at:{seconds:1752104846 nanos:869407751}" Jul 9 23:47:26.906324 containerd[1986]: time="2025-07-09T23:47:26.906198855Z" level=info msg="StartContainer for \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" returns successfully" Jul 9 23:47:26.947430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1-rootfs.mount: Deactivated successfully. Jul 9 23:47:27.654049 containerd[1986]: time="2025-07-09T23:47:27.653982154Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:47:27.707540 containerd[1986]: time="2025-07-09T23:47:27.702827252Z" level=info msg="Container d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:27.704407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141916626.mount: Deactivated successfully. Jul 9 23:47:27.715201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660383892.mount: Deactivated successfully. Jul 9 23:47:27.766351 containerd[1986]: time="2025-07-09T23:47:27.766145377Z" level=info msg="CreateContainer within sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\"" Jul 9 23:47:27.769855 containerd[1986]: time="2025-07-09T23:47:27.769780147Z" level=info msg="StartContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\"" Jul 9 23:47:27.781006 containerd[1986]: time="2025-07-09T23:47:27.779752762Z" level=info msg="connecting to shim d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f" address="unix:///run/containerd/s/c90361c6908fd63193e5cad72bcb377572631f680e668ea984ceb4d8334774b1" protocol=ttrpc version=3 Jul 9 23:47:27.879985 systemd[1]: Started cri-containerd-d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f.scope - libcontainer container d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f. 
Jul 9 23:47:28.074020 containerd[1986]: time="2025-07-09T23:47:28.073848210Z" level=info msg="StartContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" returns successfully" Jul 9 23:47:28.419381 containerd[1986]: time="2025-07-09T23:47:28.419120088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" id:\"424406630930ce6934d0005cc2a07559b27a28bf9db81929dddfdaf15d4b55ab\" pid:4049 exited_at:{seconds:1752104848 nanos:418661450}" Jul 9 23:47:28.490232 kubelet[3288]: I0709 23:47:28.490162 3288 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 9 23:47:28.697140 systemd[1]: Created slice kubepods-burstable-pod2fdd68f8_f1b4_4cf1_b553_9e755553d90c.slice - libcontainer container kubepods-burstable-pod2fdd68f8_f1b4_4cf1_b553_9e755553d90c.slice. Jul 9 23:47:28.736019 kubelet[3288]: I0709 23:47:28.735817 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fdd68f8-f1b4-4cf1-b553-9e755553d90c-config-volume\") pod \"coredns-7c65d6cfc9-25lsd\" (UID: \"2fdd68f8-f1b4-4cf1-b553-9e755553d90c\") " pod="kube-system/coredns-7c65d6cfc9-25lsd" Jul 9 23:47:28.736529 kubelet[3288]: I0709 23:47:28.736301 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g482m\" (UniqueName: \"kubernetes.io/projected/2fdd68f8-f1b4-4cf1-b553-9e755553d90c-kube-api-access-g482m\") pod \"coredns-7c65d6cfc9-25lsd\" (UID: \"2fdd68f8-f1b4-4cf1-b553-9e755553d90c\") " pod="kube-system/coredns-7c65d6cfc9-25lsd" Jul 9 23:47:28.757054 systemd[1]: Created slice kubepods-burstable-podfe46d2ed_f55e_452e_bd26_2615a906a38d.slice - libcontainer container kubepods-burstable-podfe46d2ed_f55e_452e_bd26_2615a906a38d.slice. 
Jul 9 23:47:28.807350 kubelet[3288]: I0709 23:47:28.807256 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jlcgn" podStartSLOduration=7.932725782 podStartE2EDuration="16.807231884s" podCreationTimestamp="2025-07-09 23:47:12 +0000 UTC" firstStartedPulling="2025-07-09 23:47:14.054369166 +0000 UTC m=+7.021684060" lastFinishedPulling="2025-07-09 23:47:22.928875172 +0000 UTC m=+15.896190162" observedRunningTime="2025-07-09 23:47:28.804472461 +0000 UTC m=+21.771787391" watchObservedRunningTime="2025-07-09 23:47:28.807231884 +0000 UTC m=+21.774546778" Jul 9 23:47:28.838013 kubelet[3288]: I0709 23:47:28.837944 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe46d2ed-f55e-452e-bd26-2615a906a38d-config-volume\") pod \"coredns-7c65d6cfc9-cfvtw\" (UID: \"fe46d2ed-f55e-452e-bd26-2615a906a38d\") " pod="kube-system/coredns-7c65d6cfc9-cfvtw" Jul 9 23:47:28.838214 kubelet[3288]: I0709 23:47:28.838051 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpd5n\" (UniqueName: \"kubernetes.io/projected/fe46d2ed-f55e-452e-bd26-2615a906a38d-kube-api-access-bpd5n\") pod \"coredns-7c65d6cfc9-cfvtw\" (UID: \"fe46d2ed-f55e-452e-bd26-2615a906a38d\") " pod="kube-system/coredns-7c65d6cfc9-cfvtw" Jul 9 23:47:29.010151 containerd[1986]: time="2025-07-09T23:47:29.009861052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-25lsd,Uid:2fdd68f8-f1b4-4cf1-b553-9e755553d90c,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:29.068792 containerd[1986]: time="2025-07-09T23:47:29.068725085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfvtw,Uid:fe46d2ed-f55e-452e-bd26-2615a906a38d,Namespace:kube-system,Attempt:0,}" Jul 9 23:47:31.791171 (udev-worker)[4114]: Network interface NamePolicy= disabled on kernel command line. Jul 9 23:47:31.793473 (udev-worker)[4112]: Network interface NamePolicy= disabled on kernel command line. Jul 9 23:47:31.794961 systemd-networkd[1888]: cilium_host: Link UP Jul 9 23:47:31.795391 systemd-networkd[1888]: cilium_net: Link UP Jul 9 23:47:31.795870 systemd-networkd[1888]: cilium_host: Gained carrier Jul 9 23:47:31.796274 systemd-networkd[1888]: cilium_net: Gained carrier Jul 9 23:47:31.873737 systemd-networkd[1888]: cilium_net: Gained IPv6LL Jul 9 23:47:32.016177 systemd-networkd[1888]: cilium_vxlan: Link UP Jul 9 23:47:32.016198 systemd-networkd[1888]: cilium_vxlan: Gained carrier Jul 9 23:47:32.473958 systemd-networkd[1888]: cilium_host: Gained IPv6LL Jul 9 23:47:32.682747 kernel: NET: Registered PF_ALG protocol family Jul 9 23:47:33.818025 systemd-networkd[1888]: cilium_vxlan: Gained IPv6LL Jul 9 23:47:34.198953 (udev-worker)[4158]: Network interface NamePolicy= disabled on kernel command line. 
Jul 9 23:47:34.201382 systemd-networkd[1888]: lxc_health: Link UP Jul 9 23:47:34.219047 systemd-networkd[1888]: lxc_health: Gained carrier Jul 9 23:47:34.602584 kernel: eth0: renamed from tmpa8b6c Jul 9 23:47:34.602795 systemd-networkd[1888]: lxc805cef67d6b5: Link UP Jul 9 23:47:34.607343 systemd-networkd[1888]: lxc805cef67d6b5: Gained carrier Jul 9 23:47:34.675594 systemd-networkd[1888]: lxc25fdc5cbf2c8: Link UP Jul 9 23:47:34.694588 kernel: eth0: renamed from tmpca6c2 Jul 9 23:47:34.700604 systemd-networkd[1888]: lxc25fdc5cbf2c8: Gained carrier Jul 9 23:47:36.186844 systemd-networkd[1888]: lxc_health: Gained IPv6LL Jul 9 23:47:36.249861 systemd-networkd[1888]: lxc25fdc5cbf2c8: Gained IPv6LL Jul 9 23:47:36.378012 systemd-networkd[1888]: lxc805cef67d6b5: Gained IPv6LL Jul 9 23:47:38.444413 ntpd[1971]: Listen normally on 7 cilium_host 192.168.0.88:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 7 cilium_host 192.168.0.88:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 8 cilium_net [fe80::e495:e0ff:fe4b:6fa9%4]:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 9 cilium_host [fe80::dc4b:c2ff:fe1b:5f26%5]:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 10 cilium_vxlan [fe80::38a3:5cff:fea0:b663%6]:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 11 lxc_health [fe80::9c19:ddff:fe33:a46a%8]:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 12 lxc805cef67d6b5 [fe80::1c99:afff:fef7:51fe%10]:123 Jul 9 23:47:38.445960 ntpd[1971]: 9 Jul 23:47:38 ntpd[1971]: Listen normally on 13 lxc25fdc5cbf2c8 [fe80::a4b0:59ff:feda:a60b%12]:123 Jul 9 23:47:38.444641 ntpd[1971]: Listen normally on 8 cilium_net [fe80::e495:e0ff:fe4b:6fa9%4]:123 Jul 9 23:47:38.444737 ntpd[1971]: Listen normally on 9 cilium_host [fe80::dc4b:c2ff:fe1b:5f26%5]:123 Jul 9 23:47:38.444804 ntpd[1971]: Listen normally on 10 cilium_vxlan [fe80::38a3:5cff:fea0:b663%6]:123 Jul 9 23:47:38.444871 ntpd[1971]: Listen normally on 11 lxc_health [fe80::9c19:ddff:fe33:a46a%8]:123 Jul 9 23:47:38.444938 ntpd[1971]: Listen normally on 12 lxc805cef67d6b5 [fe80::1c99:afff:fef7:51fe%10]:123 Jul 9 23:47:38.445041 ntpd[1971]: Listen normally on 13 lxc25fdc5cbf2c8 [fe80::a4b0:59ff:feda:a60b%12]:123 Jul 9 23:47:44.257552 containerd[1986]: time="2025-07-09T23:47:44.257288506Z" level=info msg="connecting to shim ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f" address="unix:///run/containerd/s/b0dd87c3d426c47584cbcde575092816c638f0bfc7c1e78782c688e009876df3" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:44.348287 containerd[1986]: time="2025-07-09T23:47:44.348152003Z" level=info msg="connecting to shim a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77" address="unix:///run/containerd/s/5c56b4198fa4991c32f69917b74ffacbe230e3a5d5f37d4e0896209de63c0f4c" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:47:44.366999 systemd[1]: Started cri-containerd-ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f.scope - libcontainer container ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f. Jul 9 23:47:44.428646 systemd[1]: Started cri-containerd-a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77.scope - libcontainer container a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77. 
Jul 9 23:47:44.559860 containerd[1986]: time="2025-07-09T23:47:44.559098398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cfvtw,Uid:fe46d2ed-f55e-452e-bd26-2615a906a38d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f\"" Jul 9 23:47:44.575854 containerd[1986]: time="2025-07-09T23:47:44.575797632Z" level=info msg="CreateContainer within sandbox \"ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:47:44.607170 containerd[1986]: time="2025-07-09T23:47:44.606115934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-25lsd,Uid:2fdd68f8-f1b4-4cf1-b553-9e755553d90c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77\"" Jul 9 23:47:44.608612 containerd[1986]: time="2025-07-09T23:47:44.607597157Z" level=info msg="Container 91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:44.617194 containerd[1986]: time="2025-07-09T23:47:44.617103397Z" level=info msg="CreateContainer within sandbox \"a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 23:47:44.641541 containerd[1986]: time="2025-07-09T23:47:44.641260501Z" level=info msg="CreateContainer within sandbox \"ca6c2d48a076513303a0921279feb1c9a8913be01be566e86e9352f340f8840f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe\"" Jul 9 23:47:44.643561 containerd[1986]: time="2025-07-09T23:47:44.642462959Z" level=info msg="StartContainer for \"91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe\"" Jul 9 23:47:44.644632 containerd[1986]: time="2025-07-09T23:47:44.644008578Z" level=info msg="Container 9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:47:44.644906 containerd[1986]: time="2025-07-09T23:47:44.644774119Z" level=info msg="connecting to shim 91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe" address="unix:///run/containerd/s/b0dd87c3d426c47584cbcde575092816c638f0bfc7c1e78782c688e009876df3" protocol=ttrpc version=3 Jul 9 23:47:44.667526 containerd[1986]: time="2025-07-09T23:47:44.665987331Z" level=info msg="CreateContainer within sandbox \"a8b6c7922607e33e219f82a960f00c7364158c619928e073227b79f3ce8b1b77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1\"" Jul 9 23:47:44.670614 containerd[1986]: time="2025-07-09T23:47:44.670533345Z" level=info msg="StartContainer for \"9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1\"" Jul 9 23:47:44.676408 containerd[1986]: time="2025-07-09T23:47:44.676280294Z" level=info msg="connecting to shim 9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1" address="unix:///run/containerd/s/5c56b4198fa4991c32f69917b74ffacbe230e3a5d5f37d4e0896209de63c0f4c" protocol=ttrpc version=3 Jul 9 23:47:44.724252 systemd[1]: Started cri-containerd-91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe.scope - libcontainer container 91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe. 
Jul 9 23:47:44.746167 systemd[1]: Started cri-containerd-9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1.scope - libcontainer container 9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1. Jul 9 23:47:44.839349 containerd[1986]: time="2025-07-09T23:47:44.839147888Z" level=info msg="StartContainer for \"91aab0d237f11bb1c6e3e8bca925d5e0049f600bec11342daa5130f5ba512efe\" returns successfully" Jul 9 23:47:44.869217 containerd[1986]: time="2025-07-09T23:47:44.869046591Z" level=info msg="StartContainer for \"9f1553a67acb2aa64b2a52f478693f71343529de6f63b8a5a99e5c44cf94d7b1\" returns successfully" Jul 9 23:47:45.817634 kubelet[3288]: I0709 23:47:45.817456 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-25lsd" podStartSLOduration=32.817394687 podStartE2EDuration="32.817394687s" podCreationTimestamp="2025-07-09 23:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:45.817117805 +0000 UTC m=+38.784432723" watchObservedRunningTime="2025-07-09 23:47:45.817394687 +0000 UTC m=+38.784709689" Jul 9 23:47:48.614363 systemd[1]: Started sshd@7-172.31.27.216:22-139.178.89.65:37430.service - OpenSSH per-connection server daemon (139.178.89.65:37430). Jul 9 23:47:48.814577 sshd[4696]: Accepted publickey for core from 139.178.89.65 port 37430 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:47:48.817400 sshd-session[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:47:48.826634 systemd-logind[1977]: New session 8 of user core. Jul 9 23:47:48.835822 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 23:47:49.139990 sshd[4698]: Connection closed by 139.178.89.65 port 37430 Jul 9 23:47:49.140485 sshd-session[4696]: pam_unix(sshd:session): session closed for user core Jul 9 23:47:49.149407 systemd[1]: sshd@7-172.31.27.216:22-139.178.89.65:37430.service: Deactivated successfully. Jul 9 23:47:49.154851 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 23:47:49.158441 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. Jul 9 23:47:49.163025 systemd-logind[1977]: Removed session 8. Jul 9 23:47:54.179952 systemd[1]: Started sshd@8-172.31.27.216:22-139.178.89.65:58720.service - OpenSSH per-connection server daemon (139.178.89.65:58720). Jul 9 23:47:54.374935 sshd[4710]: Accepted publickey for core from 139.178.89.65 port 58720 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:47:54.377589 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:47:54.386155 systemd-logind[1977]: New session 9 of user core. Jul 9 23:47:54.394804 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 23:47:54.636735 sshd[4712]: Connection closed by 139.178.89.65 port 58720 Jul 9 23:47:54.637600 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Jul 9 23:47:54.644743 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit. Jul 9 23:47:54.645910 systemd[1]: sshd@8-172.31.27.216:22-139.178.89.65:58720.service: Deactivated successfully. Jul 9 23:47:54.650759 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 23:47:54.658426 systemd-logind[1977]: Removed session 9. 
Jul 9 23:47:59.677277 systemd[1]: Started sshd@9-172.31.27.216:22-139.178.89.65:46246.service - OpenSSH per-connection server daemon (139.178.89.65:46246). Jul 9 23:47:59.872702 sshd[4726]: Accepted publickey for core from 139.178.89.65 port 46246 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:47:59.874573 sshd-session[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:47:59.883229 systemd-logind[1977]: New session 10 of user core. Jul 9 23:47:59.893862 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 23:48:00.170099 sshd[4728]: Connection closed by 139.178.89.65 port 46246 Jul 9 23:48:00.170609 sshd-session[4726]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:00.178760 systemd[1]: sshd@9-172.31.27.216:22-139.178.89.65:46246.service: Deactivated successfully. Jul 9 23:48:00.182382 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 23:48:00.186159 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. Jul 9 23:48:00.189119 systemd-logind[1977]: Removed session 10. Jul 9 23:48:05.206483 systemd[1]: Started sshd@10-172.31.27.216:22-139.178.89.65:46262.service - OpenSSH per-connection server daemon (139.178.89.65:46262). Jul 9 23:48:05.402749 sshd[4741]: Accepted publickey for core from 139.178.89.65 port 46262 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:05.405453 sshd-session[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:05.414171 systemd-logind[1977]: New session 11 of user core. Jul 9 23:48:05.425817 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 23:48:05.675280 sshd[4743]: Connection closed by 139.178.89.65 port 46262 Jul 9 23:48:05.676545 sshd-session[4741]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:05.681912 systemd[1]: sshd@10-172.31.27.216:22-139.178.89.65:46262.service: Deactivated successfully. Jul 9 23:48:05.686542 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 23:48:05.692465 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. Jul 9 23:48:05.696873 systemd-logind[1977]: Removed session 11. Jul 9 23:48:05.713202 systemd[1]: Started sshd@11-172.31.27.216:22-139.178.89.65:46278.service - OpenSSH per-connection server daemon (139.178.89.65:46278). Jul 9 23:48:05.915196 sshd[4756]: Accepted publickey for core from 139.178.89.65 port 46278 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:05.918434 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:05.928612 systemd-logind[1977]: New session 12 of user core. Jul 9 23:48:05.934805 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 23:48:06.267332 sshd[4758]: Connection closed by 139.178.89.65 port 46278 Jul 9 23:48:06.269824 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:06.280801 systemd[1]: sshd@11-172.31.27.216:22-139.178.89.65:46278.service: Deactivated successfully. Jul 9 23:48:06.292673 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 23:48:06.297121 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. Jul 9 23:48:06.327045 systemd[1]: Started sshd@12-172.31.27.216:22-139.178.89.65:46290.service - OpenSSH per-connection server daemon (139.178.89.65:46290). Jul 9 23:48:06.330461 systemd-logind[1977]: Removed session 12. 
Jul 9 23:48:06.529593 sshd[4769]: Accepted publickey for core from 139.178.89.65 port 46290 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:06.531809 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:06.539913 systemd-logind[1977]: New session 13 of user core. Jul 9 23:48:06.549780 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 23:48:06.817210 sshd[4771]: Connection closed by 139.178.89.65 port 46290 Jul 9 23:48:06.818286 sshd-session[4769]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:06.826272 systemd[1]: sshd@12-172.31.27.216:22-139.178.89.65:46290.service: Deactivated successfully. Jul 9 23:48:06.831490 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 23:48:06.835596 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit. Jul 9 23:48:06.838226 systemd-logind[1977]: Removed session 13. Jul 9 23:48:11.857708 systemd[1]: Started sshd@13-172.31.27.216:22-139.178.89.65:40790.service - OpenSSH per-connection server daemon (139.178.89.65:40790). Jul 9 23:48:12.055220 sshd[4786]: Accepted publickey for core from 139.178.89.65 port 40790 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:12.058304 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:12.069146 systemd-logind[1977]: New session 14 of user core. Jul 9 23:48:12.082832 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 23:48:12.332956 sshd[4788]: Connection closed by 139.178.89.65 port 40790 Jul 9 23:48:12.334170 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:12.341997 systemd[1]: sshd@13-172.31.27.216:22-139.178.89.65:40790.service: Deactivated successfully. Jul 9 23:48:12.347268 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 23:48:12.351704 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit. Jul 9 23:48:12.354465 systemd-logind[1977]: Removed session 14. Jul 9 23:48:17.370592 systemd[1]: Started sshd@14-172.31.27.216:22-139.178.89.65:40800.service - OpenSSH per-connection server daemon (139.178.89.65:40800). Jul 9 23:48:17.576544 sshd[4803]: Accepted publickey for core from 139.178.89.65 port 40800 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:17.581284 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:17.597964 systemd-logind[1977]: New session 15 of user core. Jul 9 23:48:17.604839 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 9 23:48:17.849571 sshd[4805]: Connection closed by 139.178.89.65 port 40800 Jul 9 23:48:17.849954 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:17.857836 systemd[1]: sshd@14-172.31.27.216:22-139.178.89.65:40800.service: Deactivated successfully. Jul 9 23:48:17.862290 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 23:48:17.864395 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit. Jul 9 23:48:17.868437 systemd-logind[1977]: Removed session 15. Jul 9 23:48:22.888347 systemd[1]: Started sshd@15-172.31.27.216:22-139.178.89.65:45628.service - OpenSSH per-connection server daemon (139.178.89.65:45628). 
Jul 9 23:48:23.095150 sshd[4817]: Accepted publickey for core from 139.178.89.65 port 45628 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:23.097989 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:23.107197 systemd-logind[1977]: New session 16 of user core. Jul 9 23:48:23.115794 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 23:48:23.367490 sshd[4819]: Connection closed by 139.178.89.65 port 45628 Jul 9 23:48:23.367148 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:23.380490 systemd[1]: sshd@15-172.31.27.216:22-139.178.89.65:45628.service: Deactivated successfully. Jul 9 23:48:23.390367 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 23:48:23.395926 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit. Jul 9 23:48:23.416693 systemd[1]: Started sshd@16-172.31.27.216:22-139.178.89.65:45630.service - OpenSSH per-connection server daemon (139.178.89.65:45630). Jul 9 23:48:23.420807 systemd-logind[1977]: Removed session 16. Jul 9 23:48:23.616667 sshd[4831]: Accepted publickey for core from 139.178.89.65 port 45630 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:23.619213 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:23.628224 systemd-logind[1977]: New session 17 of user core. Jul 9 23:48:23.635798 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 23:48:23.964812 sshd[4833]: Connection closed by 139.178.89.65 port 45630 Jul 9 23:48:23.965656 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:23.972317 systemd[1]: sshd@16-172.31.27.216:22-139.178.89.65:45630.service: Deactivated successfully. Jul 9 23:48:23.979369 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 23:48:23.981833 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit. Jul 9 23:48:23.985726 systemd-logind[1977]: Removed session 17. Jul 9 23:48:24.002554 systemd[1]: Started sshd@17-172.31.27.216:22-139.178.89.65:45644.service - OpenSSH per-connection server daemon (139.178.89.65:45644). Jul 9 23:48:24.201420 sshd[4842]: Accepted publickey for core from 139.178.89.65 port 45644 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:24.204164 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:24.214606 systemd-logind[1977]: New session 18 of user core. Jul 9 23:48:24.220785 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 23:48:26.943080 sshd[4844]: Connection closed by 139.178.89.65 port 45644 Jul 9 23:48:26.944205 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:26.955092 systemd[1]: sshd@17-172.31.27.216:22-139.178.89.65:45644.service: Deactivated successfully. Jul 9 23:48:26.966892 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 23:48:26.969672 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit. Jul 9 23:48:26.996013 systemd[1]: Started sshd@18-172.31.27.216:22-139.178.89.65:45654.service - OpenSSH per-connection server daemon (139.178.89.65:45654). Jul 9 23:48:27.000033 systemd-logind[1977]: Removed session 18. 
Jul 9 23:48:27.198748 sshd[4861]: Accepted publickey for core from 139.178.89.65 port 45654 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:27.201145 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:27.212102 systemd-logind[1977]: New session 19 of user core. Jul 9 23:48:27.222794 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 23:48:27.778747 sshd[4863]: Connection closed by 139.178.89.65 port 45654 Jul 9 23:48:27.779601 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:27.789397 systemd[1]: sshd@18-172.31.27.216:22-139.178.89.65:45654.service: Deactivated successfully. Jul 9 23:48:27.794488 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 23:48:27.797105 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit. Jul 9 23:48:27.801840 systemd-logind[1977]: Removed session 19. Jul 9 23:48:27.818250 systemd[1]: Started sshd@19-172.31.27.216:22-139.178.89.65:45666.service - OpenSSH per-connection server daemon (139.178.89.65:45666). Jul 9 23:48:28.014969 sshd[4873]: Accepted publickey for core from 139.178.89.65 port 45666 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:28.018054 sshd-session[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:28.026389 systemd-logind[1977]: New session 20 of user core. Jul 9 23:48:28.033814 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 23:48:28.276084 sshd[4875]: Connection closed by 139.178.89.65 port 45666 Jul 9 23:48:28.277027 sshd-session[4873]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:28.284415 systemd[1]: sshd@19-172.31.27.216:22-139.178.89.65:45666.service: Deactivated successfully. Jul 9 23:48:28.290107 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 23:48:28.292571 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit. Jul 9 23:48:28.295809 systemd-logind[1977]: Removed session 20. Jul 9 23:48:33.318802 systemd[1]: Started sshd@20-172.31.27.216:22-139.178.89.65:56446.service - OpenSSH per-connection server daemon (139.178.89.65:56446). Jul 9 23:48:33.530899 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 56446 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:33.534044 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:33.544387 systemd-logind[1977]: New session 21 of user core. Jul 9 23:48:33.553799 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 23:48:33.807566 sshd[4891]: Connection closed by 139.178.89.65 port 56446 Jul 9 23:48:33.808444 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:33.816404 systemd[1]: sshd@20-172.31.27.216:22-139.178.89.65:56446.service: Deactivated successfully. Jul 9 23:48:33.823642 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 23:48:33.826116 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit. Jul 9 23:48:33.830765 systemd-logind[1977]: Removed session 21. Jul 9 23:48:38.847214 systemd[1]: Started sshd@21-172.31.27.216:22-139.178.89.65:56450.service - OpenSSH per-connection server daemon (139.178.89.65:56450). 
Jul 9 23:48:39.042852 sshd[4905]: Accepted publickey for core from 139.178.89.65 port 56450 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:39.045365 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:39.054421 systemd-logind[1977]: New session 22 of user core. Jul 9 23:48:39.065880 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 9 23:48:39.314378 sshd[4907]: Connection closed by 139.178.89.65 port 56450 Jul 9 23:48:39.315326 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:39.322867 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:48:39.323287 systemd[1]: sshd@21-172.31.27.216:22-139.178.89.65:56450.service: Deactivated successfully. Jul 9 23:48:39.329158 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:48:39.336263 systemd-logind[1977]: Removed session 22. Jul 9 23:48:44.350329 systemd[1]: Started sshd@22-172.31.27.216:22-139.178.89.65:51526.service - OpenSSH per-connection server daemon (139.178.89.65:51526). Jul 9 23:48:44.546324 sshd[4921]: Accepted publickey for core from 139.178.89.65 port 51526 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:44.549022 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:44.558778 systemd-logind[1977]: New session 23 of user core. Jul 9 23:48:44.567837 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 23:48:44.813310 sshd[4924]: Connection closed by 139.178.89.65 port 51526 Jul 9 23:48:44.814242 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:44.823738 systemd[1]: sshd@22-172.31.27.216:22-139.178.89.65:51526.service: Deactivated successfully. Jul 9 23:48:44.828307 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 23:48:44.833594 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit. Jul 9 23:48:44.838668 systemd-logind[1977]: Removed session 23. Jul 9 23:48:49.850858 systemd[1]: Started sshd@23-172.31.27.216:22-139.178.89.65:49706.service - OpenSSH per-connection server daemon (139.178.89.65:49706). Jul 9 23:48:50.052028 sshd[4938]: Accepted publickey for core from 139.178.89.65 port 49706 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:50.055239 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:50.064700 systemd-logind[1977]: New session 24 of user core. Jul 9 23:48:50.073797 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 23:48:50.322636 sshd[4940]: Connection closed by 139.178.89.65 port 49706 Jul 9 23:48:50.323801 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:50.332274 systemd[1]: sshd@23-172.31.27.216:22-139.178.89.65:49706.service: Deactivated successfully. Jul 9 23:48:50.336814 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 23:48:50.339625 systemd-logind[1977]: Session 24 logged out. Waiting for processes to exit. Jul 9 23:48:50.358067 systemd-logind[1977]: Removed session 24. Jul 9 23:48:50.362078 systemd[1]: Started sshd@24-172.31.27.216:22-139.178.89.65:49712.service - OpenSSH per-connection server daemon (139.178.89.65:49712). 
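The sessions above (16 through 24) all follow the same shape: sshd accepts a publickey login from 139.178.89.65, systemd starts session-N.scope, the connection closes, the scope is deactivated and the per-connection sshd@... service is cleaned up. The sketch below pairs the "Accepted publickey ... port P" and "Connection closed by ... port P" messages to estimate how long each connection stayed open; the regexes and the assumed year are mine, and one connection per (ip, port) pair is assumed, which holds for this excerpt.

# Sketch: pair sshd accept/close messages by client port to estimate how long
# each connection stayed open. Assumes one connection per (ip, port) pair;
# the year 2025 is assumed because the journal short timestamps carry none.
import re
from datetime import datetime

ACCEPT = re.compile(
    r"^(?P<ts>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) sshd\[\d+\]: "
    r"Accepted publickey .* from (?P<ip>[\d.]+) port (?P<port>\d+)"
)
CLOSE = re.compile(
    r"^(?P<ts>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) sshd\[\d+\]: "
    r"Connection closed by (?P<ip>[\d.]+) port (?P<port>\d+)"
)

def stamp(ts: str, year: int = 2025) -> datetime:
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

def session_durations(lines):
    opened, durations = {}, {}
    for line in lines:
        if m := ACCEPT.match(line):
            opened[(m["ip"], m["port"])] = stamp(m["ts"])
        elif m := CLOSE.match(line):
            start = opened.pop((m["ip"], m["port"]), None)
            if start is not None:
                durations[(m["ip"], m["port"])] = stamp(m["ts"]) - start
    return durations

print(session_durations([
    "Jul 9 23:48:24.201420 sshd[4842]: Accepted publickey for core from 139.178.89.65 port 45644 ssh2",
    "Jul 9 23:48:26.943080 sshd[4844]: Connection closed by 139.178.89.65 port 45644",
]))
# {('139.178.89.65', '45644'): datetime.timedelta(seconds=2, microseconds=741660)}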
Jul 9 23:48:50.571745 sshd[4951]: Accepted publickey for core from 139.178.89.65 port 49712 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:50.574293 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:50.583446 systemd-logind[1977]: New session 25 of user core. Jul 9 23:48:50.595916 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 9 23:48:53.435222 kubelet[3288]: I0709 23:48:53.435119 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cfvtw" podStartSLOduration=100.435097576 podStartE2EDuration="1m40.435097576s" podCreationTimestamp="2025-07-09 23:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:47:45.927173478 +0000 UTC m=+38.894488384" watchObservedRunningTime="2025-07-09 23:48:53.435097576 +0000 UTC m=+106.402412482" Jul 9 23:48:53.488755 containerd[1986]: time="2025-07-09T23:48:53.487351188Z" level=info msg="StopContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" with timeout 30 (s)" Jul 9 23:48:53.492963 containerd[1986]: time="2025-07-09T23:48:53.492624002Z" level=info msg="Stop container \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" with signal terminated" Jul 9 23:48:53.514356 containerd[1986]: time="2025-07-09T23:48:53.514301778Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 23:48:53.522968 systemd[1]: cri-containerd-94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f.scope: Deactivated successfully. 
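containerd's own messages inside these journal lines are logfmt-style (time="...", level=..., msg="...", sometimes error="..."); the level=error entry above records the CNI config file being removed while the cilium pod is stopped. A small filtering sketch follows; the simplified quote handling covers only what appears in this excerpt.

# Sketch: pull level= and msg= out of containerd's logfmt-style entries so
# that error-level messages (such as the "failed to reload cni configuration"
# line above) can be surfaced. Quote handling is simplified to this excerpt.
import re

FIELD = re.compile(r'level=(?P<level>\w+) msg="(?P<msg>(?:[^"\\]|\\.)*)"')

def containerd_errors(lines):
    out = []
    for line in lines:
        m = FIELD.search(line)
        if m and m["level"] == "error":
            out.append(m["msg"].replace('\\"', '"'))
    return out

sample = (
    'Jul 9 23:48:53.514356 containerd[1986]: time="2025-07-09T23:48:53.514301778Z" '
    'level=error msg="failed to reload cni configuration after receiving fs change '
    'event(REMOVE \\"/etc/cni/net.d/05-cilium.conf\\")" error="cni config load failed: '
    'no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"'
)
print(containerd_errors([sample]))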
Jul 9 23:48:53.525695 containerd[1986]: time="2025-07-09T23:48:53.523978501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" id:\"c5d1bfe591818f2378335dc2aa22ad9ae293f46d35f9f7462f15b08fdab3e056\" pid:4972 exited_at:{seconds:1752104933 nanos:522990495}" Jul 9 23:48:53.530703 containerd[1986]: time="2025-07-09T23:48:53.530642500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" id:\"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" pid:3952 exited_at:{seconds:1752104933 nanos:529830614}" Jul 9 23:48:53.530830 containerd[1986]: time="2025-07-09T23:48:53.530712149Z" level=info msg="received exit event container_id:\"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" id:\"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" pid:3952 exited_at:{seconds:1752104933 nanos:529830614}" Jul 9 23:48:53.533682 containerd[1986]: time="2025-07-09T23:48:53.533319560Z" level=info msg="StopContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" with timeout 2 (s)" Jul 9 23:48:53.534657 containerd[1986]: time="2025-07-09T23:48:53.534613797Z" level=info msg="Stop container \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" with signal terminated" Jul 9 23:48:53.554945 systemd-networkd[1888]: lxc_health: Link DOWN Jul 9 23:48:53.554965 systemd-networkd[1888]: lxc_health: Lost carrier Jul 9 23:48:53.591765 systemd[1]: cri-containerd-d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f.scope: Deactivated successfully. Jul 9 23:48:53.593114 systemd[1]: cri-containerd-d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f.scope: Consumed 16.255s CPU time, 125.4M memory peak, 128K read from disk, 12.9M written to disk. Jul 9 23:48:53.604527 containerd[1986]: time="2025-07-09T23:48:53.604445840Z" level=info msg="received exit event container_id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" pid:4023 exited_at:{seconds:1752104933 nanos:603804688}" Jul 9 23:48:53.605946 containerd[1986]: time="2025-07-09T23:48:53.605593965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" id:\"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" pid:4023 exited_at:{seconds:1752104933 nanos:603804688}" Jul 9 23:48:53.613846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f-rootfs.mount: Deactivated successfully. 
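The TaskExit and exit events above carry exited_at as protobuf-style {seconds, nanos} epoch values; seconds:1752104933 is the same instant as the 23:48:53 UTC journal timestamp on those lines. A short conversion sketch, illustrative only:

# Sketch: render the exited_at:{seconds, nanos} values from the TaskExit /
# exit events above as UTC timestamps. seconds:1752104933 nanos:529830614
# lands within a microsecond of 2025-07-09 23:48:53.529831+00:00, matching
# the journal time on the same line.
from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

print(exited_at_to_utc(1752104933, 529830614))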
Jul 9 23:48:53.644648 containerd[1986]: time="2025-07-09T23:48:53.644577344Z" level=info msg="StopContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" returns successfully" Jul 9 23:48:53.646180 containerd[1986]: time="2025-07-09T23:48:53.645959017Z" level=info msg="StopPodSandbox for \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\"" Jul 9 23:48:53.646180 containerd[1986]: time="2025-07-09T23:48:53.646062597Z" level=info msg="Container to stop \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.666101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f-rootfs.mount: Deactivated successfully. Jul 9 23:48:53.678451 systemd[1]: cri-containerd-c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704.scope: Deactivated successfully. Jul 9 23:48:53.687047 containerd[1986]: time="2025-07-09T23:48:53.686882341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" id:\"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" pid:3539 exit_status:137 exited_at:{seconds:1752104933 nanos:686045748}" Jul 9 23:48:53.694946 containerd[1986]: time="2025-07-09T23:48:53.694869997Z" level=info msg="StopContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" returns successfully" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695697367Z" level=info msg="StopPodSandbox for \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\"" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695795634Z" level=info msg="Container to stop \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695822381Z" level=info msg="Container to stop \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695848636Z" level=info msg="Container to stop \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695872264Z" level=info msg="Container to stop \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.696317 containerd[1986]: time="2025-07-09T23:48:53.695893121Z" level=info msg="Container to stop \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 23:48:53.708875 systemd[1]: cri-containerd-3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c.scope: Deactivated successfully. Jul 9 23:48:53.768189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704-rootfs.mount: Deactivated successfully. Jul 9 23:48:53.776308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c-rootfs.mount: Deactivated successfully. 
Jul 9 23:48:53.778526 containerd[1986]: time="2025-07-09T23:48:53.778433503Z" level=info msg="shim disconnected" id=c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704 namespace=k8s.io Jul 9 23:48:53.778697 containerd[1986]: time="2025-07-09T23:48:53.778530186Z" level=warning msg="cleaning up after shim disconnected" id=c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704 namespace=k8s.io Jul 9 23:48:53.779522 containerd[1986]: time="2025-07-09T23:48:53.779069461Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:48:53.779522 containerd[1986]: time="2025-07-09T23:48:53.779204633Z" level=info msg="shim disconnected" id=3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c namespace=k8s.io Jul 9 23:48:53.781849 containerd[1986]: time="2025-07-09T23:48:53.779460357Z" level=warning msg="cleaning up after shim disconnected" id=3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c namespace=k8s.io Jul 9 23:48:53.782999 containerd[1986]: time="2025-07-09T23:48:53.781839583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 23:48:53.807946 containerd[1986]: time="2025-07-09T23:48:53.807866635Z" level=info msg="received exit event sandbox_id:\"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" exit_status:137 exited_at:{seconds:1752104933 nanos:686045748}" Jul 9 23:48:53.808433 containerd[1986]: time="2025-07-09T23:48:53.808382821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" id:\"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" pid:3531 exit_status:137 exited_at:{seconds:1752104933 nanos:712305195}" Jul 9 23:48:53.809941 containerd[1986]: time="2025-07-09T23:48:53.809892745Z" level=info msg="received exit event sandbox_id:\"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" exit_status:137 exited_at:{seconds:1752104933 nanos:712305195}" Jul 9 23:48:53.811925 containerd[1986]: time="2025-07-09T23:48:53.810526676Z" level=info msg="TearDown network for sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" successfully" Jul 9 23:48:53.811925 containerd[1986]: time="2025-07-09T23:48:53.811732481Z" level=info msg="StopPodSandbox for \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" returns successfully" Jul 9 23:48:53.814861 containerd[1986]: time="2025-07-09T23:48:53.814771509Z" level=info msg="TearDown network for sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" successfully" Jul 9 23:48:53.814861 containerd[1986]: time="2025-07-09T23:48:53.814848954Z" level=info msg="StopPodSandbox for \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" returns successfully" Jul 9 23:48:53.815082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704-shm.mount: Deactivated successfully. 
Jul 9 23:48:53.928715 kubelet[3288]: I0709 23:48:53.928649 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-xtables-lock\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.928908 kubelet[3288]: I0709 23:48:53.928739 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-cgroup\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.928908 kubelet[3288]: I0709 23:48:53.928810 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-hubble-tls\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.928908 kubelet[3288]: I0709 23:48:53.928857 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-cilium-config-path\") pod \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\" (UID: \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\") " Jul 9 23:48:53.929065 kubelet[3288]: I0709 23:48:53.928919 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-net\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929065 kubelet[3288]: I0709 23:48:53.928953 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-run\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929065 kubelet[3288]: I0709 23:48:53.929018 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rppsg\" (UniqueName: \"kubernetes.io/projected/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-kube-api-access-rppsg\") pod \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\" (UID: \"2bcd70c9-43d5-4e77-b127-0ea4b49f865c\") " Jul 9 23:48:53.929230 kubelet[3288]: I0709 23:48:53.929076 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-etc-cni-netd\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929230 kubelet[3288]: I0709 23:48:53.929111 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-hostproc\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929230 kubelet[3288]: I0709 23:48:53.929182 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169680eb-0ab6-4f2b-92d3-ed15f994deed-clustermesh-secrets\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929384 kubelet[3288]: I0709 
23:48:53.929246 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpkn\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929384 kubelet[3288]: I0709 23:48:53.929288 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-lib-modules\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929384 kubelet[3288]: I0709 23:48:53.929344 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cni-path\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929572 kubelet[3288]: I0709 23:48:53.929387 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-config-path\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929572 kubelet[3288]: I0709 23:48:53.929448 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-kernel\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929572 kubelet[3288]: I0709 23:48:53.929482 3288 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-bpf-maps\") pod \"169680eb-0ab6-4f2b-92d3-ed15f994deed\" (UID: \"169680eb-0ab6-4f2b-92d3-ed15f994deed\") " Jul 9 23:48:53.929731 kubelet[3288]: I0709 23:48:53.929642 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.929787 kubelet[3288]: I0709 23:48:53.929723 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.929842 kubelet[3288]: I0709 23:48:53.929782 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.931541 kubelet[3288]: I0709 23:48:53.929942 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-hostproc" (OuterVolumeSpecName: "hostproc") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.935091 kubelet[3288]: I0709 23:48:53.934970 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.935216 kubelet[3288]: I0709 23:48:53.935121 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.936819 kubelet[3288]: I0709 23:48:53.936768 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2bcd70c9-43d5-4e77-b127-0ea4b49f865c" (UID: "2bcd70c9-43d5-4e77-b127-0ea4b49f865c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 23:48:53.939892 kubelet[3288]: I0709 23:48:53.937152 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.940367 kubelet[3288]: I0709 23:48:53.937220 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cni-path" (OuterVolumeSpecName: "cni-path") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.940367 kubelet[3288]: I0709 23:48:53.937248 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.940367 kubelet[3288]: I0709 23:48:53.937694 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 9 23:48:53.940367 kubelet[3288]: I0709 23:48:53.940202 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:53.947015 kubelet[3288]: I0709 23:48:53.946336 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169680eb-0ab6-4f2b-92d3-ed15f994deed-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 9 23:48:53.949866 kubelet[3288]: I0709 23:48:53.949814 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn" (OuterVolumeSpecName: "kube-api-access-vvpkn") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "kube-api-access-vvpkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:53.950225 kubelet[3288]: I0709 23:48:53.950163 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-kube-api-access-rppsg" (OuterVolumeSpecName: "kube-api-access-rppsg") pod "2bcd70c9-43d5-4e77-b127-0ea4b49f865c" (UID: "2bcd70c9-43d5-4e77-b127-0ea4b49f865c"). InnerVolumeSpecName "kube-api-access-rppsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 9 23:48:53.950469 kubelet[3288]: I0709 23:48:53.950419 3288 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "169680eb-0ab6-4f2b-92d3-ed15f994deed" (UID: "169680eb-0ab6-4f2b-92d3-ed15f994deed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 9 23:48:53.988193 kubelet[3288]: I0709 23:48:53.988021 3288 scope.go:117] "RemoveContainer" containerID="94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f" Jul 9 23:48:54.013464 containerd[1986]: time="2025-07-09T23:48:54.013085067Z" level=info msg="RemoveContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\"" Jul 9 23:48:54.015899 systemd[1]: Removed slice kubepods-besteffort-pod2bcd70c9_43d5_4e77_b127_0ea4b49f865c.slice - libcontainer container kubepods-besteffort-pod2bcd70c9_43d5_4e77_b127_0ea4b49f865c.slice. 
Jul 9 23:48:54.028511 containerd[1986]: time="2025-07-09T23:48:54.028364344Z" level=info msg="RemoveContainer for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" returns successfully" Jul 9 23:48:54.030229 kubelet[3288]: I0709 23:48:54.030165 3288 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-cilium-config-path\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030229 kubelet[3288]: I0709 23:48:54.030227 3288 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-run\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030255 3288 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rppsg\" (UniqueName: \"kubernetes.io/projected/2bcd70c9-43d5-4e77-b127-0ea4b49f865c-kube-api-access-rppsg\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030280 3288 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-net\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030303 3288 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-etc-cni-netd\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030324 3288 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-hostproc\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030344 3288 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-lib-modules\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030363 3288 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cni-path\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030384 3288 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-config-path\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.030444 kubelet[3288]: I0709 23:48:54.030404 3288 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/169680eb-0ab6-4f2b-92d3-ed15f994deed-clustermesh-secrets\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030423 3288 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpkn\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-kube-api-access-vvpkn\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030443 3288 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-host-proc-sys-kernel\") on node \"ip-172-31-27-216\" DevicePath 
\"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030464 3288 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-bpf-maps\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030482 3288 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-cilium-cgroup\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030546 3288 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/169680eb-0ab6-4f2b-92d3-ed15f994deed-hubble-tls\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030576 3288 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/169680eb-0ab6-4f2b-92d3-ed15f994deed-xtables-lock\") on node \"ip-172-31-27-216\" DevicePath \"\"" Jul 9 23:48:54.033110 kubelet[3288]: I0709 23:48:54.030977 3288 scope.go:117] "RemoveContainer" containerID="94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f" Jul 9 23:48:54.033110 kubelet[3288]: E0709 23:48:54.032194 3288 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\": not found" containerID="94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f" Jul 9 23:48:54.035281 containerd[1986]: time="2025-07-09T23:48:54.031961512Z" level=error msg="ContainerStatus for \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\": not found" Jul 9 23:48:54.036005 kubelet[3288]: I0709 23:48:54.032245 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f"} err="failed to get container status \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"94d5e53b1abfd4cc2dbf1e715a243f4e221b28e05a8b41a9f1f6a9109f06df8f\": not found" Jul 9 23:48:54.036005 kubelet[3288]: I0709 23:48:54.032369 3288 scope.go:117] "RemoveContainer" containerID="d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f" Jul 9 23:48:54.038811 containerd[1986]: time="2025-07-09T23:48:54.038693169Z" level=info msg="RemoveContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\"" Jul 9 23:48:54.052575 containerd[1986]: time="2025-07-09T23:48:54.052212182Z" level=info msg="RemoveContainer for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" returns successfully" Jul 9 23:48:54.053841 systemd[1]: Removed slice kubepods-burstable-pod169680eb_0ab6_4f2b_92d3_ed15f994deed.slice - libcontainer container kubepods-burstable-pod169680eb_0ab6_4f2b_92d3_ed15f994deed.slice. Jul 9 23:48:54.054093 systemd[1]: kubepods-burstable-pod169680eb_0ab6_4f2b_92d3_ed15f994deed.slice: Consumed 16.464s CPU time, 125.9M memory peak, 128K read from disk, 12.9M written to disk. 
Jul 9 23:48:54.056013 kubelet[3288]: I0709 23:48:54.054625 3288 scope.go:117] "RemoveContainer" containerID="1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1" Jul 9 23:48:54.062996 containerd[1986]: time="2025-07-09T23:48:54.062949355Z" level=info msg="RemoveContainer for \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\"" Jul 9 23:48:54.074837 containerd[1986]: time="2025-07-09T23:48:54.074743439Z" level=info msg="RemoveContainer for \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" returns successfully" Jul 9 23:48:54.076475 kubelet[3288]: I0709 23:48:54.076411 3288 scope.go:117] "RemoveContainer" containerID="43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d" Jul 9 23:48:54.088066 containerd[1986]: time="2025-07-09T23:48:54.086753547Z" level=info msg="RemoveContainer for \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\"" Jul 9 23:48:54.108533 containerd[1986]: time="2025-07-09T23:48:54.108427005Z" level=info msg="RemoveContainer for \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" returns successfully" Jul 9 23:48:54.108970 kubelet[3288]: I0709 23:48:54.108801 3288 scope.go:117] "RemoveContainer" containerID="56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc" Jul 9 23:48:54.111382 containerd[1986]: time="2025-07-09T23:48:54.111318242Z" level=info msg="RemoveContainer for \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\"" Jul 9 23:48:54.120672 containerd[1986]: time="2025-07-09T23:48:54.120474702Z" level=info msg="RemoveContainer for \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" returns successfully" Jul 9 23:48:54.121083 kubelet[3288]: I0709 23:48:54.121050 3288 scope.go:117] "RemoveContainer" containerID="5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2" Jul 9 23:48:54.124299 containerd[1986]: time="2025-07-09T23:48:54.124252092Z" level=info msg="RemoveContainer for \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\"" Jul 9 23:48:54.131445 containerd[1986]: time="2025-07-09T23:48:54.131342741Z" level=info msg="RemoveContainer for \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" returns successfully" Jul 9 23:48:54.131763 kubelet[3288]: I0709 23:48:54.131667 3288 scope.go:117] "RemoveContainer" containerID="d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f" Jul 9 23:48:54.132232 containerd[1986]: time="2025-07-09T23:48:54.132172150Z" level=error msg="ContainerStatus for \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\": not found" Jul 9 23:48:54.132629 kubelet[3288]: E0709 23:48:54.132425 3288 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\": not found" containerID="d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f" Jul 9 23:48:54.132745 kubelet[3288]: I0709 23:48:54.132472 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f"} err="failed to get container status \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"d96976835c4d4c9488a5a4c6d9bfb6edd305a37f5d92ecbfd7c2cb25765be53f\": not found" Jul 9 23:48:54.132920 kubelet[3288]: I0709 23:48:54.132853 3288 scope.go:117] "RemoveContainer" containerID="1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1" Jul 9 23:48:54.133725 containerd[1986]: time="2025-07-09T23:48:54.133654357Z" level=error msg="ContainerStatus for \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\": not found" Jul 9 23:48:54.134082 kubelet[3288]: E0709 23:48:54.134021 3288 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\": not found" containerID="1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1" Jul 9 23:48:54.134174 kubelet[3288]: I0709 23:48:54.134079 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1"} err="failed to get container status \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fb6a61d3da175292a8e7e3b5e1350679cac600b7eb01df151926e03a69114a1\": not found" Jul 9 23:48:54.134174 kubelet[3288]: I0709 23:48:54.134116 3288 scope.go:117] "RemoveContainer" containerID="43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d" Jul 9 23:48:54.134771 containerd[1986]: time="2025-07-09T23:48:54.134720611Z" level=error msg="ContainerStatus for \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\": not found" Jul 9 23:48:54.135254 kubelet[3288]: E0709 23:48:54.135212 3288 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\": not found" containerID="43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d" Jul 9 23:48:54.135356 kubelet[3288]: I0709 23:48:54.135264 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d"} err="failed to get container status \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\": rpc error: code = NotFound desc = an error occurred when try to find container \"43767746dffcf50cac9663c1ee38709deb5882d357f75d0c67c098541d15336d\": not found" Jul 9 23:48:54.135356 kubelet[3288]: I0709 23:48:54.135298 3288 scope.go:117] "RemoveContainer" containerID="56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc" Jul 9 23:48:54.135916 containerd[1986]: time="2025-07-09T23:48:54.135824167Z" level=error msg="ContainerStatus for \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\": not found" Jul 9 23:48:54.136237 kubelet[3288]: E0709 23:48:54.136186 3288 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\": not found" containerID="56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc" Jul 9 23:48:54.136527 kubelet[3288]: I0709 23:48:54.136244 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc"} err="failed to get container status \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"56ee90cc634c527e6cf5e8dbec366473353b0355d1e5ec52af03e3294c47f6dc\": not found" Jul 9 23:48:54.136527 kubelet[3288]: I0709 23:48:54.136281 3288 scope.go:117] "RemoveContainer" containerID="5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2" Jul 9 23:48:54.136944 containerd[1986]: time="2025-07-09T23:48:54.136896695Z" level=error msg="ContainerStatus for \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\": not found" Jul 9 23:48:54.137395 kubelet[3288]: E0709 23:48:54.137343 3288 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\": not found" containerID="5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2" Jul 9 23:48:54.137486 kubelet[3288]: I0709 23:48:54.137393 3288 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2"} err="failed to get container status \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"5460f1c7113c382330883905163ac396a20b706d8a3802fe142ce043ba1042d2\": not found" Jul 9 23:48:54.611195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c-shm.mount: Deactivated successfully. Jul 9 23:48:54.611369 systemd[1]: var-lib-kubelet-pods-2bcd70c9\x2d43d5\x2d4e77\x2db127\x2d0ea4b49f865c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drppsg.mount: Deactivated successfully. Jul 9 23:48:54.612153 systemd[1]: var-lib-kubelet-pods-169680eb\x2d0ab6\x2d4f2b\x2d92d3\x2ded15f994deed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvpkn.mount: Deactivated successfully. Jul 9 23:48:54.612331 systemd[1]: var-lib-kubelet-pods-169680eb\x2d0ab6\x2d4f2b\x2d92d3\x2ded15f994deed-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 23:48:54.612466 systemd[1]: var-lib-kubelet-pods-169680eb\x2d0ab6\x2d4f2b\x2d92d3\x2ded15f994deed-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 9 23:48:55.370112 kubelet[3288]: I0709 23:48:55.370043 3288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" path="/var/lib/kubelet/pods/169680eb-0ab6-4f2b-92d3-ed15f994deed/volumes" Jul 9 23:48:55.371488 kubelet[3288]: I0709 23:48:55.371422 3288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bcd70c9-43d5-4e77-b127-0ea4b49f865c" path="/var/lib/kubelet/pods/2bcd70c9-43d5-4e77-b127-0ea4b49f865c/volumes" Jul 9 23:48:55.381536 sshd[4953]: Connection closed by 139.178.89.65 port 49712 Jul 9 23:48:55.382432 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:55.389964 systemd[1]: sshd@24-172.31.27.216:22-139.178.89.65:49712.service: Deactivated successfully. Jul 9 23:48:55.394924 systemd[1]: session-25.scope: Deactivated successfully. Jul 9 23:48:55.396646 systemd[1]: session-25.scope: Consumed 2.102s CPU time, 22.9M memory peak. Jul 9 23:48:55.398647 systemd-logind[1977]: Session 25 logged out. Waiting for processes to exit. Jul 9 23:48:55.402221 systemd-logind[1977]: Removed session 25. Jul 9 23:48:55.417989 systemd[1]: Started sshd@25-172.31.27.216:22-139.178.89.65:49714.service - OpenSSH per-connection server daemon (139.178.89.65:49714). Jul 9 23:48:55.620404 sshd[5107]: Accepted publickey for core from 139.178.89.65 port 49714 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:55.622889 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:55.631492 systemd-logind[1977]: New session 26 of user core. Jul 9 23:48:55.643826 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 9 23:48:56.444316 ntpd[1971]: Deleting interface #11 lxc_health, fe80::9c19:ddff:fe33:a46a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jul 9 23:48:56.444907 ntpd[1971]: 9 Jul 23:48:56 ntpd[1971]: Deleting interface #11 lxc_health, fe80::9c19:ddff:fe33:a46a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Jul 9 23:48:57.600143 kubelet[3288]: E0709 23:48:57.600082 3288 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 23:48:57.842473 sshd[5109]: Connection closed by 139.178.89.65 port 49714 Jul 9 23:48:57.843056 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:57.854423 systemd[1]: sshd@25-172.31.27.216:22-139.178.89.65:49714.service: Deactivated successfully. Jul 9 23:48:57.854985 systemd-logind[1977]: Session 26 logged out. Waiting for processes to exit. Jul 9 23:48:57.859486 systemd[1]: session-26.scope: Deactivated successfully. Jul 9 23:48:57.864162 systemd[1]: session-26.scope: Consumed 1.991s CPU time, 25.5M memory peak. 
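systemd records per-scope resource accounting when a scope is deactivated; above, session-25.scope consumed 2.102s of CPU with a 22.9M memory peak and session-26.scope 1.991s with 25.5M, and the cilium container scope earlier reported disk I/O as well. A scraping sketch that handles only the formats visible in this excerpt:

# Sketch: collect the "Consumed ... CPU time, ... memory peak" accounting
# systemd prints when a scope is deactivated. Only the format shown in this
# excerpt is handled (CPU in seconds, memory with a single K/M/G suffix).
import re

USAGE = re.compile(
    r"systemd\[1\]: (?P<scope>\S+\.scope): Consumed (?P<cpu>[\d.]+)s CPU time, "
    r"(?P<mem>[\d.]+[KMG]) memory peak"
)

def scope_usage(lines):
    usage = {}
    for line in lines:
        if m := USAGE.search(line):
            usage[m["scope"]] = (float(m["cpu"]), m["mem"])
    return usage

print(scope_usage([
    "Jul 9 23:48:55.396646 systemd[1]: session-25.scope: Consumed 2.102s CPU time, 22.9M memory peak."
]))
# {'session-25.scope': (2.102, '22.9M')}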
Jul 9 23:48:57.888907 kubelet[3288]: E0709 23:48:57.888845 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="mount-cgroup" Jul 9 23:48:57.888907 kubelet[3288]: E0709 23:48:57.888899 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="mount-bpf-fs" Jul 9 23:48:57.888907 kubelet[3288]: E0709 23:48:57.888917 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="cilium-agent" Jul 9 23:48:57.888907 kubelet[3288]: E0709 23:48:57.888933 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="apply-sysctl-overwrites" Jul 9 23:48:57.889234 kubelet[3288]: E0709 23:48:57.888947 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2bcd70c9-43d5-4e77-b127-0ea4b49f865c" containerName="cilium-operator" Jul 9 23:48:57.889234 kubelet[3288]: E0709 23:48:57.888961 3288 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="clean-cilium-state" Jul 9 23:48:57.889234 kubelet[3288]: I0709 23:48:57.889009 3288 memory_manager.go:354] "RemoveStaleState removing state" podUID="169680eb-0ab6-4f2b-92d3-ed15f994deed" containerName="cilium-agent" Jul 9 23:48:57.889234 kubelet[3288]: I0709 23:48:57.889024 3288 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcd70c9-43d5-4e77-b127-0ea4b49f865c" containerName="cilium-operator" Jul 9 23:48:57.896764 systemd-logind[1977]: Removed session 26. Jul 9 23:48:57.899790 systemd[1]: Started sshd@26-172.31.27.216:22-139.178.89.65:49730.service - OpenSSH per-connection server daemon (139.178.89.65:49730). Jul 9 23:48:57.929965 systemd[1]: Created slice kubepods-burstable-podfb8b0eeb_3067_4305_bbc7_4a0c9ea8fc49.slice - libcontainer container kubepods-burstable-podfb8b0eeb_3067_4305_bbc7_4a0c9ea8fc49.slice. 
Jul 9 23:48:57.957759 kubelet[3288]: I0709 23:48:57.957348 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-lib-modules\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.958035 kubelet[3288]: I0709 23:48:57.957626 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct9gw\" (UniqueName: \"kubernetes.io/projected/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-kube-api-access-ct9gw\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.959866 kubelet[3288]: I0709 23:48:57.958191 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-cilium-cgroup\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.959866 kubelet[3288]: I0709 23:48:57.959700 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-cilium-config-path\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.959866 kubelet[3288]: I0709 23:48:57.959765 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-host-proc-sys-net\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960089 kubelet[3288]: I0709 23:48:57.959986 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-cni-path\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960089 kubelet[3288]: I0709 23:48:57.960067 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-etc-cni-netd\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960203 kubelet[3288]: I0709 23:48:57.960103 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-host-proc-sys-kernel\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960203 kubelet[3288]: I0709 23:48:57.960137 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-hubble-tls\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960203 kubelet[3288]: I0709 23:48:57.960171 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-hostproc\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960347 kubelet[3288]: I0709 23:48:57.960203 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-clustermesh-secrets\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960347 kubelet[3288]: I0709 23:48:57.960242 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-cilium-run\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960347 kubelet[3288]: I0709 23:48:57.960276 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-xtables-lock\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.960347 kubelet[3288]: I0709 23:48:57.960326 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-bpf-maps\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:57.962672 kubelet[3288]: I0709 23:48:57.960361 3288 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49-cilium-ipsec-secrets\") pod \"cilium-ztw7k\" (UID: \"fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49\") " pod="kube-system/cilium-ztw7k" Jul 9 23:48:58.157701 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 49730 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:58.160409 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:58.170916 systemd-logind[1977]: New session 27 of user core. Jul 9 23:48:58.180978 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 9 23:48:58.250589 containerd[1986]: time="2025-07-09T23:48:58.250528734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztw7k,Uid:fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49,Namespace:kube-system,Attempt:0,}" Jul 9 23:48:58.304613 containerd[1986]: time="2025-07-09T23:48:58.304492031Z" level=info msg="connecting to shim 69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:48:58.307781 sshd[5125]: Connection closed by 139.178.89.65 port 49730 Jul 9 23:48:58.308452 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jul 9 23:48:58.319345 systemd[1]: sshd@26-172.31.27.216:22-139.178.89.65:49730.service: Deactivated successfully. Jul 9 23:48:58.332548 systemd[1]: session-27.scope: Deactivated successfully. Jul 9 23:48:58.339727 systemd-logind[1977]: Session 27 logged out. Waiting for processes to exit. 
Jul 9 23:48:58.363603 systemd[1]: Started sshd@27-172.31.27.216:22-139.178.89.65:49740.service - OpenSSH per-connection server daemon (139.178.89.65:49740). Jul 9 23:48:58.370339 systemd-logind[1977]: Removed session 27. Jul 9 23:48:58.381399 systemd[1]: Started cri-containerd-69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c.scope - libcontainer container 69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c. Jul 9 23:48:58.445699 containerd[1986]: time="2025-07-09T23:48:58.444758131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztw7k,Uid:fb8b0eeb-3067-4305-bbc7-4a0c9ea8fc49,Namespace:kube-system,Attempt:0,} returns sandbox id \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\"" Jul 9 23:48:58.451733 containerd[1986]: time="2025-07-09T23:48:58.451659539Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:48:58.471381 containerd[1986]: time="2025-07-09T23:48:58.471296919Z" level=info msg="Container cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:48:58.484061 containerd[1986]: time="2025-07-09T23:48:58.483985336Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\"" Jul 9 23:48:58.485156 containerd[1986]: time="2025-07-09T23:48:58.484890199Z" level=info msg="StartContainer for \"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\"" Jul 9 23:48:58.487423 containerd[1986]: time="2025-07-09T23:48:58.487370403Z" level=info msg="connecting to shim cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" protocol=ttrpc version=3 Jul 9 23:48:58.521862 systemd[1]: Started cri-containerd-cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50.scope - libcontainer container cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50. Jul 9 23:48:58.570875 sshd[5163]: Accepted publickey for core from 139.178.89.65 port 49740 ssh2: RSA SHA256:s7oSFd+Qq5vROIEdBeyPoThtjRwh4iL1nelP3j4DAnQ Jul 9 23:48:58.574625 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:48:58.590976 systemd-logind[1977]: New session 28 of user core. Jul 9 23:48:58.596068 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 9 23:48:58.604826 containerd[1986]: time="2025-07-09T23:48:58.604763873Z" level=info msg="StartContainer for \"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\" returns successfully" Jul 9 23:48:58.617946 systemd[1]: cri-containerd-cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50.scope: Deactivated successfully. 
Jul 9 23:48:58.623891 containerd[1986]: time="2025-07-09T23:48:58.623580516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\" id:\"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\" pid:5192 exited_at:{seconds:1752104938 nanos:622987089}"
Jul 9 23:48:58.623891 containerd[1986]: time="2025-07-09T23:48:58.623703514Z" level=info msg="received exit event container_id:\"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\" id:\"cefe6e9d3083960ab694a0f3b8ce135ef67a329c6f443ab79df838606f292d50\" pid:5192 exited_at:{seconds:1752104938 nanos:622987089}"
Jul 9 23:48:59.054545 containerd[1986]: time="2025-07-09T23:48:59.053846221Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:48:59.078861 containerd[1986]: time="2025-07-09T23:48:59.078763240Z" level=info msg="Container 2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:48:59.093757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028678466.mount: Deactivated successfully.
Jul 9 23:48:59.101968 containerd[1986]: time="2025-07-09T23:48:59.100259535Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\""
Jul 9 23:48:59.103367 containerd[1986]: time="2025-07-09T23:48:59.103292421Z" level=info msg="StartContainer for \"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\""
Jul 9 23:48:59.107186 containerd[1986]: time="2025-07-09T23:48:59.107111743Z" level=info msg="connecting to shim 2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" protocol=ttrpc version=3
Jul 9 23:48:59.155822 systemd[1]: Started cri-containerd-2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86.scope - libcontainer container 2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86.
Jul 9 23:48:59.233155 containerd[1986]: time="2025-07-09T23:48:59.233057942Z" level=info msg="StartContainer for \"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\" returns successfully"
Jul 9 23:48:59.247604 systemd[1]: cri-containerd-2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86.scope: Deactivated successfully.
Jul 9 23:48:59.251822 containerd[1986]: time="2025-07-09T23:48:59.251409698Z" level=info msg="received exit event container_id:\"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\" id:\"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\" pid:5245 exited_at:{seconds:1752104939 nanos:250882862}"
Jul 9 23:48:59.253641 containerd[1986]: time="2025-07-09T23:48:59.251741464Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\" id:\"2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86\" pid:5245 exited_at:{seconds:1752104939 nanos:250882862}"
Jul 9 23:48:59.297112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fd89ea71a25eb4b4e0cb13a94ef1b3450145abba9ef0af728433b3838096c86-rootfs.mount: Deactivated successfully.
Jul 9 23:49:00.065114 containerd[1986]: time="2025-07-09T23:49:00.064943458Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:49:00.092421 containerd[1986]: time="2025-07-09T23:49:00.092338810Z" level=info msg="Container 94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:00.111062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200531056.mount: Deactivated successfully.
Jul 9 23:49:00.123815 containerd[1986]: time="2025-07-09T23:49:00.123760799Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\""
Jul 9 23:49:00.125235 containerd[1986]: time="2025-07-09T23:49:00.124983059Z" level=info msg="StartContainer for \"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\""
Jul 9 23:49:00.128222 containerd[1986]: time="2025-07-09T23:49:00.128141152Z" level=info msg="connecting to shim 94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" protocol=ttrpc version=3
Jul 9 23:49:00.173798 systemd[1]: Started cri-containerd-94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8.scope - libcontainer container 94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8.
Jul 9 23:49:00.265322 systemd[1]: cri-containerd-94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8.scope: Deactivated successfully.
Jul 9 23:49:00.269246 containerd[1986]: time="2025-07-09T23:49:00.267824775Z" level=info msg="StartContainer for \"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\" returns successfully"
Jul 9 23:49:00.269792 containerd[1986]: time="2025-07-09T23:49:00.269668805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\" id:\"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\" pid:5288 exited_at:{seconds:1752104940 nanos:269186382}"
Jul 9 23:49:00.271186 containerd[1986]: time="2025-07-09T23:49:00.271113674Z" level=info msg="received exit event container_id:\"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\" id:\"94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8\" pid:5288 exited_at:{seconds:1752104940 nanos:269186382}"
Jul 9 23:49:00.319741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94036d0e549a2680f816651f912097e01cb2c4010647c1c4685f18c25c6eabc8-rootfs.mount: Deactivated successfully.
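The containers created so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, with clean-cilium-state following below) are short-lived setup containers; in a typical Cilium DaemonSet they run as init containers, which is why each cri-containerd scope deactivates right after StartContainer returns. A hedged sketch for confirming they all exited cleanly, again assuming the Python `kubernetes` client rather than anything present on this node:

```python
# Sketch: check that the cilium-ztw7k init containers terminated with exit code 0.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-ztw7k", "kube-system")
for cs in pod.status.init_container_statuses or []:
    term = cs.state.terminated
    print(f"{cs.name}: exit_code={term.exit_code if term else 'still running'}")
```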
Jul 9 23:49:00.462903 kubelet[3288]: I0709 23:49:00.462768 3288 setters.go:600] "Node became not ready" node="ip-172-31-27-216" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T23:49:00Z","lastTransitionTime":"2025-07-09T23:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 23:49:01.069664 containerd[1986]: time="2025-07-09T23:49:01.068728266Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:49:01.093793 containerd[1986]: time="2025-07-09T23:49:01.093718976Z" level=info msg="Container 0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:01.111334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422709764.mount: Deactivated successfully.
Jul 9 23:49:01.118113 containerd[1986]: time="2025-07-09T23:49:01.117953549Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\""
Jul 9 23:49:01.119684 containerd[1986]: time="2025-07-09T23:49:01.119130352Z" level=info msg="StartContainer for \"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\""
Jul 9 23:49:01.121638 containerd[1986]: time="2025-07-09T23:49:01.121568720Z" level=info msg="connecting to shim 0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" protocol=ttrpc version=3
Jul 9 23:49:01.170862 systemd[1]: Started cri-containerd-0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c.scope - libcontainer container 0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c.
Jul 9 23:49:01.227602 systemd[1]: cri-containerd-0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c.scope: Deactivated successfully.
Jul 9 23:49:01.233127 containerd[1986]: time="2025-07-09T23:49:01.232941478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\" id:\"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\" pid:5332 exited_at:{seconds:1752104941 nanos:232420459}"
Jul 9 23:49:01.236413 containerd[1986]: time="2025-07-09T23:49:01.236208344Z" level=info msg="received exit event container_id:\"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\" id:\"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\" pid:5332 exited_at:{seconds:1752104941 nanos:232420459}"
Jul 9 23:49:01.251076 containerd[1986]: time="2025-07-09T23:49:01.251004119Z" level=info msg="StartContainer for \"0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c\" returns successfully"
Jul 9 23:49:01.279708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d4234acf92e3ff3238a53afaec64b52adb06e18a2d39c771bb227d49f23a74c-rootfs.mount: Deactivated successfully.
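The "Node became not ready" entry above is kubelet flipping the node's Ready condition to False because no CNI plugin is initialized yet; it clears once the cilium-agent container below brings pod networking up. A small sketch for reading that condition back from the API, under the same Python `kubernetes` client assumption as the earlier examples:

```python
# Sketch: read the Ready condition kubelet reports for this node.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node("ip-172-31-27-216")
for cond in node.status.conditions:
    if cond.type == "Ready":
        print(cond.status, cond.reason, cond.message)
```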
Jul 9 23:49:02.082765 containerd[1986]: time="2025-07-09T23:49:02.082425538Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:49:02.114243 containerd[1986]: time="2025-07-09T23:49:02.113733585Z" level=info msg="Container 805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:02.132470 containerd[1986]: time="2025-07-09T23:49:02.132313898Z" level=info msg="CreateContainer within sandbox \"69f9229e881bbd62bb80190ffa824b73091150e19d308e3e0f3906fcfc8b748c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\""
Jul 9 23:49:02.134515 containerd[1986]: time="2025-07-09T23:49:02.133808422Z" level=info msg="StartContainer for \"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\""
Jul 9 23:49:02.137991 containerd[1986]: time="2025-07-09T23:49:02.137636427Z" level=info msg="connecting to shim 805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2" address="unix:///run/containerd/s/c152bf4d4e067fdb06ce45b6524107f757da16b356d98f1c6a409946d2b97861" protocol=ttrpc version=3
Jul 9 23:49:02.185814 systemd[1]: Started cri-containerd-805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2.scope - libcontainer container 805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2.
Jul 9 23:49:02.266898 containerd[1986]: time="2025-07-09T23:49:02.266806087Z" level=info msg="StartContainer for \"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" returns successfully"
Jul 9 23:49:02.423036 containerd[1986]: time="2025-07-09T23:49:02.422965544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"a0afa844e9f0672286292f771b1da8181ebfbd864504ef037099d90009b07fe0\" pid:5401 exited_at:{seconds:1752104942 nanos:422308813}"
Jul 9 23:49:03.152427 kubelet[3288]: I0709 23:49:03.152194 3288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ztw7k" podStartSLOduration=6.152168195 podStartE2EDuration="6.152168195s" podCreationTimestamp="2025-07-09 23:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:03.141900335 +0000 UTC m=+116.109215265" watchObservedRunningTime="2025-07-09 23:49:03.152168195 +0000 UTC m=+116.119483113"
Jul 9 23:49:03.164564 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 23:49:05.328116 containerd[1986]: time="2025-07-09T23:49:05.328035028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"261ce2a27e8fd1a4ee015984647e8762c8b43004c4812bb9d2e2d4c027a71a47\" pid:5544 exit_status:1 exited_at:{seconds:1752104945 nanos:327579903}"
Jul 9 23:49:07.362363 containerd[1986]: time="2025-07-09T23:49:07.362146513Z" level=info msg="StopPodSandbox for \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\""
Jul 9 23:49:07.369132 containerd[1986]: time="2025-07-09T23:49:07.365606554Z" level=info msg="TearDown network for sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" successfully"
Jul 9 23:49:07.369132 containerd[1986]: time="2025-07-09T23:49:07.365798650Z" level=info msg="StopPodSandbox for \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" returns successfully"
Jul 9 23:49:07.369132 containerd[1986]: time="2025-07-09T23:49:07.368693090Z" level=info msg="RemovePodSandbox for \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\""
Jul 9 23:49:07.369132 containerd[1986]: time="2025-07-09T23:49:07.368751249Z" level=info msg="Forcibly stopping sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\""
Jul 9 23:49:07.369132 containerd[1986]: time="2025-07-09T23:49:07.368914319Z" level=info msg="TearDown network for sandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" successfully"
Jul 9 23:49:07.379127 containerd[1986]: time="2025-07-09T23:49:07.379000998Z" level=info msg="Ensure that sandbox 3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c in task-service has been cleanup successfully"
Jul 9 23:49:07.390316 containerd[1986]: time="2025-07-09T23:49:07.390250866Z" level=info msg="RemovePodSandbox \"3dc85277c2d410d197c08c1b4e710113a2cb165574f011562ba3a247fc0a624c\" returns successfully"
Jul 9 23:49:07.391585 containerd[1986]: time="2025-07-09T23:49:07.391300269Z" level=info msg="StopPodSandbox for \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\""
Jul 9 23:49:07.391585 containerd[1986]: time="2025-07-09T23:49:07.391487651Z" level=info msg="TearDown network for sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" successfully"
Jul 9 23:49:07.391585 containerd[1986]: time="2025-07-09T23:49:07.391563357Z" level=info msg="StopPodSandbox for \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" returns successfully"
Jul 9 23:49:07.393077 containerd[1986]: time="2025-07-09T23:49:07.392774140Z" level=info msg="RemovePodSandbox for \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\""
Jul 9 23:49:07.393077 containerd[1986]: time="2025-07-09T23:49:07.392846140Z" level=info msg="Forcibly stopping sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\""
Jul 9 23:49:07.393077 containerd[1986]: time="2025-07-09T23:49:07.393005540Z" level=info msg="TearDown network for sandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" successfully"
Jul 9 23:49:07.396057 containerd[1986]: time="2025-07-09T23:49:07.395979800Z" level=info msg="Ensure that sandbox c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704 in task-service has been cleanup successfully"
Jul 9 23:49:07.405073 containerd[1986]: time="2025-07-09T23:49:07.404828829Z" level=info msg="RemovePodSandbox \"c07b2ff931b63ffad0bb8465ed8ed271aae1f024cfbc699a6661bff63e21e704\" returns successfully"
Jul 9 23:49:07.711455 containerd[1986]: time="2025-07-09T23:49:07.711276298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"0c1271ccccdd83bd4753a1126c7c87749abe28866b38f9d28c4d49400a9bafe0\" pid:5883 exit_status:1 exited_at:{seconds:1752104947 nanos:710788982}"
Jul 9 23:49:07.719710 kubelet[3288]: E0709 23:49:07.719644 3288 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39404->127.0.0.1:39813: write tcp 127.0.0.1:39404->127.0.0.1:39813: write: broken pipe
Jul 9 23:49:07.740091 (udev-worker)[5912]: Network interface NamePolicy= disabled on kernel command line.
Jul 9 23:49:07.750177 (udev-worker)[5913]: Network interface NamePolicy= disabled on kernel command line.
Jul 9 23:49:07.758849 systemd-networkd[1888]: lxc_health: Link UP
Jul 9 23:49:07.778100 systemd-networkd[1888]: lxc_health: Gained carrier
Jul 9 23:49:09.690640 systemd-networkd[1888]: lxc_health: Gained IPv6LL
Jul 9 23:49:10.098770 containerd[1986]: time="2025-07-09T23:49:10.098177597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"434fd0345a6e685e6a135592ac8e8c0667254cd3535a1bf13946e7b582dd34fa\" pid:5947 exited_at:{seconds:1752104950 nanos:95084824}"
Jul 9 23:49:12.372068 containerd[1986]: time="2025-07-09T23:49:12.371766925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"f46ac55b4e98d166bc157a954ede663dbf9fa38bf1f36af7140d86fd832c31df\" pid:5976 exited_at:{seconds:1752104952 nanos:368777288}"
Jul 9 23:49:12.444327 ntpd[1971]: Listen normally on 14 lxc_health [fe80::2cef:cfff:fe94:5087%14]:123
Jul 9 23:49:12.446015 ntpd[1971]: 9 Jul 23:49:12 ntpd[1971]: Listen normally on 14 lxc_health [fe80::2cef:cfff:fe94:5087%14]:123
Jul 9 23:49:14.718237 containerd[1986]: time="2025-07-09T23:49:14.718153847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"805fb86ca2ddf5fbe7beeb8fd0f9d6e3d9f70ab02edd185bbc68c936e53d6ee2\" id:\"7b9ec4ec0ba760acb45b932ff4c2de90f78c190962c951f1723d5c091b701925\" pid:6003 exited_at:{seconds:1752104954 nanos:717170375}"
Jul 9 23:49:14.758101 sshd[5211]: Connection closed by 139.178.89.65 port 49740
Jul 9 23:49:14.759134 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:14.768667 systemd-logind[1977]: Session 28 logged out. Waiting for processes to exit.
Jul 9 23:49:14.769268 systemd[1]: sshd@27-172.31.27.216:22-139.178.89.65:49740.service: Deactivated successfully.
Jul 9 23:49:14.777347 systemd[1]: session-28.scope: Deactivated successfully.
Jul 9 23:49:14.788105 systemd-logind[1977]: Removed session 28.