Jul 12 00:06:47.212606 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 12 00:06:47.212652 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:06:47.212677 kernel: KASLR disabled due to lack of seed
Jul 12 00:06:47.212694 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:06:47.212709 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Jul 12 00:06:47.212725 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:06:47.212743 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 12 00:06:47.212758 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 12 00:06:47.212774 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 12 00:06:47.212790 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 12 00:06:47.212811 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 12 00:06:47.212834 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 12 00:06:47.212850 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 12 00:06:47.212866 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 12 00:06:47.212885 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 12 00:06:47.212908 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 12 00:06:47.212926 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 12 00:06:47.212943 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 12 00:06:47.212960 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 12 00:06:47.212977 kernel: printk: bootconsole [uart0] enabled
Jul 12 00:06:47.212993 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:06:47.213010 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:06:47.213027 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jul 12 00:06:47.213043 kernel: Zone ranges:
Jul 12 00:06:47.213060 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:06:47.213077 kernel: DMA32 empty
Jul 12 00:06:47.213098 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 12 00:06:47.213115 kernel: Movable zone start for each node
Jul 12 00:06:47.213132 kernel: Early memory node ranges
Jul 12 00:06:47.213148 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 12 00:06:47.213164 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 12 00:06:47.213182 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 12 00:06:47.213198 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 12 00:06:47.213215 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 12 00:06:47.213231 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 12 00:06:47.213247 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 12 00:06:47.213294 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 12 00:06:47.213313 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 12 00:06:47.213337 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 12 00:06:47.213355 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:06:47.213379 kernel: psci: PSCIv1.0 detected in firmware.
Jul 12 00:06:47.213398 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:06:47.213415 kernel: psci: Trusted OS migration not required
Jul 12 00:06:47.213437 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:06:47.213455 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jul 12 00:06:47.213473 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:06:47.213491 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:06:47.213509 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:06:47.213526 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:06:47.213544 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:06:47.213561 kernel: CPU features: detected: Spectre-v2
Jul 12 00:06:47.213579 kernel: CPU features: detected: Spectre-v3a
Jul 12 00:06:47.213596 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:06:47.213614 kernel: CPU features: detected: ARM erratum 1742098
Jul 12 00:06:47.213636 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 12 00:06:47.213654 kernel: alternatives: applying boot alternatives
Jul 12 00:06:47.213674 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:06:47.213693 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:06:47.213710 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:06:47.213728 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:06:47.213745 kernel: Fallback order for Node 0: 0
Jul 12 00:06:47.213762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 12 00:06:47.213780 kernel: Policy zone: Normal
Jul 12 00:06:47.213797 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:06:47.213814 kernel: software IO TLB: area num 2.
Jul 12 00:06:47.213836 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 12 00:06:47.213855 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Jul 12 00:06:47.213873 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:06:47.213890 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:06:47.213909 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:06:47.213927 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:06:47.213945 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:06:47.213963 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:06:47.213981 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:06:47.213999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:06:47.214016 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:06:47.214038 kernel: GICv3: 96 SPIs implemented
Jul 12 00:06:47.214055 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:06:47.214073 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:06:47.214090 kernel: GICv3: GICv3 features: 16 PPIs
Jul 12 00:06:47.214108 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 12 00:06:47.214125 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 12 00:06:47.214144 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:06:47.214162 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:06:47.214179 kernel: GICv3: using LPI property table @0x00000004000d0000
Jul 12 00:06:47.214197 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 12 00:06:47.214214 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jul 12 00:06:47.214231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:06:47.214272 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 12 00:06:47.214296 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 12 00:06:47.214314 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 12 00:06:47.214331 kernel: Console: colour dummy device 80x25
Jul 12 00:06:47.214349 kernel: printk: console [tty1] enabled
Jul 12 00:06:47.214367 kernel: ACPI: Core revision 20230628
Jul 12 00:06:47.214385 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 12 00:06:47.214403 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:06:47.214421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:06:47.214444 kernel: landlock: Up and running.
Jul 12 00:06:47.214462 kernel: SELinux: Initializing.
Jul 12 00:06:47.214480 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:06:47.214498 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:06:47.214515 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:06:47.214533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:06:47.214551 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:06:47.214568 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:06:47.214586 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 12 00:06:47.214608 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 12 00:06:47.214626 kernel: Remapping and enabling EFI services.
Jul 12 00:06:47.214643 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:06:47.214660 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:06:47.214678 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 12 00:06:47.214696 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jul 12 00:06:47.214713 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 12 00:06:47.214731 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:06:47.214748 kernel: SMP: Total of 2 processors activated.
Jul 12 00:06:47.214765 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:06:47.214788 kernel: CPU features: detected: 32-bit EL1 Support
Jul 12 00:06:47.214806 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:06:47.214834 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:06:47.214877 kernel: alternatives: applying system-wide alternatives
Jul 12 00:06:47.214897 kernel: devtmpfs: initialized
Jul 12 00:06:47.214915 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:06:47.214934 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:06:47.214952 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:06:47.214971 kernel: SMBIOS 3.0.0 present.
Jul 12 00:06:47.214995 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 12 00:06:47.215014 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:06:47.215032 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:06:47.215051 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:06:47.215069 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:06:47.215088 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:06:47.215106 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jul 12 00:06:47.215128 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:06:47.215147 kernel: cpuidle: using governor menu
Jul 12 00:06:47.215165 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:06:47.215184 kernel: ASID allocator initialised with 65536 entries
Jul 12 00:06:47.215202 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:06:47.215220 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:06:47.215238 kernel: Modules: 17488 pages in range for non-PLT usage
Jul 12 00:06:47.215273 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:06:47.215295 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:06:47.215319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:06:47.218288 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:06:47.218320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:06:47.218339 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:06:47.218358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:06:47.218376 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:06:47.218395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:06:47.218413 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:06:47.218431 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:06:47.218459 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:06:47.218478 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:06:47.218496 kernel: ACPI: Interpreter enabled
Jul 12 00:06:47.218514 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:06:47.218533 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:06:47.218551 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 12 00:06:47.218856 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:06:47.219075 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:06:47.219314 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:06:47.219530 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 12 00:06:47.219741 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 12 00:06:47.219767 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 12 00:06:47.219787 kernel: acpiphp: Slot [1] registered
Jul 12 00:06:47.219805 kernel: acpiphp: Slot [2] registered
Jul 12 00:06:47.219826 kernel: acpiphp: Slot [3] registered
Jul 12 00:06:47.219845 kernel: acpiphp: Slot [4] registered
Jul 12 00:06:47.219871 kernel: acpiphp: Slot [5] registered
Jul 12 00:06:47.219890 kernel: acpiphp: Slot [6] registered
Jul 12 00:06:47.219909 kernel: acpiphp: Slot [7] registered
Jul 12 00:06:47.219927 kernel: acpiphp: Slot [8] registered
Jul 12 00:06:47.219945 kernel: acpiphp: Slot [9] registered
Jul 12 00:06:47.219963 kernel: acpiphp: Slot [10] registered
Jul 12 00:06:47.219982 kernel: acpiphp: Slot [11] registered
Jul 12 00:06:47.220000 kernel: acpiphp: Slot [12] registered
Jul 12 00:06:47.220018 kernel: acpiphp: Slot [13] registered
Jul 12 00:06:47.220038 kernel: acpiphp: Slot [14] registered
Jul 12 00:06:47.220061 kernel: acpiphp: Slot [15] registered
Jul 12 00:06:47.220079 kernel: acpiphp: Slot [16] registered
Jul 12 00:06:47.220098 kernel: acpiphp: Slot [17] registered
Jul 12 00:06:47.220116 kernel: acpiphp: Slot [18] registered
Jul 12 00:06:47.220134 kernel: acpiphp: Slot [19] registered
Jul 12 00:06:47.220152 kernel: acpiphp: Slot [20] registered
Jul 12 00:06:47.220170 kernel: acpiphp: Slot [21] registered
Jul 12 00:06:47.220189 kernel: acpiphp: Slot [22] registered
Jul 12 00:06:47.220207 kernel: acpiphp: Slot [23] registered
Jul 12 00:06:47.220230 kernel: acpiphp: Slot [24] registered
Jul 12 00:06:47.220249 kernel: acpiphp: Slot [25] registered
Jul 12 00:06:47.220325 kernel: acpiphp: Slot [26] registered
Jul 12 00:06:47.220344 kernel: acpiphp: Slot [27] registered
Jul 12 00:06:47.220363 kernel: acpiphp: Slot [28] registered
Jul 12 00:06:47.220381 kernel: acpiphp: Slot [29] registered
Jul 12 00:06:47.220400 kernel: acpiphp: Slot [30] registered
Jul 12 00:06:47.220418 kernel: acpiphp: Slot [31] registered
Jul 12 00:06:47.220437 kernel: PCI host bridge to bus 0000:00
Jul 12 00:06:47.222462 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 12 00:06:47.222819 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:06:47.223120 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:06:47.223357 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 12 00:06:47.223598 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 12 00:06:47.223824 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 12 00:06:47.224033 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 12 00:06:47.225852 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 12 00:06:47.226134 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 12 00:06:47.227502 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:06:47.227753 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 12 00:06:47.227958 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 12 00:06:47.228161 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 12 00:06:47.228644 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 12 00:06:47.228858 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 12 00:06:47.229069 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 12 00:06:47.229300 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 12 00:06:47.229515 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 12 00:06:47.229723 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 12 00:06:47.229938 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 12 00:06:47.230132 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 12 00:06:47.230354 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:06:47.230545 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 12 00:06:47.230571 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:06:47.230591 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:06:47.230610 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:06:47.230629 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:06:47.230647 kernel: iommu: Default domain type: Translated
Jul 12 00:06:47.230666 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:06:47.230691 kernel: efivars: Registered efivars operations
Jul 12 00:06:47.230709 kernel: vgaarb: loaded
Jul 12 00:06:47.230728 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:06:47.230746 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:06:47.230765 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:06:47.230807 kernel: pnp: PnP ACPI init
Jul 12 00:06:47.231130 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 12 00:06:47.231162 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:06:47.231188 kernel: NET: Registered PF_INET protocol family
Jul 12 00:06:47.231208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:06:47.231227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:06:47.231245 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:06:47.231293 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:06:47.231313 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:06:47.231332 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:06:47.231351 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:06:47.231369 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:06:47.231394 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:06:47.231412 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:06:47.231431 kernel: kvm [1]: HYP mode not available
Jul 12 00:06:47.231449 kernel: Initialise system trusted keyrings
Jul 12 00:06:47.231468 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:06:47.231486 kernel: Key type asymmetric registered
Jul 12 00:06:47.231504 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:06:47.231522 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:06:47.231541 kernel: io scheduler mq-deadline registered
Jul 12 00:06:47.231564 kernel: io scheduler kyber registered
Jul 12 00:06:47.231582 kernel: io scheduler bfq registered
Jul 12 00:06:47.231814 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 12 00:06:47.231841 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:06:47.231860 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:06:47.231879 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 12 00:06:47.231898 kernel: ACPI: button: Sleep Button [SLPB]
Jul 12 00:06:47.231916 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:06:47.231940 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 12 00:06:47.232148 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 12 00:06:47.232174 kernel: printk: console [ttyS0] disabled
Jul 12 00:06:47.232194 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 12 00:06:47.232212 kernel: printk: console [ttyS0] enabled
Jul 12 00:06:47.232231 kernel: printk: bootconsole [uart0] disabled
Jul 12 00:06:47.232249 kernel: thunder_xcv, ver 1.0
Jul 12 00:06:47.232309 kernel: thunder_bgx, ver 1.0
Jul 12 00:06:47.232330 kernel: nicpf, ver 1.0
Jul 12 00:06:47.232355 kernel: nicvf, ver 1.0
Jul 12 00:06:47.232572 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:06:47.232769 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:06:46 UTC (1752278806)
Jul 12 00:06:47.232795 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:06:47.232814 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 12 00:06:47.232833 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:06:47.232851 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:06:47.232869 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:06:47.232893 kernel: Segment Routing with IPv6
Jul 12 00:06:47.232911 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:06:47.232930 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:06:47.232948 kernel: Key type dns_resolver registered
Jul 12 00:06:47.232966 kernel: registered taskstats version 1
Jul 12 00:06:47.232985 kernel: Loading compiled-in X.509 certificates
Jul 12 00:06:47.233004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:06:47.233022 kernel: Key type .fscrypt registered
Jul 12 00:06:47.233040 kernel: Key type fscrypt-provisioning registered
Jul 12 00:06:47.233058 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:06:47.233081 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:06:47.233099 kernel: ima: No architecture policies found
Jul 12 00:06:47.233118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:06:47.233136 kernel: clk: Disabling unused clocks
Jul 12 00:06:47.233154 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:06:47.233173 kernel: Run /init as init process
Jul 12 00:06:47.233191 kernel: with arguments:
Jul 12 00:06:47.233208 kernel: /init
Jul 12 00:06:47.233226 kernel: with environment:
Jul 12 00:06:47.233248 kernel: HOME=/
Jul 12 00:06:47.235366 kernel: TERM=linux
Jul 12 00:06:47.235388 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:06:47.235413 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:06:47.235437 systemd[1]: Detected virtualization amazon.
Jul 12 00:06:47.235459 systemd[1]: Detected architecture arm64.
Jul 12 00:06:47.235478 systemd[1]: Running in initrd.
Jul 12 00:06:47.235508 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:06:47.235528 systemd[1]: Hostname set to .
Jul 12 00:06:47.235549 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:06:47.235570 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:06:47.235590 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:06:47.235610 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:06:47.235632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:06:47.235652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:06:47.235678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:06:47.235699 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:06:47.235722 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:06:47.235743 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:06:47.235764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:06:47.235784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:06:47.235804 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:06:47.235828 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:06:47.235849 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:06:47.235869 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:06:47.235889 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:06:47.235909 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:06:47.235930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:06:47.235950 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:06:47.235970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:06:47.235990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:06:47.236015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:06:47.236035 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:06:47.236056 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:06:47.236076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:06:47.236096 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:06:47.236116 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:06:47.236136 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:06:47.236156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:06:47.236181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:06:47.236202 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:06:47.236222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:06:47.237400 systemd-journald[251]: Collecting audit messages is disabled.
Jul 12 00:06:47.237478 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:06:47.237502 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:06:47.237523 systemd-journald[251]: Journal started
Jul 12 00:06:47.237566 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20122a6a1465f9e1458dd030476ecf) is 8.0M, max 75.3M, 67.3M free.
Jul 12 00:06:47.222937 systemd-modules-load[252]: Inserted module 'overlay'
Jul 12 00:06:47.243721 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:06:47.261705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:06:47.259207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:47.273372 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jul 12 00:06:47.275462 kernel: Bridge firewalling registered
Jul 12 00:06:47.279739 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:06:47.287785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:06:47.294119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:06:47.294556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:06:47.321628 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:06:47.333885 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:06:47.354682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:47.361403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:06:47.374161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:06:47.380695 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:06:47.386013 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:06:47.400816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:47.442781 dracut-cmdline[286]: dracut-dracut-053
Jul 12 00:06:47.451625 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:06:47.476809 systemd-resolved[287]: Positive Trust Anchors:
Jul 12 00:06:47.478814 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:06:47.478895 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:06:47.621300 kernel: SCSI subsystem initialized
Jul 12 00:06:47.629299 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:06:47.642302 kernel: iscsi: registered transport (tcp)
Jul 12 00:06:47.664372 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:06:47.664443 kernel: QLogic iSCSI HBA Driver
Jul 12 00:06:47.729299 kernel: random: crng init done
Jul 12 00:06:47.728685 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 12 00:06:47.733034 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:06:47.738540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:06:47.760344 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:06:47.770557 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:06:47.809078 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:06:47.809168 kernel: device-mapper: uevent: version 1.0.3
Jul 12 00:06:47.809198 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 12 00:06:47.877305 kernel: raid6: neonx8 gen() 6667 MB/s
Jul 12 00:06:47.894291 kernel: raid6: neonx4 gen() 6473 MB/s
Jul 12 00:06:47.911290 kernel: raid6: neonx2 gen() 5408 MB/s
Jul 12 00:06:47.928291 kernel: raid6: neonx1 gen() 3933 MB/s
Jul 12 00:06:47.945292 kernel: raid6: int64x8 gen() 3798 MB/s
Jul 12 00:06:47.962292 kernel: raid6: int64x4 gen() 3710 MB/s
Jul 12 00:06:47.979290 kernel: raid6: int64x2 gen() 3590 MB/s
Jul 12 00:06:47.997271 kernel: raid6: int64x1 gen() 2767 MB/s
Jul 12 00:06:47.997311 kernel: raid6: using algorithm neonx8 gen() 6667 MB/s
Jul 12 00:06:48.015243 kernel: raid6: .... xor() 4918 MB/s, rmw enabled
Jul 12 00:06:48.015304 kernel: raid6: using neon recovery algorithm
Jul 12 00:06:48.023294 kernel: xor: measuring software checksum speed
Jul 12 00:06:48.023347 kernel: 8regs : 10259 MB/sec
Jul 12 00:06:48.026939 kernel: 32regs : 11001 MB/sec
Jul 12 00:06:48.026972 kernel: arm64_neon : 9523 MB/sec
Jul 12 00:06:48.026997 kernel: xor: using function: 32regs (11001 MB/sec)
Jul 12 00:06:48.113315 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 12 00:06:48.132634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:06:48.142610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:06:48.182177 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jul 12 00:06:48.191931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:06:48.215577 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 12 00:06:48.244365 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Jul 12 00:06:48.303192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:06:48.315581 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:06:48.446317 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:06:48.460883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:06:48.509868 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:06:48.515182 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:06:48.521490 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:06:48.524146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:06:48.536572 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:06:48.587714 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:06:48.669554 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:06:48.669637 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 12 00:06:48.670765 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:06:48.674066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:06:48.695664 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 12 00:06:48.695992 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 12 00:06:48.696224 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:20:80:dc:10:4f
Jul 12 00:06:48.679456 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:06:48.686449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:06:48.686762 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:48.691062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:06:48.712841 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:06:48.718030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:06:48.742184 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 12 00:06:48.742291 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 12 00:06:48.752297 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 12 00:06:48.756432 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:48.768816 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:06:48.768897 kernel: GPT:9289727 != 16777215
Jul 12 00:06:48.768923 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:06:48.770688 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:06:48.777696 kernel: GPT:9289727 != 16777215
Jul 12 00:06:48.777734 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:06:48.777759 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:06:48.810686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:06:48.858091 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Jul 12 00:06:48.906309 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (540)
Jul 12 00:06:48.949913 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 12 00:06:49.001920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jul 12 00:06:49.020187 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jul 12 00:06:49.034684 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jul 12 00:06:49.040997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jul 12 00:06:49.053681 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:06:49.068821 disk-uuid[663]: Primary Header is updated.
Jul 12 00:06:49.068821 disk-uuid[663]: Secondary Entries is updated.
Jul 12 00:06:49.068821 disk-uuid[663]: Secondary Header is updated.
Jul 12 00:06:49.082282 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:06:49.093351 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:06:50.098000 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 12 00:06:50.098074 disk-uuid[664]: The operation has completed successfully.
Jul 12 00:06:50.283070 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:06:50.283329 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:06:50.330621 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:06:50.353166 sh[922]: Success
Jul 12 00:06:50.379572 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:06:50.485276 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:06:50.505518 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:06:50.516058 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:06:50.543898 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:06:50.543963 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:50.545974 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:06:50.547404 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:06:50.548581 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:06:50.660292 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jul 12 00:06:50.683045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:06:50.687602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:06:50.702509 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:06:50.711819 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:06:50.736215 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:50.736315 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:50.737744 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:06:50.745325 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:06:50.766040 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:06:50.768778 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:50.781326 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:06:50.793587 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:06:50.903076 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:06:50.922783 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:06:50.980617 systemd-networkd[1114]: lo: Link UP
Jul 12 00:06:50.980640 systemd-networkd[1114]: lo: Gained carrier
Jul 12 00:06:50.985889 systemd-networkd[1114]: Enumeration completed
Jul 12 00:06:50.987858 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:06:50.990047 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:50.990056 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:06:51.000354 systemd-networkd[1114]: eth0: Link UP
Jul 12 00:06:51.000362 systemd-networkd[1114]: eth0: Gained carrier
Jul 12 00:06:51.000382 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:06:51.000758 systemd[1]: Reached target network.target - Network.
Jul 12 00:06:51.025355 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.31.176/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 12 00:06:51.239808 ignition[1031]: Ignition 2.19.0
Jul 12 00:06:51.239838 ignition[1031]: Stage: fetch-offline
Jul 12 00:06:51.244445 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:51.244490 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:51.247096 ignition[1031]: Ignition finished successfully
Jul 12 00:06:51.253697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:06:51.269620 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 12 00:06:51.299764 ignition[1124]: Ignition 2.19.0
Jul 12 00:06:51.301614 ignition[1124]: Stage: fetch
Jul 12 00:06:51.303709 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:51.303758 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:51.305817 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:51.335051 ignition[1124]: PUT result: OK
Jul 12 00:06:51.339719 ignition[1124]: parsed url from cmdline: ""
Jul 12 00:06:51.339894 ignition[1124]: no config URL provided
Jul 12 00:06:51.339917 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:06:51.340146 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:06:51.340198 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:51.349980 ignition[1124]: PUT result: OK
Jul 12 00:06:51.350415 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 12 00:06:51.355709 ignition[1124]: GET result: OK
Jul 12 00:06:51.355882 ignition[1124]: parsing config with SHA512: 76d57918a3aeb2a96ac711c8a2efe83f54857b4c5d4512071694668ca71d2fda214c9d8f0b3b382043199ef0c0796441eca8a8f552b0427f2d736990071f370b
Jul 12 00:06:51.366377 unknown[1124]: fetched base config from "system"
Jul 12 00:06:51.366928 unknown[1124]: fetched base config from "system"
Jul 12 00:06:51.366944 unknown[1124]: fetched user config from "aws"
Jul 12 00:06:51.374925 ignition[1124]: fetch: fetch complete
Jul 12 00:06:51.374957 ignition[1124]: fetch: fetch passed
Jul 12 00:06:51.375092 ignition[1124]: Ignition finished successfully
Jul 12 00:06:51.379189 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:06:51.395574 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:06:51.427477 ignition[1131]: Ignition 2.19.0
Jul 12 00:06:51.428030 ignition[1131]: Stage: kargs
Jul 12 00:06:51.429597 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:51.429626 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:51.429806 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:51.438526 ignition[1131]: PUT result: OK
Jul 12 00:06:51.443758 ignition[1131]: kargs: kargs passed
Jul 12 00:06:51.445572 ignition[1131]: Ignition finished successfully
Jul 12 00:06:51.451515 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:06:51.464645 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:06:51.495403 ignition[1137]: Ignition 2.19.0
Jul 12 00:06:51.496014 ignition[1137]: Stage: disks
Jul 12 00:06:51.496796 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:51.496824 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:51.497001 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:51.507004 ignition[1137]: PUT result: OK
Jul 12 00:06:51.512189 ignition[1137]: disks: disks passed
Jul 12 00:06:51.512629 ignition[1137]: Ignition finished successfully
Jul 12 00:06:51.521501 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:06:51.522100 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:06:51.524327 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:06:51.525075 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:06:51.525924 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:06:51.526728 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:06:51.545784 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:06:51.595304 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:06:51.601443 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:06:51.613869 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:06:51.715353 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:06:51.717231 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:06:51.721699 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:06:51.734476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:06:51.745218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:06:51.751227 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:06:51.751375 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:06:51.751431 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:06:51.774297 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1164)
Jul 12 00:06:51.781118 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:51.781203 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:51.781232 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:06:51.783213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:06:51.796577 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:06:51.803295 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:06:51.806742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:06:52.240621 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:06:52.260330 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:06:52.269990 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:06:52.279863 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:06:52.394614 systemd-networkd[1114]: eth0: Gained IPv6LL
Jul 12 00:06:52.685062 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:06:52.697492 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:06:52.702394 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:06:52.735840 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:06:52.739616 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:52.775386 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:06:52.787941 ignition[1277]: INFO : Ignition 2.19.0
Jul 12 00:06:52.787941 ignition[1277]: INFO : Stage: mount
Jul 12 00:06:52.792454 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:52.792454 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:52.792454 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:52.800553 ignition[1277]: INFO : PUT result: OK
Jul 12 00:06:52.805214 ignition[1277]: INFO : mount: mount passed
Jul 12 00:06:52.807049 ignition[1277]: INFO : Ignition finished successfully
Jul 12 00:06:52.811709 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:06:52.821459 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:06:52.836792 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:06:52.878306 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1288)
Jul 12 00:06:52.882153 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:06:52.882228 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:06:52.882276 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 12 00:06:52.889311 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 12 00:06:52.892791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:06:52.943127 ignition[1305]: INFO : Ignition 2.19.0
Jul 12 00:06:52.945222 ignition[1305]: INFO : Stage: files
Jul 12 00:06:52.945222 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:52.945222 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:52.945222 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:52.959520 ignition[1305]: INFO : PUT result: OK
Jul 12 00:06:52.965798 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:06:52.972405 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:06:52.972405 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:06:53.006141 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:06:53.009845 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:06:53.013672 unknown[1305]: wrote ssh authorized keys file for user: core
Jul 12 00:06:53.016502 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:06:53.020183 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:06:53.020183 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 12 00:06:53.250181 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:06:54.061581 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:06:54.066243 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:06:54.066243 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:06:54.335955 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:06:54.545978 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:06:54.545978 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:06:54.555390 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:06:54.579845 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 12 00:06:55.257355 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:06:55.653765 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:06:55.653765 ignition[1305]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:06:55.661456 ignition[1305]: INFO : files: files passed
Jul 12 00:06:55.661456 ignition[1305]: INFO : Ignition finished successfully
Jul 12 00:06:55.687347 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:06:55.709511 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:06:55.715862 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:06:55.733982 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:06:55.734410 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:06:55.754036 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:55.754036 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:55.765049 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:06:55.771932 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:06:55.775597 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:06:55.790738 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:06:55.836563 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:06:55.836769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:06:55.845512 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:06:55.848355 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:06:55.854602 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:06:55.867070 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:06:55.893204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:06:55.906577 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:06:55.931078 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:06:55.931813 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:06:55.932614 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:06:55.933244 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:06:55.933635 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:06:55.934869 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:06:55.936682 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:06:55.937637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:06:55.938511 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:06:55.939249 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:06:55.940084 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:06:55.940954 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:06:55.941822 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:06:55.942674 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:06:55.943631 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:06:55.944536 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:06:55.944866 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:06:55.946276 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:06:55.946584 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:06:55.946885 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:06:55.976421 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:06:55.977163 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:06:55.977510 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:06:55.993504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:06:55.995937 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:06:56.039026 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:06:56.039957 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:06:56.066584 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:06:56.068976 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:06:56.069999 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:06:56.086126 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:06:56.088133 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:06:56.091444 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:06:56.111511 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:06:56.111772 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:06:56.125152 ignition[1358]: INFO : Ignition 2.19.0
Jul 12 00:06:56.125152 ignition[1358]: INFO : Stage: umount
Jul 12 00:06:56.125152 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:06:56.125152 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 12 00:06:56.125152 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 12 00:06:56.125152 ignition[1358]: INFO : PUT result: OK
Jul 12 00:06:56.147290 ignition[1358]: INFO : umount: umount passed
Jul 12 00:06:56.147290 ignition[1358]: INFO : Ignition finished successfully
Jul 12 00:06:56.149874 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:06:56.152664 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:06:56.156248 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:06:56.156977 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:06:56.168248 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:06:56.168453 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:06:56.178283 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:06:56.178443 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:06:56.189653 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:06:56.189841 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:06:56.192543 systemd[1]: Stopped target network.target - Network.
Jul 12 00:06:56.194681 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:06:56.194854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:06:56.197647 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:06:56.200611 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:06:56.204901 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:06:56.210962 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:06:56.213024 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:06:56.217106 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:06:56.217213 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:06:56.224721 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:06:56.224811 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:06:56.227373 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:06:56.227487 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:06:56.236095 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:06:56.236200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:06:56.239236 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:06:56.245501 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:06:56.249340 systemd-networkd[1114]: eth0: DHCPv6 lease lost
Jul 12 00:06:56.252392 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:06:56.253552 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:06:56.253747 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:06:56.258698 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:06:56.258934 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:06:56.263090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:06:56.263208 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:06:56.267547 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:06:56.268068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:06:56.283679 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:06:56.285662 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:06:56.287248 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:06:56.290727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:06:56.296561 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:06:56.296803 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:06:56.313745 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:06:56.313908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:56.318993 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:06:56.319099 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:06:56.325471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:06:56.325590 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:06:56.381096 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:06:56.381443 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:06:56.394428 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:06:56.394615 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:06:56.400644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:06:56.400795 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:06:56.407733 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:06:56.407817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:06:56.410860 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:06:56.410971 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:06:56.413905 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:06:56.413989 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:06:56.431674 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:06:56.431808 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:06:56.448747 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:06:56.451189 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:06:56.451339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:56.461970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:06:56.462097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:06:56.482400 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:06:56.482846 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:06:56.491332 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:06:56.502686 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:06:56.530019 systemd[1]: Switching root.
Jul 12 00:06:56.574756 systemd-journald[251]: Journal stopped
Jul 12 00:06:58.642046 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:06:58.642178 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:06:58.642224 kernel: SELinux: policy capability open_perms=1
Jul 12 00:06:58.644296 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:06:58.644350 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:06:58.644383 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:06:58.644415 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:06:58.644446 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:06:58.644482 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:06:58.644513 kernel: audit: type=1403 audit(1752278816.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:06:58.644556 systemd[1]: Successfully loaded SELinux policy in 52.830ms.
Jul 12 00:06:58.644611 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.094ms.
Jul 12 00:06:58.644648 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:06:58.644682 systemd[1]: Detected virtualization amazon.
Jul 12 00:06:58.644715 systemd[1]: Detected architecture arm64.
Jul 12 00:06:58.644746 systemd[1]: Detected first boot.
Jul 12 00:06:58.644783 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:06:58.644815 zram_generator::config[1400]: No configuration found.
Jul 12 00:06:58.644850 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:06:58.644895 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:06:58.644927 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:06:58.644958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:06:58.644993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:06:58.645028 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:06:58.645058 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:06:58.645093 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:06:58.645127 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:06:58.645161 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:06:58.645194 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:06:58.645227 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:06:58.647919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:06:58.648001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:06:58.648036 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:06:58.648078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:06:58.648111 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:06:58.648145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:06:58.648175 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 12 00:06:58.648204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:06:58.648238 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:06:58.649368 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:06:58.649412 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:06:58.649452 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:06:58.649485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:06:58.649516 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:06:58.649549 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:06:58.649582 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:06:58.649614 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:06:58.649645 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:06:58.649677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:06:58.649706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:06:58.649740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:06:58.649771 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:06:58.649800 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:06:58.649830 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:06:58.649861 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:06:58.649890 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:06:58.649921 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:06:58.649952 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:06:58.649986 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:06:58.650020 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:06:58.650049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:06:58.650079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:06:58.650108 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:06:58.650137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:06:58.650166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:06:58.650197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:06:58.650229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:06:58.650281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:06:58.650316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:06:58.650346 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:06:58.650388 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:06:58.650418 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:06:58.650447 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:06:58.650479 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:06:58.650510 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:06:58.650539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:06:58.650574 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:06:58.650603 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:06:58.650632 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:06:58.650669 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:06:58.650700 systemd[1]: Stopped verity-setup.service.
Jul 12 00:06:58.650748 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:06:58.650786 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:06:58.650815 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:06:58.650844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:06:58.650878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:06:58.650909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:06:58.650941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:06:58.650971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:06:58.651002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:06:58.651035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:06:58.651064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:06:58.651093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:06:58.651124 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:06:58.652361 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:06:58.652409 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:06:58.652443 kernel: fuse: init (API version 7.39)
Jul 12 00:06:58.652532 kernel: ACPI: bus type drm_connector registered
Jul 12 00:06:58.656306 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:06:58.656361 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:06:58.656390 kernel: loop: module loaded
Jul 12 00:06:58.656422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:06:58.656461 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:06:58.656494 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:06:58.656527 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:06:58.656603 systemd-journald[1482]: Collecting audit messages is disabled.
Jul 12 00:06:58.656659 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:06:58.656695 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:06:58.656725 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:06:58.656755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:06:58.656784 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:06:58.656813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:06:58.656844 systemd-journald[1482]: Journal started
Jul 12 00:06:58.656897 systemd-journald[1482]: Runtime Journal (/run/log/journal/ec20122a6a1465f9e1458dd030476ecf) is 8.0M, max 75.3M, 67.3M free.
Jul 12 00:06:57.949503 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:06:57.976966 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 12 00:06:57.977900 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:06:58.667316 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:06:58.715663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:06:58.720476 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:06:58.720575 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:06:58.731817 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 12 00:06:58.746598 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:06:58.759820 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:06:58.762382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:06:58.780716 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:06:58.788519 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:06:58.795087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:06:58.806562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:06:58.809070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:06:58.816681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:06:58.823660 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:06:58.835933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:06:58.840711 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:06:58.847149 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:06:58.869626 systemd-journald[1482]: Time spent on flushing to /var/log/journal/ec20122a6a1465f9e1458dd030476ecf is 118.809ms for 910 entries.
Jul 12 00:06:58.869626 systemd-journald[1482]: System Journal (/var/log/journal/ec20122a6a1465f9e1458dd030476ecf) is 8.0M, max 195.6M, 187.6M free.
Jul 12 00:06:58.997674 systemd-journald[1482]: Received client request to flush runtime journal.
Jul 12 00:06:58.997808 kernel: loop0: detected capacity change from 0 to 203944
Jul 12 00:06:58.916500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:06:58.935687 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:06:58.947165 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:06:58.951043 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:06:58.965585 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:06:59.005473 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:06:59.012529 udevadm[1538]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 12 00:06:59.035661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:06:59.040918 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:06:59.057201 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:06:59.073572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:06:59.110029 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Jul 12 00:06:59.110613 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Jul 12 00:06:59.120465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:06:59.192365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:06:59.232728 kernel: loop1: detected capacity change from 0 to 114432
Jul 12 00:06:59.290324 kernel: loop2: detected capacity change from 0 to 114328
Jul 12 00:06:59.340316 kernel: loop3: detected capacity change from 0 to 52536
Jul 12 00:06:59.400808 kernel: loop4: detected capacity change from 0 to 203944
Jul 12 00:06:59.442330 kernel: loop5: detected capacity change from 0 to 114432
Jul 12 00:06:59.475366 kernel: loop6: detected capacity change from 0 to 114328
Jul 12 00:06:59.503547 kernel: loop7: detected capacity change from 0 to 52536
Jul 12 00:06:59.530131 (sd-merge)[1556]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 12 00:06:59.533083 (sd-merge)[1556]: Merged extensions into '/usr'.
Jul 12 00:06:59.545576 systemd[1]: Reloading requested from client PID 1532 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:06:59.546083 systemd[1]: Reloading...
Jul 12 00:06:59.717366 ldconfig[1528]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:06:59.777311 zram_generator::config[1588]: No configuration found.
Jul 12 00:07:00.056485 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:07:00.174910 systemd[1]: Reloading finished in 627 ms.
Jul 12 00:07:00.218363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:07:00.221699 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:07:00.226367 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:07:00.243631 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:07:00.249645 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:07:00.260659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:07:00.294411 systemd[1]: Reloading requested from client PID 1635 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:07:00.294459 systemd[1]: Reloading...
Jul 12 00:07:00.310475 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:07:00.311189 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:07:00.314096 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:07:00.314808 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Jul 12 00:07:00.314954 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Jul 12 00:07:00.323811 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:07:00.324068 systemd-tmpfiles[1636]: Skipping /boot
Jul 12 00:07:00.345627 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:07:00.345879 systemd-tmpfiles[1636]: Skipping /boot
Jul 12 00:07:00.411695 systemd-udevd[1637]: Using default interface naming scheme 'v255'.
Jul 12 00:07:00.509288 zram_generator::config[1666]: No configuration found.
Jul 12 00:07:00.736713 (udev-worker)[1668]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:07:00.913306 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1683)
Jul 12 00:07:00.922182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:07:01.107322 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 12 00:07:01.108183 systemd[1]: Reloading finished in 813 ms.
Jul 12 00:07:01.140014 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:07:01.144134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:07:01.238572 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:07:01.245298 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 12 00:07:01.282372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 12 00:07:01.291610 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:07:01.304645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:07:01.307663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:07:01.312752 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 12 00:07:01.321442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:07:01.331494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:07:01.337592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:07:01.343601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:07:01.347623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:07:01.349642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:07:01.356699 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:07:01.364710 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:07:01.376615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:07:01.379234 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 00:07:01.387611 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:07:01.395623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:07:01.437721 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:07:01.450399 lvm[1834]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:07:01.528964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:07:01.538943 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:07:01.539385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:07:01.554838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:07:01.555783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:07:01.558629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:07:01.566431 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:07:01.579012 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:07:01.582847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:07:01.585377 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:07:01.588431 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:07:01.588719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:07:01.591729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:07:01.596512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:07:01.635582 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:07:01.636103 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:07:01.649633 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:07:01.661557 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:07:01.664923 augenrules[1872]: No rules
Jul 12 00:07:01.668159 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:07:01.672103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:07:01.673662 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:07:01.710367 lvm[1873]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:07:01.725835 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:07:01.751356 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:07:01.758838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:07:01.854459 systemd-networkd[1842]: lo: Link UP
Jul 12 00:07:01.854981 systemd-networkd[1842]: lo: Gained carrier
Jul 12 00:07:01.858161 systemd-networkd[1842]: Enumeration completed
Jul 12 00:07:01.859509 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:07:01.862394 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:01.862412 systemd-networkd[1842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:07:01.865191 systemd-networkd[1842]: eth0: Link UP
Jul 12 00:07:01.865774 systemd-networkd[1842]: eth0: Gained carrier
Jul 12 00:07:01.865919 systemd-networkd[1842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:07:01.870534 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:07:01.880405 systemd-networkd[1842]: eth0: DHCPv4 address 172.31.31.176/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 12 00:07:01.895111 systemd-resolved[1843]: Positive Trust Anchors:
Jul 12 00:07:01.895162 systemd-resolved[1843]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:07:01.895227 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:07:01.910811 systemd-resolved[1843]: Defaulting to hostname 'linux'.
Jul 12 00:07:01.914546 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:07:01.917245 systemd[1]: Reached target network.target - Network.
Jul 12 00:07:01.919477 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:07:01.922142 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:07:01.924776 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:07:01.927673 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:07:01.930942 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:07:01.933887 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:07:01.936642 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:07:01.939727 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:07:01.939785 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:07:01.941877 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:07:01.945145 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:07:01.951399 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:07:01.961990 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:07:01.965650 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:07:01.968628 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:07:01.971175 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:07:01.973452 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:01.973509 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:07:01.984455 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:07:01.991700 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jul 12 00:07:02.000628 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:07:02.013682 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:07:02.025829 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:07:02.031014 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:07:02.041548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:07:02.047294 jq[1898]: false Jul 12 00:07:02.050008 systemd[1]: Started ntpd.service - Network Time Service. Jul 12 00:07:02.062867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:07:02.068511 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 12 00:07:02.086692 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:07:02.095455 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:07:02.108635 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:07:02.112671 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:07:02.113898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:07:02.137542 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:07:02.159668 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:07:02.167149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:07:02.170502 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 12 00:07:02.208095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:07:02.208571 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:07:02.217918 dbus-daemon[1897]: [system] SELinux support is enabled Jul 12 00:07:02.231585 extend-filesystems[1899]: Found loop4 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found loop5 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found loop6 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found loop7 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p1 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p2 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p3 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found usr Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p4 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p6 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p7 Jul 12 00:07:02.231585 extend-filesystems[1899]: Found nvme0n1p9 Jul 12 00:07:02.231585 extend-filesystems[1899]: Checking size of /dev/nvme0n1p9 Jul 12 00:07:02.228855 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:07:02.265414 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:07:02.274534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:07:02.274615 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:07:02.278971 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 12 00:07:02.279033 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:07:02.301301 jq[1912]: true Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: ---------------------------------------------------- Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: corporation. Support and training for ntp-4 are Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: available at https://www.nwtime.org/support Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: ---------------------------------------------------- Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: proto: precision = 0.108 usec (-23) Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: basedate set to 2025-06-29 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listen normally on 3 eth0 172.31.31.176:123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: bind(21) AF_INET6 fe80::420:80ff:fedc:104f%2#123 flags 0x11 failed: Cannot 
assign requested address Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: unable to create socket on eth0 (5) for fe80::420:80ff:fedc:104f%2#123 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: failed to init interface for address fe80::420:80ff:fedc:104f%2 Jul 12 00:07:02.322614 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:02.296737 dbus-daemon[1897]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1842 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:02.298539 ntpd[1901]: ntpd 4.2.8p17@1.4004-o Fri Jul 11 22:05:17 UTC 2025 (1): Starting Jul 12 00:07:02.298592 ntpd[1901]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 12 00:07:02.336604 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 12 00:07:02.298613 ntpd[1901]: ---------------------------------------------------- Jul 12 00:07:02.298633 ntpd[1901]: ntp-4 is maintained by Network Time Foundation, Jul 12 00:07:02.298653 ntpd[1901]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 12 00:07:02.298693 ntpd[1901]: corporation. 
Support and training for ntp-4 are Jul 12 00:07:02.298716 ntpd[1901]: available at https://www.nwtime.org/support Jul 12 00:07:02.298736 ntpd[1901]: ---------------------------------------------------- Jul 12 00:07:02.307031 ntpd[1901]: proto: precision = 0.108 usec (-23) Jul 12 00:07:02.308063 ntpd[1901]: basedate set to 2025-06-29 Jul 12 00:07:02.308095 ntpd[1901]: gps base set to 2025-06-29 (week 2373) Jul 12 00:07:02.312711 ntpd[1901]: Listen and drop on 0 v6wildcard [::]:123 Jul 12 00:07:02.312797 ntpd[1901]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 12 00:07:02.314947 ntpd[1901]: Listen normally on 2 lo 127.0.0.1:123 Jul 12 00:07:02.315023 ntpd[1901]: Listen normally on 3 eth0 172.31.31.176:123 Jul 12 00:07:02.315090 ntpd[1901]: Listen normally on 4 lo [::1]:123 Jul 12 00:07:02.317052 ntpd[1901]: bind(21) AF_INET6 fe80::420:80ff:fedc:104f%2#123 flags 0x11 failed: Cannot assign requested address Jul 12 00:07:02.317109 ntpd[1901]: unable to create socket on eth0 (5) for fe80::420:80ff:fedc:104f%2#123 Jul 12 00:07:02.317138 ntpd[1901]: failed to init interface for address fe80::420:80ff:fedc:104f%2 Jul 12 00:07:02.317202 ntpd[1901]: Listening on routing socket on fd #21 for interface updates Jul 12 00:07:02.359576 extend-filesystems[1899]: Resized partition /dev/nvme0n1p9 Jul 12 00:07:02.365055 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:07:02.366528 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 12 00:07:02.380308 extend-filesystems[1945]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:07:02.382598 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:02.389651 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:02.389651 ntpd[1901]: 12 Jul 00:07:02 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:02.382648 ntpd[1901]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 12 00:07:02.397033 tar[1915]: linux-arm64/helm Jul 12 00:07:02.416293 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 12 00:07:02.436944 (ntainerd)[1942]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:07:02.462030 jq[1938]: true Jul 12 00:07:02.485389 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 12 00:07:02.494887 update_engine[1910]: I20250712 00:07:02.490970 1910 main.cc:92] Flatcar Update Engine starting Jul 12 00:07:02.499012 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:07:02.506425 update_engine[1910]: I20250712 00:07:02.506062 1910 update_check_scheduler.cc:74] Next update check in 3m24s Jul 12 00:07:02.510702 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:07:02.539862 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 12 00:07:02.552341 extend-filesystems[1945]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 12 00:07:02.552341 extend-filesystems[1945]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:07:02.552341 extend-filesystems[1945]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 12 00:07:02.571479 extend-filesystems[1899]: Resized filesystem in /dev/nvme0n1p9 Jul 12 00:07:02.601498 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 12 00:07:02.601949 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:07:02.625978 coreos-metadata[1896]: Jul 12 00:07:02.625 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:02.633732 coreos-metadata[1896]: Jul 12 00:07:02.633 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 12 00:07:02.641774 coreos-metadata[1896]: Jul 12 00:07:02.641 INFO Fetch successful Jul 12 00:07:02.641774 coreos-metadata[1896]: Jul 12 00:07:02.641 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 12 00:07:02.650285 coreos-metadata[1896]: Jul 12 00:07:02.648 INFO Fetch successful Jul 12 00:07:02.650285 coreos-metadata[1896]: Jul 12 00:07:02.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 12 00:07:02.652289 coreos-metadata[1896]: Jul 12 00:07:02.650 INFO Fetch successful Jul 12 00:07:02.652289 coreos-metadata[1896]: Jul 12 00:07:02.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 12 00:07:02.652289 coreos-metadata[1896]: Jul 12 00:07:02.651 INFO Fetch successful Jul 12 00:07:02.652289 coreos-metadata[1896]: Jul 12 00:07:02.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 12 00:07:02.654057 coreos-metadata[1896]: Jul 12 00:07:02.654 INFO Fetch failed with 404: resource not found Jul 12 00:07:02.654057 coreos-metadata[1896]: Jul 12 00:07:02.654 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 12 00:07:02.655760 coreos-metadata[1896]: Jul 12 00:07:02.655 INFO Fetch successful Jul 12 00:07:02.655760 coreos-metadata[1896]: Jul 12 00:07:02.655 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 12 00:07:02.666571 coreos-metadata[1896]: Jul 12 00:07:02.664 INFO Fetch successful Jul 12 00:07:02.666571 coreos-metadata[1896]: Jul 12 00:07:02.664 
INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 12 00:07:02.668087 coreos-metadata[1896]: Jul 12 00:07:02.668 INFO Fetch successful Jul 12 00:07:02.668087 coreos-metadata[1896]: Jul 12 00:07:02.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 12 00:07:02.669705 coreos-metadata[1896]: Jul 12 00:07:02.669 INFO Fetch successful Jul 12 00:07:02.669705 coreos-metadata[1896]: Jul 12 00:07:02.669 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 12 00:07:02.678030 coreos-metadata[1896]: Jul 12 00:07:02.677 INFO Fetch successful Jul 12 00:07:02.721586 bash[1976]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:02.818480 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1669) Jul 12 00:07:02.808825 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:07:02.821570 systemd[1]: Starting sshkeys.service... Jul 12 00:07:02.827707 systemd-logind[1909]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:07:02.827768 systemd-logind[1909]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 12 00:07:02.833591 systemd-logind[1909]: New seat seat0. Jul 12 00:07:02.839082 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:07:02.904605 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 12 00:07:02.914237 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:07:02.924428 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 12 00:07:02.929537 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 12 00:07:03.213400 systemd-networkd[1842]: eth0: Gained IPv6LL Jul 12 00:07:03.257907 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:07:03.263744 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:07:03.278907 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 12 00:07:03.298704 dbus-daemon[1897]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 12 00:07:03.294789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:03.302366 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:07:03.305760 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 12 00:07:03.320655 dbus-daemon[1897]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1939 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 12 00:07:03.336396 systemd[1]: Starting polkit.service - Authorization Manager... 
Jul 12 00:07:03.407702 locksmithd[1954]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:07:03.450459 polkitd[2085]: Started polkitd version 121 Jul 12 00:07:03.460292 containerd[1942]: time="2025-07-12T00:07:03.455675724Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:07:03.494282 coreos-metadata[2039]: Jul 12 00:07:03.492 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 12 00:07:03.494282 coreos-metadata[2039]: Jul 12 00:07:03.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 12 00:07:03.497638 coreos-metadata[2039]: Jul 12 00:07:03.495 INFO Fetch successful Jul 12 00:07:03.497638 coreos-metadata[2039]: Jul 12 00:07:03.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 12 00:07:03.498322 coreos-metadata[2039]: Jul 12 00:07:03.498 INFO Fetch successful Jul 12 00:07:03.504435 unknown[2039]: wrote ssh authorized keys file for user: core Jul 12 00:07:03.519538 polkitd[2085]: Loading rules from directory /etc/polkit-1/rules.d Jul 12 00:07:03.519687 polkitd[2085]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 12 00:07:03.523148 polkitd[2085]: Finished loading, compiling and executing 2 rules Jul 12 00:07:03.543746 dbus-daemon[1897]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 12 00:07:03.544496 systemd[1]: Started polkit.service - Authorization Manager. Jul 12 00:07:03.551847 polkitd[2085]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 12 00:07:03.563727 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:07:03.603397 update-ssh-keys[2107]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:07:03.607427 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 12 00:07:03.615907 systemd[1]: Finished sshkeys.service. 
Jul 12 00:07:03.639579 systemd-hostnamed[1939]: Hostname set to (transient) Jul 12 00:07:03.643902 systemd-resolved[1843]: System hostname changed to 'ip-172-31-31-176'. Jul 12 00:07:03.668776 amazon-ssm-agent[2079]: Initializing new seelog logger Jul 12 00:07:03.670309 amazon-ssm-agent[2079]: New Seelog Logger Creation Complete Jul 12 00:07:03.670309 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.670309 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.671416 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 processing appconfig overrides Jul 12 00:07:03.672138 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.672824 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.672824 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 processing appconfig overrides Jul 12 00:07:03.673135 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.673296 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.674914 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 processing appconfig overrides Jul 12 00:07:03.674914 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO Proxy environment variables: Jul 12 00:07:03.677676 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.677823 amazon-ssm-agent[2079]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 12 00:07:03.678107 amazon-ssm-agent[2079]: 2025/07/12 00:07:03 processing appconfig overrides Jul 12 00:07:03.691041 containerd[1942]: time="2025-07-12T00:07:03.690949225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:07:03.698216 containerd[1942]: time="2025-07-12T00:07:03.698135413Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:03.698493 containerd[1942]: time="2025-07-12T00:07:03.698447965Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:07:03.698642 containerd[1942]: time="2025-07-12T00:07:03.698606785Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:07:03.699754 containerd[1942]: time="2025-07-12T00:07:03.699556657Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:07:03.700397 containerd[1942]: time="2025-07-12T00:07:03.700036801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.700822 containerd[1942]: time="2025-07-12T00:07:03.700584097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:03.700822 containerd[1942]: time="2025-07-12T00:07:03.700635397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.701217121Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.701301049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.701340601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.701367073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.701605585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.702172 containerd[1942]: time="2025-07-12T00:07:03.702073009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:07:03.703156 containerd[1942]: time="2025-07-12T00:07:03.703100221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:07:03.703539 containerd[1942]: time="2025-07-12T00:07:03.703491781Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:07:03.703934 containerd[1942]: time="2025-07-12T00:07:03.703888105Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 12 00:07:03.704196 containerd[1942]: time="2025-07-12T00:07:03.704158873Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:07:03.709508 containerd[1942]: time="2025-07-12T00:07:03.709399969Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:07:03.710086 containerd[1942]: time="2025-07-12T00:07:03.709675501Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:07:03.710086 containerd[1942]: time="2025-07-12T00:07:03.710024569Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:07:03.710771 containerd[1942]: time="2025-07-12T00:07:03.710383309Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 12 00:07:03.710771 containerd[1942]: time="2025-07-12T00:07:03.710461033Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:07:03.711383 containerd[1942]: time="2025-07-12T00:07:03.711165001Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:07:03.712309 containerd[1942]: time="2025-07-12T00:07:03.712228069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:07:03.712897 containerd[1942]: time="2025-07-12T00:07:03.712827469Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:07:03.713289 containerd[1942]: time="2025-07-12T00:07:03.713083705Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:07:03.713289 containerd[1942]: time="2025-07-12T00:07:03.713139577Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 12 00:07:03.713289 containerd[1942]: time="2025-07-12T00:07:03.713175757Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713289 containerd[1942]: time="2025-07-12T00:07:03.713212141Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713243701Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713574973Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713638633Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713671969Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713703445Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713731825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713776165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713810269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 12 00:07:03.713915 containerd[1942]: time="2025-07-12T00:07:03.713841721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.713874301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714621181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714695917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714730741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714780697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714813361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714851725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714885697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714925909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714958729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.714995293Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.715043713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.715075057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.715340 containerd[1942]: time="2025-07-12T00:07:03.715102933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716208529Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716617273Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716655553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716686345Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716711761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716776201Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716806069Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:07:03.717305 containerd[1942]: time="2025-07-12T00:07:03.716833129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 12 00:07:03.719673 containerd[1942]: time="2025-07-12T00:07:03.718395445Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:07:03.719673 containerd[1942]: time="2025-07-12T00:07:03.718539517Z" level=info msg="Connect containerd service" Jul 12 00:07:03.719673 containerd[1942]: time="2025-07-12T00:07:03.718616293Z" level=info msg="using legacy CRI server" Jul 12 00:07:03.719673 containerd[1942]: time="2025-07-12T00:07:03.718638541Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:07:03.719673 containerd[1942]: time="2025-07-12T00:07:03.718853929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:07:03.721002 containerd[1942]: time="2025-07-12T00:07:03.720943993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:07:03.721535 containerd[1942]: time="2025-07-12T00:07:03.721450681Z" level=info msg="Start subscribing containerd event" Jul 12 
00:07:03.722304 containerd[1942]: time="2025-07-12T00:07:03.721696861Z" level=info msg="Start recovering state" Jul 12 00:07:03.722304 containerd[1942]: time="2025-07-12T00:07:03.721847989Z" level=info msg="Start event monitor" Jul 12 00:07:03.722304 containerd[1942]: time="2025-07-12T00:07:03.721877989Z" level=info msg="Start snapshots syncer" Jul 12 00:07:03.722304 containerd[1942]: time="2025-07-12T00:07:03.721900225Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:07:03.722304 containerd[1942]: time="2025-07-12T00:07:03.721919437Z" level=info msg="Start streaming server" Jul 12 00:07:03.723759 containerd[1942]: time="2025-07-12T00:07:03.723703873Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:07:03.724624 containerd[1942]: time="2025-07-12T00:07:03.724577353Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:07:03.734064 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:07:03.738626 containerd[1942]: time="2025-07-12T00:07:03.738571681Z" level=info msg="containerd successfully booted in 0.291112s" Jul 12 00:07:03.778226 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO https_proxy: Jul 12 00:07:03.880060 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO http_proxy: Jul 12 00:07:03.981481 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO no_proxy: Jul 12 00:07:04.080957 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO Checking if agent identity type OnPrem can be assumed Jul 12 00:07:04.179279 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO Checking if agent identity type EC2 can be assumed Jul 12 00:07:04.279711 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO Agent will take identity from EC2 Jul 12 00:07:04.378551 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:04.479398 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 
00:07:04.578720 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 12 00:07:04.616130 tar[1915]: linux-arm64/LICENSE Jul 12 00:07:04.618641 tar[1915]: linux-arm64/README.md Jul 12 00:07:04.658898 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:07:04.678197 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 12 00:07:04.746344 sshd_keygen[1932]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] Starting Core Agent Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [Registrar] Starting registrar module Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:04 INFO [EC2Identity] EC2 registration was successful. Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:04 INFO [CredentialRefresher] credentialRefresher has started Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:04 INFO [CredentialRefresher] Starting credentials refresher loop Jul 12 00:07:04.755652 amazon-ssm-agent[2079]: 2025-07-12 00:07:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 12 00:07:04.777628 amazon-ssm-agent[2079]: 2025-07-12 00:07:04 INFO [CredentialRefresher] Next credential rotation will be in 31.616654401966667 minutes Jul 12 00:07:04.796088 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jul 12 00:07:04.810448 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:07:04.824800 systemd[1]: Started sshd@0-172.31.31.176:22-139.178.89.65:59126.service - OpenSSH per-connection server daemon (139.178.89.65:59126). Jul 12 00:07:04.839806 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:07:04.840469 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:07:04.856872 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:07:04.910450 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:07:04.926068 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:07:04.935504 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 12 00:07:04.938570 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 00:07:05.042248 sshd[2134]: Accepted publickey for core from 139.178.89.65 port 59126 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:05.048141 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:05.067995 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:07:05.078062 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:07:05.091421 systemd-logind[1909]: New session 1 of user core. Jul 12 00:07:05.128081 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:07:05.142092 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 12 00:07:05.161429 (systemd)[2145]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:07:05.299695 ntpd[1901]: Listen normally on 6 eth0 [fe80::420:80ff:fedc:104f%2]:123 Jul 12 00:07:05.301089 ntpd[1901]: 12 Jul 00:07:05 ntpd[1901]: Listen normally on 6 eth0 [fe80::420:80ff:fedc:104f%2]:123 Jul 12 00:07:05.429081 systemd[2145]: Queued start job for default target default.target. Jul 12 00:07:05.438658 systemd[2145]: Created slice app.slice - User Application Slice. Jul 12 00:07:05.438742 systemd[2145]: Reached target paths.target - Paths. Jul 12 00:07:05.438777 systemd[2145]: Reached target timers.target - Timers. Jul 12 00:07:05.449652 systemd[2145]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:07:05.469859 systemd[2145]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:07:05.470215 systemd[2145]: Reached target sockets.target - Sockets. Jul 12 00:07:05.470286 systemd[2145]: Reached target basic.target - Basic System. Jul 12 00:07:05.470422 systemd[2145]: Reached target default.target - Main User Target. Jul 12 00:07:05.470504 systemd[2145]: Startup finished in 293ms. Jul 12 00:07:05.470891 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:07:05.482704 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:07:05.576507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:05.587765 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:07:05.593101 systemd[1]: Startup finished in 1.183s (kernel) + 10.095s (initrd) + 8.734s (userspace) = 20.013s. Jul 12 00:07:05.595988 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:05.674023 systemd[1]: Started sshd@1-172.31.31.176:22-139.178.89.65:59140.service - OpenSSH per-connection server daemon (139.178.89.65:59140). 
Jul 12 00:07:05.786915 amazon-ssm-agent[2079]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 12 00:07:05.882592 sshd[2166]: Accepted publickey for core from 139.178.89.65 port 59140 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:05.887653 amazon-ssm-agent[2079]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2169) started Jul 12 00:07:05.888082 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:05.902723 systemd-logind[1909]: New session 2 of user core. Jul 12 00:07:05.911239 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:07:05.990988 amazon-ssm-agent[2079]: 2025-07-12 00:07:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 12 00:07:06.058693 sshd[2166]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:06.066199 systemd[1]: sshd@1-172.31.31.176:22-139.178.89.65:59140.service: Deactivated successfully. Jul 12 00:07:06.072463 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:07:06.077861 systemd-logind[1909]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:07:06.102863 systemd[1]: Started sshd@2-172.31.31.176:22-139.178.89.65:59144.service - OpenSSH per-connection server daemon (139.178.89.65:59144). Jul 12 00:07:06.105148 systemd-logind[1909]: Removed session 2. Jul 12 00:07:06.291227 sshd[2187]: Accepted publickey for core from 139.178.89.65 port 59144 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:06.294594 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:06.305363 systemd-logind[1909]: New session 3 of user core. Jul 12 00:07:06.310608 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 12 00:07:06.436391 sshd[2187]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:06.444157 systemd[1]: sshd@2-172.31.31.176:22-139.178.89.65:59144.service: Deactivated successfully. Jul 12 00:07:06.448978 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:07:06.451216 systemd-logind[1909]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:07:06.454177 systemd-logind[1909]: Removed session 3. Jul 12 00:07:06.474385 systemd[1]: Started sshd@3-172.31.31.176:22-139.178.89.65:59146.service - OpenSSH per-connection server daemon (139.178.89.65:59146). Jul 12 00:07:06.651642 sshd[2194]: Accepted publickey for core from 139.178.89.65 port 59146 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:06.658783 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:06.672738 systemd-logind[1909]: New session 4 of user core. Jul 12 00:07:06.685643 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:07:06.703624 kubelet[2159]: E0712 00:07:06.703426 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:06.708135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:06.708631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:06.710475 systemd[1]: kubelet.service: Consumed 1.521s CPU time. Jul 12 00:07:06.818693 sshd[2194]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:06.825603 systemd[1]: sshd@3-172.31.31.176:22-139.178.89.65:59146.service: Deactivated successfully. Jul 12 00:07:06.830033 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 12 00:07:06.831947 systemd-logind[1909]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:07:06.834043 systemd-logind[1909]: Removed session 4. Jul 12 00:07:06.860932 systemd[1]: Started sshd@4-172.31.31.176:22-139.178.89.65:59162.service - OpenSSH per-connection server daemon (139.178.89.65:59162). Jul 12 00:07:07.025350 sshd[2203]: Accepted publickey for core from 139.178.89.65 port 59162 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:07.028129 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:07.037066 systemd-logind[1909]: New session 5 of user core. Jul 12 00:07:07.046586 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:07:07.170547 sudo[2206]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:07:07.171561 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:07.187166 sudo[2206]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:07.211038 sshd[2203]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:07.218573 systemd[1]: sshd@4-172.31.31.176:22-139.178.89.65:59162.service: Deactivated successfully. Jul 12 00:07:07.222018 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:07:07.223822 systemd-logind[1909]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:07:07.226501 systemd-logind[1909]: Removed session 5. Jul 12 00:07:07.248142 systemd[1]: Started sshd@5-172.31.31.176:22-139.178.89.65:59164.service - OpenSSH per-connection server daemon (139.178.89.65:59164). Jul 12 00:07:07.443915 sshd[2211]: Accepted publickey for core from 139.178.89.65 port 59164 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:07.446135 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:07.455979 systemd-logind[1909]: New session 6 of user core. 
Jul 12 00:07:07.462636 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 00:07:07.577601 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:07:07.578694 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:07.593621 sudo[2215]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:07.604472 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:07:07.605142 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:07.627857 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:07.642174 auditctl[2218]: No rules Jul 12 00:07:07.643029 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:07:07.643466 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:07.654004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:07:07.706483 augenrules[2236]: No rules Jul 12 00:07:07.708765 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:07:07.711463 sudo[2214]: pam_unix(sudo:session): session closed for user root Jul 12 00:07:07.736475 sshd[2211]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:07.742328 systemd[1]: sshd@5-172.31.31.176:22-139.178.89.65:59164.service: Deactivated successfully. Jul 12 00:07:07.742909 systemd-logind[1909]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:07:07.746202 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:07:07.751444 systemd-logind[1909]: Removed session 6. Jul 12 00:07:07.771792 systemd[1]: Started sshd@6-172.31.31.176:22-139.178.89.65:59172.service - OpenSSH per-connection server daemon (139.178.89.65:59172). 
Jul 12 00:07:07.948304 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 59172 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:07:07.951152 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:07:07.960961 systemd-logind[1909]: New session 7 of user core. Jul 12 00:07:07.968632 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:07:08.074924 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:07:08.075753 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:07:08.623464 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:07:08.635608 (dockerd)[2263]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:07:09.083853 dockerd[2263]: time="2025-07-12T00:07:09.083406208Z" level=info msg="Starting up" Jul 12 00:07:09.285126 systemd[1]: var-lib-docker-metacopy\x2dcheck2402895140-merged.mount: Deactivated successfully. Jul 12 00:07:09.691559 systemd-resolved[1843]: Clock change detected. Flushing caches. Jul 12 00:07:09.692599 dockerd[2263]: time="2025-07-12T00:07:09.692173412Z" level=info msg="Loading containers: start." Jul 12 00:07:09.866552 kernel: Initializing XFRM netlink socket Jul 12 00:07:09.901546 (udev-worker)[2286]: Network interface NamePolicy= disabled on kernel command line. Jul 12 00:07:10.010098 systemd-networkd[1842]: docker0: Link UP Jul 12 00:07:10.035434 dockerd[2263]: time="2025-07-12T00:07:10.035374613Z" level=info msg="Loading containers: done." 
Jul 12 00:07:10.065082 dockerd[2263]: time="2025-07-12T00:07:10.065006273Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:07:10.065341 dockerd[2263]: time="2025-07-12T00:07:10.065168021Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:07:10.065433 dockerd[2263]: time="2025-07-12T00:07:10.065396957Z" level=info msg="Daemon has completed initialization" Jul 12 00:07:10.129437 dockerd[2263]: time="2025-07-12T00:07:10.129182106Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:07:10.129867 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:07:10.591837 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2982559930-merged.mount: Deactivated successfully. Jul 12 00:07:11.293440 containerd[1942]: time="2025-07-12T00:07:11.293063707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:07:11.927697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676744268.mount: Deactivated successfully. 
Jul 12 00:07:13.256196 containerd[1942]: time="2025-07-12T00:07:13.256097601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:13.258535 containerd[1942]: time="2025-07-12T00:07:13.258398493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651793" Jul 12 00:07:13.259483 containerd[1942]: time="2025-07-12T00:07:13.258981141Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:13.265339 containerd[1942]: time="2025-07-12T00:07:13.265243965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:13.268470 containerd[1942]: time="2025-07-12T00:07:13.267961485Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.974830374s" Jul 12 00:07:13.268470 containerd[1942]: time="2025-07-12T00:07:13.268042545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:07:13.271001 containerd[1942]: time="2025-07-12T00:07:13.270794061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:07:14.605599 containerd[1942]: time="2025-07-12T00:07:14.605534892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:14.609529 containerd[1942]: time="2025-07-12T00:07:14.609250224Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459677" Jul 12 00:07:14.609682 containerd[1942]: time="2025-07-12T00:07:14.609547884Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:14.616671 containerd[1942]: time="2025-07-12T00:07:14.616573080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:14.619348 containerd[1942]: time="2025-07-12T00:07:14.618951024Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.348089511s" Jul 12 00:07:14.619348 containerd[1942]: time="2025-07-12T00:07:14.619019796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:07:14.620009 containerd[1942]: time="2025-07-12T00:07:14.619954776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:07:15.745823 containerd[1942]: time="2025-07-12T00:07:15.745753862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:15.747983 containerd[1942]: 
time="2025-07-12T00:07:15.747892106Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125066" Jul 12 00:07:15.748384 containerd[1942]: time="2025-07-12T00:07:15.748314986Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:15.754010 containerd[1942]: time="2025-07-12T00:07:15.753957326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:15.758080 containerd[1942]: time="2025-07-12T00:07:15.757891778Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.137875262s" Jul 12 00:07:15.758080 containerd[1942]: time="2025-07-12T00:07:15.757948898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:07:15.759093 containerd[1942]: time="2025-07-12T00:07:15.758767070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:07:16.997139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46589352.mount: Deactivated successfully. Jul 12 00:07:17.350863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:07:17.364206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:17.773937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:07:17.782594 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:17.799179 containerd[1942]: time="2025-07-12T00:07:17.799079416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:17.802384 containerd[1942]: time="2025-07-12T00:07:17.801744124Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915957" Jul 12 00:07:17.805838 containerd[1942]: time="2025-07-12T00:07:17.805717912Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:17.813056 containerd[1942]: time="2025-07-12T00:07:17.812992012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:17.815495 containerd[1942]: time="2025-07-12T00:07:17.814324192Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 2.05549423s" Jul 12 00:07:17.815495 containerd[1942]: time="2025-07-12T00:07:17.815131372Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:07:17.816651 containerd[1942]: time="2025-07-12T00:07:17.816303880Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:07:17.868970 kubelet[2479]: E0712 
00:07:17.868908 2479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:17.876003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:17.876371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:18.426528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243948363.mount: Deactivated successfully. Jul 12 00:07:19.684062 containerd[1942]: time="2025-07-12T00:07:19.683963645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:19.686536 containerd[1942]: time="2025-07-12T00:07:19.686388341Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jul 12 00:07:19.688626 containerd[1942]: time="2025-07-12T00:07:19.688509761Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:19.698197 containerd[1942]: time="2025-07-12T00:07:19.698098157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:19.700999 containerd[1942]: time="2025-07-12T00:07:19.700756169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.884383001s" Jul 12 00:07:19.700999 containerd[1942]: time="2025-07-12T00:07:19.700839233Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:07:19.701903 containerd[1942]: time="2025-07-12T00:07:19.701596181Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:07:20.238849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497923413.mount: Deactivated successfully. Jul 12 00:07:20.260247 containerd[1942]: time="2025-07-12T00:07:20.260159860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:20.262401 containerd[1942]: time="2025-07-12T00:07:20.262329832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jul 12 00:07:20.265430 containerd[1942]: time="2025-07-12T00:07:20.265314892Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:20.270961 containerd[1942]: time="2025-07-12T00:07:20.270840868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:20.273347 containerd[1942]: time="2025-07-12T00:07:20.272823904Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 571.162695ms" Jul 12 
00:07:20.273347 containerd[1942]: time="2025-07-12T00:07:20.272893012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:07:20.273696 containerd[1942]: time="2025-07-12T00:07:20.273574576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:07:20.830225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052504545.mount: Deactivated successfully. Jul 12 00:07:22.879069 containerd[1942]: time="2025-07-12T00:07:22.879001005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.881810 containerd[1942]: time="2025-07-12T00:07:22.881718537Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Jul 12 00:07:22.883006 containerd[1942]: time="2025-07-12T00:07:22.882915849Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.891507 containerd[1942]: time="2025-07-12T00:07:22.890033505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:22.893294 containerd[1942]: time="2025-07-12T00:07:22.893222145Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.619587641s" Jul 12 00:07:22.893542 containerd[1942]: time="2025-07-12T00:07:22.893498409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:07:27.938047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 00:07:27.948954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:28.358862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:28.369017 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:07:28.454691 kubelet[2625]: E0712 00:07:28.454617 2625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:07:28.459446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:07:28.460898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:07:31.872313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:31.886988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:31.940585 systemd[1]: Reloading requested from client PID 2639 ('systemctl') (unit session-7.scope)... Jul 12 00:07:31.940614 systemd[1]: Reloading... Jul 12 00:07:32.230507 zram_generator::config[2683]: No configuration found. Jul 12 00:07:32.506534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:07:32.698539 systemd[1]: Reloading finished in 757 ms. 
Jul 12 00:07:32.805026 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:32.811718 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:32.812655 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:07:32.813119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:32.825122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:07:33.123702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:07:33.140290 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:07:33.212519 kubelet[2746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:07:33.212519 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:07:33.212519 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:07:33.212519 kubelet[2746]: I0712 00:07:33.212386 2746 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:07:33.677545 kubelet[2746]: I0712 00:07:33.677001 2746 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:07:33.677545 kubelet[2746]: I0712 00:07:33.677048 2746 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:07:33.677835 kubelet[2746]: I0712 00:07:33.677809 2746 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:07:33.723045 kubelet[2746]: E0712 00:07:33.722986 2746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.176:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:33.724331 kubelet[2746]: I0712 00:07:33.724295 2746 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:07:33.743405 kubelet[2746]: E0712 00:07:33.743342 2746 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:07:33.743772 kubelet[2746]: I0712 00:07:33.743635 2746 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:07:33.750953 kubelet[2746]: I0712 00:07:33.750894 2746 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:07:33.751564 kubelet[2746]: I0712 00:07:33.751519 2746 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:07:33.751931 kubelet[2746]: I0712 00:07:33.751863 2746 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:07:33.752229 kubelet[2746]: I0712 00:07:33.751921 2746 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-176","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Jul 12 00:07:33.752419 kubelet[2746]: I0712 00:07:33.752360 2746 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:07:33.752419 kubelet[2746]: I0712 00:07:33.752385 2746 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:07:33.752894 kubelet[2746]: I0712 00:07:33.752847 2746 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:33.759231 kubelet[2746]: I0712 00:07:33.758568 2746 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:07:33.759231 kubelet[2746]: I0712 00:07:33.758631 2746 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:07:33.759231 kubelet[2746]: I0712 00:07:33.758673 2746 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:07:33.759231 kubelet[2746]: I0712 00:07:33.758843 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:07:33.760507 kubelet[2746]: W0712 00:07:33.760179 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-176&limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused Jul 12 00:07:33.760507 kubelet[2746]: E0712 00:07:33.760297 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-176&limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:33.767976 kubelet[2746]: I0712 00:07:33.766853 2746 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:07:33.768630 kubelet[2746]: I0712 00:07:33.768592 2746 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are 
in static kubelet mode" Jul 12 00:07:33.769211 kubelet[2746]: W0712 00:07:33.769177 2746 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:07:33.771379 kubelet[2746]: I0712 00:07:33.771331 2746 server.go:1274] "Started kubelet" Jul 12 00:07:33.771853 kubelet[2746]: W0712 00:07:33.771779 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused Jul 12 00:07:33.772039 kubelet[2746]: E0712 00:07:33.772001 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:33.773154 kubelet[2746]: I0712 00:07:33.773079 2746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:07:33.775522 kubelet[2746]: I0712 00:07:33.775441 2746 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:07:33.778899 kubelet[2746]: I0712 00:07:33.778808 2746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:07:33.779556 kubelet[2746]: I0712 00:07:33.779518 2746 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:07:33.780304 kubelet[2746]: I0712 00:07:33.780172 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:07:33.783826 kubelet[2746]: E0712 00:07:33.780107 2746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.176:6443/api/v1/namespaces/default/events\": dial tcp 
172.31.31.176:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-176.1851585562776733 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-176,UID:ip-172-31-31-176,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-176,},FirstTimestamp:2025-07-12 00:07:33.771290419 +0000 UTC m=+0.624687520,LastTimestamp:2025-07-12 00:07:33.771290419 +0000 UTC m=+0.624687520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-176,}" Jul 12 00:07:33.786310 kubelet[2746]: I0712 00:07:33.786266 2746 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:07:33.789449 kubelet[2746]: I0712 00:07:33.789398 2746 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:07:33.790569 kubelet[2746]: E0712 00:07:33.790520 2746 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-176\" not found" Jul 12 00:07:33.794158 kubelet[2746]: I0712 00:07:33.791188 2746 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:07:33.794405 kubelet[2746]: I0712 00:07:33.791320 2746 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:07:33.794586 kubelet[2746]: W0712 00:07:33.792543 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused Jul 12 00:07:33.794938 kubelet[2746]: E0712 00:07:33.794902 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://172.31.31.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:33.795076 kubelet[2746]: I0712 00:07:33.794006 2746 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:07:33.795391 kubelet[2746]: I0712 00:07:33.795356 2746 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:07:33.796025 kubelet[2746]: E0712 00:07:33.793256 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-176?timeout=10s\": dial tcp 172.31.31.176:6443: connect: connection refused" interval="200ms" Jul 12 00:07:33.798932 kubelet[2746]: I0712 00:07:33.798882 2746 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:07:33.802494 kubelet[2746]: E0712 00:07:33.799125 2746 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:07:33.837735 kubelet[2746]: I0712 00:07:33.837641 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:07:33.841083 kubelet[2746]: I0712 00:07:33.840598 2746 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:07:33.841083 kubelet[2746]: I0712 00:07:33.840647 2746 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:07:33.841083 kubelet[2746]: I0712 00:07:33.840679 2746 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:07:33.841083 kubelet[2746]: E0712 00:07:33.840757 2746 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:07:33.846875 kubelet[2746]: I0712 00:07:33.846810 2746 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:07:33.846875 kubelet[2746]: I0712 00:07:33.846849 2746 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:07:33.846875 kubelet[2746]: I0712 00:07:33.846885 2746 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:07:33.850027 kubelet[2746]: W0712 00:07:33.849977 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused Jul 12 00:07:33.850731 kubelet[2746]: E0712 00:07:33.850646 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:33.852046 kubelet[2746]: I0712 00:07:33.851999 2746 policy_none.go:49] "None policy: Start" Jul 12 00:07:33.853357 kubelet[2746]: I0712 00:07:33.853316 2746 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:07:33.853560 kubelet[2746]: I0712 00:07:33.853370 2746 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:07:33.868709 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Jul 12 00:07:33.885147 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:07:33.893855 kubelet[2746]: E0712 00:07:33.893800 2746 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-176\" not found" Jul 12 00:07:33.894443 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:07:33.909395 kubelet[2746]: I0712 00:07:33.909269 2746 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:07:33.910019 kubelet[2746]: I0712 00:07:33.909711 2746 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:07:33.910019 kubelet[2746]: I0712 00:07:33.909752 2746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:07:33.910719 kubelet[2746]: I0712 00:07:33.910573 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:07:33.915195 kubelet[2746]: E0712 00:07:33.914966 2746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-176\" not found" Jul 12 00:07:33.961671 systemd[1]: Created slice kubepods-burstable-pode5abfe8f9d530c7172b17bc88dc2ca64.slice - libcontainer container kubepods-burstable-pode5abfe8f9d530c7172b17bc88dc2ca64.slice. Jul 12 00:07:33.985290 systemd[1]: Created slice kubepods-burstable-pod964064b1bd5efcfb81a7fd14496a0220.slice - libcontainer container kubepods-burstable-pod964064b1bd5efcfb81a7fd14496a0220.slice. 
Jul 12 00:07:33.995713 kubelet[2746]: I0712 00:07:33.995270 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176" Jul 12 00:07:33.995713 kubelet[2746]: I0712 00:07:33.995359 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/964064b1bd5efcfb81a7fd14496a0220-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-176\" (UID: \"964064b1bd5efcfb81a7fd14496a0220\") " pod="kube-system/kube-scheduler-ip-172-31-31-176" Jul 12 00:07:33.995713 kubelet[2746]: I0712 00:07:33.995400 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-ca-certs\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176" Jul 12 00:07:33.995713 kubelet[2746]: I0712 00:07:33.995440 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176" Jul 12 00:07:33.995713 kubelet[2746]: I0712 00:07:33.995516 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-176" Jul 12 00:07:33.996097 kubelet[2746]: I0712 00:07:33.995555 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176" Jul 12 00:07:33.996097 kubelet[2746]: I0712 00:07:33.995594 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176" Jul 12 00:07:33.996097 kubelet[2746]: I0712 00:07:33.995631 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176" Jul 12 00:07:33.996097 kubelet[2746]: I0712 00:07:33.995669 2746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176" Jul 12 00:07:33.997642 kubelet[2746]: E0712 00:07:33.997564 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-176?timeout=10s\": dial tcp 
172.31.31.176:6443: connect: connection refused" interval="400ms" Jul 12 00:07:34.007705 systemd[1]: Created slice kubepods-burstable-pod76b984a022646b012e8b3e23c3ab1152.slice - libcontainer container kubepods-burstable-pod76b984a022646b012e8b3e23c3ab1152.slice. Jul 12 00:07:34.012339 kubelet[2746]: I0712 00:07:34.012299 2746 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176" Jul 12 00:07:34.014146 kubelet[2746]: E0712 00:07:34.013917 2746 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.176:6443/api/v1/nodes\": dial tcp 172.31.31.176:6443: connect: connection refused" node="ip-172-31-31-176" Jul 12 00:07:34.053099 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 12 00:07:34.217872 kubelet[2746]: I0712 00:07:34.217719 2746 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176" Jul 12 00:07:34.218553 kubelet[2746]: E0712 00:07:34.218278 2746 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.176:6443/api/v1/nodes\": dial tcp 172.31.31.176:6443: connect: connection refused" node="ip-172-31-31-176" Jul 12 00:07:34.278727 containerd[1942]: time="2025-07-12T00:07:34.278659530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-176,Uid:e5abfe8f9d530c7172b17bc88dc2ca64,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:34.302253 containerd[1942]: time="2025-07-12T00:07:34.302111874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-176,Uid:964064b1bd5efcfb81a7fd14496a0220,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:34.316209 containerd[1942]: time="2025-07-12T00:07:34.315809406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-176,Uid:76b984a022646b012e8b3e23c3ab1152,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:34.399005 kubelet[2746]: E0712 00:07:34.398937 2746 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-176?timeout=10s\": dial tcp 172.31.31.176:6443: connect: connection refused" interval="800ms" Jul 12 00:07:34.621726 kubelet[2746]: I0712 00:07:34.620689 2746 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176" Jul 12 00:07:34.621726 kubelet[2746]: E0712 00:07:34.621217 2746 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.176:6443/api/v1/nodes\": dial tcp 172.31.31.176:6443: connect: connection refused" node="ip-172-31-31-176" Jul 12 00:07:34.704243 kubelet[2746]: W0712 00:07:34.704121 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-176&limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused Jul 12 00:07:34.704243 kubelet[2746]: E0712 00:07:34.704229 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.176:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-176&limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:07:34.802761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1664433729.mount: Deactivated successfully. 
Jul 12 00:07:34.819557 containerd[1942]: time="2025-07-12T00:07:34.819177236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:07:34.823917 containerd[1942]: time="2025-07-12T00:07:34.823705208Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:07:34.826981 containerd[1942]: time="2025-07-12T00:07:34.826020392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:07:34.829425 containerd[1942]: time="2025-07-12T00:07:34.829186844Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:07:34.831980 containerd[1942]: time="2025-07-12T00:07:34.831887156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:07:34.834113 containerd[1942]: time="2025-07-12T00:07:34.834019916Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jul 12 00:07:34.835195 containerd[1942]: time="2025-07-12T00:07:34.835056992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:07:34.840669 containerd[1942]: time="2025-07-12T00:07:34.840445964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:07:34.844075 containerd[1942]: time="2025-07-12T00:07:34.844007756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.216598ms"
Jul 12 00:07:34.858610 containerd[1942]: time="2025-07-12T00:07:34.858339177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.102971ms"
Jul 12 00:07:34.872216 containerd[1942]: time="2025-07-12T00:07:34.872012853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.080795ms"
Jul 12 00:07:35.063834 kubelet[2746]: W0712 00:07:35.063588 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused
Jul 12 00:07:35.063834 kubelet[2746]: E0712 00:07:35.063710 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.176:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:07:35.065221 containerd[1942]: time="2025-07-12T00:07:35.063893994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:07:35.065221 containerd[1942]: time="2025-07-12T00:07:35.065050662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:07:35.065717 containerd[1942]: time="2025-07-12T00:07:35.065116866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.065717 containerd[1942]: time="2025-07-12T00:07:35.065331570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.071647 containerd[1942]: time="2025-07-12T00:07:35.069795894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:07:35.071647 containerd[1942]: time="2025-07-12T00:07:35.069904326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:07:35.071647 containerd[1942]: time="2025-07-12T00:07:35.069944262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.071647 containerd[1942]: time="2025-07-12T00:07:35.070884534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.076429 containerd[1942]: time="2025-07-12T00:07:35.076096578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:07:35.076429 containerd[1942]: time="2025-07-12T00:07:35.076242486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:07:35.076429 containerd[1942]: time="2025-07-12T00:07:35.076272870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.077575 containerd[1942]: time="2025-07-12T00:07:35.076587330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:07:35.132921 systemd[1]: Started cri-containerd-14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178.scope - libcontainer container 14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178.
Jul 12 00:07:35.139014 systemd[1]: Started cri-containerd-2788ef00649606e0134408540919ebb47460c8a0b0b62e045928a7e28db6bca8.scope - libcontainer container 2788ef00649606e0134408540919ebb47460c8a0b0b62e045928a7e28db6bca8.
Jul 12 00:07:35.144426 kubelet[2746]: W0712 00:07:35.143296 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused
Jul 12 00:07:35.144426 kubelet[2746]: E0712 00:07:35.143431 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.176:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:07:35.144693 systemd[1]: Started cri-containerd-f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7.scope - libcontainer container f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7.
Jul 12 00:07:35.200582 kubelet[2746]: E0712 00:07:35.200259 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-176?timeout=10s\": dial tcp 172.31.31.176:6443: connect: connection refused" interval="1.6s"
Jul 12 00:07:35.263859 containerd[1942]: time="2025-07-12T00:07:35.263509555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-176,Uid:964064b1bd5efcfb81a7fd14496a0220,Namespace:kube-system,Attempt:0,} returns sandbox id \"14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178\""
Jul 12 00:07:35.275633 containerd[1942]: time="2025-07-12T00:07:35.275163643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-176,Uid:76b984a022646b012e8b3e23c3ab1152,Namespace:kube-system,Attempt:0,} returns sandbox id \"2788ef00649606e0134408540919ebb47460c8a0b0b62e045928a7e28db6bca8\""
Jul 12 00:07:35.276590 containerd[1942]: time="2025-07-12T00:07:35.276428023Z" level=info msg="CreateContainer within sandbox \"14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 12 00:07:35.287271 containerd[1942]: time="2025-07-12T00:07:35.286033243Z" level=info msg="CreateContainer within sandbox \"2788ef00649606e0134408540919ebb47460c8a0b0b62e045928a7e28db6bca8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 12 00:07:35.308023 containerd[1942]: time="2025-07-12T00:07:35.307947403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-176,Uid:e5abfe8f9d530c7172b17bc88dc2ca64,Namespace:kube-system,Attempt:0,} returns sandbox id \"f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7\""
Jul 12 00:07:35.313800 containerd[1942]: time="2025-07-12T00:07:35.313735723Z" level=info msg="CreateContainer within sandbox \"f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 12 00:07:35.317656 kubelet[2746]: W0712 00:07:35.317496 2746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.176:6443: connect: connection refused
Jul 12 00:07:35.317656 kubelet[2746]: E0712 00:07:35.317599 2746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.176:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:07:35.320018 containerd[1942]: time="2025-07-12T00:07:35.319941175Z" level=info msg="CreateContainer within sandbox \"14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40\""
Jul 12 00:07:35.321570 containerd[1942]: time="2025-07-12T00:07:35.321411967Z" level=info msg="StartContainer for \"82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40\""
Jul 12 00:07:35.336949 containerd[1942]: time="2025-07-12T00:07:35.336890587Z" level=info msg="CreateContainer within sandbox \"2788ef00649606e0134408540919ebb47460c8a0b0b62e045928a7e28db6bca8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12f51a1fd38d5eac2dbb926d3a99afc386e085c19418d7919128b1985411d9d5\""
Jul 12 00:07:35.339320 containerd[1942]: time="2025-07-12T00:07:35.339154435Z" level=info msg="StartContainer for \"12f51a1fd38d5eac2dbb926d3a99afc386e085c19418d7919128b1985411d9d5\""
Jul 12 00:07:35.360730 containerd[1942]: time="2025-07-12T00:07:35.360644923Z" level=info msg="CreateContainer within sandbox \"f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8\""
Jul 12 00:07:35.362404 containerd[1942]: time="2025-07-12T00:07:35.362060731Z" level=info msg="StartContainer for \"5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8\""
Jul 12 00:07:35.380791 systemd[1]: Started cri-containerd-82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40.scope - libcontainer container 82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40.
Jul 12 00:07:35.419102 systemd[1]: Started cri-containerd-12f51a1fd38d5eac2dbb926d3a99afc386e085c19418d7919128b1985411d9d5.scope - libcontainer container 12f51a1fd38d5eac2dbb926d3a99afc386e085c19418d7919128b1985411d9d5.
Jul 12 00:07:35.427854 kubelet[2746]: I0712 00:07:35.427748 2746 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176"
Jul 12 00:07:35.429758 kubelet[2746]: E0712 00:07:35.429583 2746 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.176:6443/api/v1/nodes\": dial tcp 172.31.31.176:6443: connect: connection refused" node="ip-172-31-31-176"
Jul 12 00:07:35.469443 systemd[1]: Started cri-containerd-5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8.scope - libcontainer container 5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8.
Jul 12 00:07:35.540287 containerd[1942]: time="2025-07-12T00:07:35.540198512Z" level=info msg="StartContainer for \"82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40\" returns successfully"
Jul 12 00:07:35.591057 containerd[1942]: time="2025-07-12T00:07:35.590729756Z" level=info msg="StartContainer for \"12f51a1fd38d5eac2dbb926d3a99afc386e085c19418d7919128b1985411d9d5\" returns successfully"
Jul 12 00:07:35.609680 containerd[1942]: time="2025-07-12T00:07:35.609436748Z" level=info msg="StartContainer for \"5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8\" returns successfully"
Jul 12 00:07:35.841885 kubelet[2746]: E0712 00:07:35.841787 2746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.176:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.176:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:07:37.034966 kubelet[2746]: I0712 00:07:37.034079 2746 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176"
Jul 12 00:07:39.507195 kubelet[2746]: E0712 00:07:39.507034 2746 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-176\" not found" node="ip-172-31-31-176"
Jul 12 00:07:39.548959 kubelet[2746]: E0712 00:07:39.548594 2746 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-176.1851585562776733 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-176,UID:ip-172-31-31-176,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-176,},FirstTimestamp:2025-07-12 00:07:33.771290419 +0000 UTC m=+0.624687520,LastTimestamp:2025-07-12 00:07:33.771290419 +0000 UTC m=+0.624687520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-176,}"
Jul 12 00:07:39.553941 kubelet[2746]: I0712 00:07:39.553903 2746 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-176"
Jul 12 00:07:39.638042 kubelet[2746]: E0712 00:07:39.637570 2746 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-176.18515855641fcc9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-176,UID:ip-172-31-31-176,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-176,},FirstTimestamp:2025-07-12 00:07:33.799103647 +0000 UTC m=+0.652500760,LastTimestamp:2025-07-12 00:07:33.799103647 +0000 UTC m=+0.652500760,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-176,}"
Jul 12 00:07:39.763740 kubelet[2746]: I0712 00:07:39.763549 2746 apiserver.go:52] "Watching apiserver"
Jul 12 00:07:39.794626 kubelet[2746]: I0712 00:07:39.794544 2746 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:07:41.623416 systemd[1]: Reloading requested from client PID 3022 ('systemctl') (unit session-7.scope)...
Jul 12 00:07:41.623940 systemd[1]: Reloading...
Jul 12 00:07:41.783517 zram_generator::config[3060]: No configuration found.
Jul 12 00:07:42.053648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:07:42.267297 systemd[1]: Reloading finished in 642 ms.
Jul 12 00:07:42.370081 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:07:42.391561 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:07:42.391980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:07:42.392061 systemd[1]: kubelet.service: Consumed 1.404s CPU time, 128.1M memory peak, 0B memory swap peak.
Jul 12 00:07:42.406766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:07:42.822773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:07:42.842086 (kubelet)[3122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:07:42.943477 kubelet[3122]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:07:42.943945 kubelet[3122]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:07:42.943945 kubelet[3122]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:07:42.943945 kubelet[3122]: I0712 00:07:42.943633 3122 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:07:42.960525 kubelet[3122]: I0712 00:07:42.958888 3122 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 12 00:07:42.960525 kubelet[3122]: I0712 00:07:42.960498 3122 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:07:42.961258 kubelet[3122]: I0712 00:07:42.960972 3122 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 12 00:07:42.964591 kubelet[3122]: I0712 00:07:42.964299 3122 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 12 00:07:42.968379 kubelet[3122]: I0712 00:07:42.968307 3122 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:07:42.994020 kubelet[3122]: E0712 00:07:42.993934 3122 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:07:42.994020 kubelet[3122]: I0712 00:07:42.994000 3122 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:07:42.997157 sudo[3137]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 12 00:07:42.998498 sudo[3137]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 12 00:07:43.000702 kubelet[3122]: I0712 00:07:43.000141 3122 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:07:43.000702 kubelet[3122]: I0712 00:07:43.000386 3122 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 12 00:07:43.000900 kubelet[3122]: I0712 00:07:43.000768 3122 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:07:43.001490 kubelet[3122]: I0712 00:07:43.000816 3122 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-176","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:07:43.001490 kubelet[3122]: I0712 00:07:43.001110 3122 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:07:43.001490 kubelet[3122]: I0712 00:07:43.001131 3122 container_manager_linux.go:300] "Creating device plugin manager"
Jul 12 00:07:43.001490 kubelet[3122]: I0712 00:07:43.001194 3122 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:07:43.005615 kubelet[3122]: I0712 00:07:43.001742 3122 kubelet.go:408] "Attempting to sync node with API server"
Jul 12 00:07:43.005615 kubelet[3122]: I0712 00:07:43.003116 3122 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:07:43.005615 kubelet[3122]: I0712 00:07:43.003176 3122 kubelet.go:314] "Adding apiserver pod source"
Jul 12 00:07:43.005615 kubelet[3122]: I0712 00:07:43.003661 3122 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:07:43.009052 kubelet[3122]: I0712 00:07:43.007723 3122 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:07:43.009052 kubelet[3122]: I0712 00:07:43.008607 3122 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:07:43.010529 kubelet[3122]: I0712 00:07:43.009297 3122 server.go:1274] "Started kubelet"
Jul 12 00:07:43.018792 kubelet[3122]: I0712 00:07:43.018739 3122 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:07:43.028883 kubelet[3122]: I0712 00:07:43.028796 3122 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:07:43.032888 kubelet[3122]: I0712 00:07:43.030416 3122 server.go:449] "Adding debug handlers to kubelet server"
Jul 12 00:07:43.037881 kubelet[3122]: I0712 00:07:43.037276 3122 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:07:43.038794 kubelet[3122]: I0712 00:07:43.038750 3122 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:07:43.039162 kubelet[3122]: I0712 00:07:43.039118 3122 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:07:43.045787 kubelet[3122]: I0712 00:07:43.045735 3122 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 12 00:07:43.050494 kubelet[3122]: E0712 00:07:43.047604 3122 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-176\" not found"
Jul 12 00:07:43.055344 kubelet[3122]: I0712 00:07:43.053807 3122 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 12 00:07:43.055344 kubelet[3122]: I0712 00:07:43.054062 3122 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:07:43.069033 kubelet[3122]: I0712 00:07:43.068635 3122 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:07:43.078042 kubelet[3122]: I0712 00:07:43.077594 3122 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:07:43.078042 kubelet[3122]: I0712 00:07:43.077644 3122 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 12 00:07:43.078042 kubelet[3122]: I0712 00:07:43.077677 3122 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 12 00:07:43.078042 kubelet[3122]: E0712 00:07:43.077752 3122 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:07:43.082566 kubelet[3122]: I0712 00:07:43.080875 3122 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:07:43.119326 kubelet[3122]: I0712 00:07:43.119269 3122 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:07:43.119326 kubelet[3122]: I0712 00:07:43.119310 3122 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:07:43.149718 kubelet[3122]: E0712 00:07:43.149665 3122 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-176\" not found"
Jul 12 00:07:43.178870 kubelet[3122]: E0712 00:07:43.178473 3122 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:07:43.282921 kubelet[3122]: I0712 00:07:43.282851 3122 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 12 00:07:43.282921 kubelet[3122]: I0712 00:07:43.282903 3122 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 12 00:07:43.283131 kubelet[3122]: I0712 00:07:43.283023 3122 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:07:43.284263 kubelet[3122]: I0712 00:07:43.283336 3122 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 12 00:07:43.284263 kubelet[3122]: I0712 00:07:43.283370 3122 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 12 00:07:43.284263 kubelet[3122]: I0712 00:07:43.283408 3122 policy_none.go:49] "None policy: Start"
Jul 12 00:07:43.288901 kubelet[3122]: I0712 00:07:43.288797 3122 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 12 00:07:43.288901 kubelet[3122]: I0712 00:07:43.288859 3122 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:07:43.290028 kubelet[3122]: I0712 00:07:43.289152 3122 state_mem.go:75] "Updated machine memory state"
Jul 12 00:07:43.302569 kubelet[3122]: I0712 00:07:43.302500 3122 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:07:43.302831 kubelet[3122]: I0712 00:07:43.302787 3122 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:07:43.302907 kubelet[3122]: I0712 00:07:43.302818 3122 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:07:43.305312 kubelet[3122]: I0712 00:07:43.305098 3122 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:07:43.408623 kubelet[3122]: E0712 00:07:43.408299 3122 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-176\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-176"
Jul 12 00:07:43.442285 kubelet[3122]: I0712 00:07:43.442246 3122 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-176"
Jul 12 00:07:43.462961 kubelet[3122]: I0712 00:07:43.462908 3122 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-176"
Jul 12 00:07:43.463138 kubelet[3122]: I0712 00:07:43.463030 3122 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-176"
Jul 12 00:07:43.476891 kubelet[3122]: I0712 00:07:43.476818 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/964064b1bd5efcfb81a7fd14496a0220-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-176\" (UID: \"964064b1bd5efcfb81a7fd14496a0220\") " pod="kube-system/kube-scheduler-ip-172-31-31-176"
Jul 12 00:07:43.476891 kubelet[3122]: I0712 00:07:43.476899 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-ca-certs\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176"
Jul 12 00:07:43.477142 kubelet[3122]: I0712 00:07:43.476943 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176"
Jul 12 00:07:43.477142 kubelet[3122]: I0712 00:07:43.476981 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76b984a022646b012e8b3e23c3ab1152-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-176\" (UID: \"76b984a022646b012e8b3e23c3ab1152\") " pod="kube-system/kube-apiserver-ip-172-31-31-176"
Jul 12 00:07:43.477142 kubelet[3122]: I0712 00:07:43.477025 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176"
Jul 12 00:07:43.477142 kubelet[3122]: I0712 00:07:43.477060 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176"
Jul 12 00:07:43.477142 kubelet[3122]: I0712 00:07:43.477096 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176"
Jul 12 00:07:43.477596 kubelet[3122]: I0712 00:07:43.477130 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176"
Jul 12 00:07:43.477596 kubelet[3122]: I0712 00:07:43.477199 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5abfe8f9d530c7172b17bc88dc2ca64-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-176\" (UID: \"e5abfe8f9d530c7172b17bc88dc2ca64\") " pod="kube-system/kube-controller-manager-ip-172-31-31-176"
Jul 12 00:07:43.997396 sudo[3137]: pam_unix(sudo:session): session closed for user root
Jul 12 00:07:44.004429 kubelet[3122]: I0712 00:07:44.004296 3122 apiserver.go:52] "Watching apiserver"
Jul 12 00:07:44.054785 kubelet[3122]: I0712 00:07:44.054683 3122 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 12 00:07:44.329604 kubelet[3122]: I0712 00:07:44.327924 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-176" podStartSLOduration=3.327900148 podStartE2EDuration="3.327900148s" podCreationTimestamp="2025-07-12 00:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:44.281673855 +0000 UTC m=+1.428907028" watchObservedRunningTime="2025-07-12 00:07:44.327900148 +0000 UTC m=+1.475133333"
Jul 12 00:07:44.365071 kubelet[3122]: I0712 00:07:44.364763 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-176" podStartSLOduration=1.364739668 podStartE2EDuration="1.364739668s" podCreationTimestamp="2025-07-12 00:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:44.328340704 +0000 UTC m=+1.475573913" watchObservedRunningTime="2025-07-12 00:07:44.364739668 +0000 UTC m=+1.511972853"
Jul 12 00:07:46.978562 kubelet[3122]: I0712 00:07:46.978306 3122 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 12 00:07:46.982142 containerd[1942]: time="2025-07-12T00:07:46.979919505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 12 00:07:46.982912 kubelet[3122]: I0712 00:07:46.981099 3122 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 12 00:07:47.081197 kubelet[3122]: I0712 00:07:47.080836 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-176" podStartSLOduration=4.080815589 podStartE2EDuration="4.080815589s" podCreationTimestamp="2025-07-12 00:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:44.367056724 +0000 UTC m=+1.514289909" watchObservedRunningTime="2025-07-12 00:07:47.080815589 +0000 UTC m=+4.228048762"
Jul 12 00:07:47.690256 systemd[1]: Created slice kubepods-besteffort-pod5def7f7b_c5e1_4bd5_9f13_c7309c4a544a.slice - libcontainer container kubepods-besteffort-pod5def7f7b_c5e1_4bd5_9f13_c7309c4a544a.slice.
Jul 12 00:07:47.701133 kubelet[3122]: I0712 00:07:47.700912 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xnk\" (UniqueName: \"kubernetes.io/projected/5def7f7b-c5e1-4bd5-9f13-c7309c4a544a-kube-api-access-95xnk\") pod \"kube-proxy-tzkdf\" (UID: \"5def7f7b-c5e1-4bd5-9f13-c7309c4a544a\") " pod="kube-system/kube-proxy-tzkdf"
Jul 12 00:07:47.701133 kubelet[3122]: I0712 00:07:47.700988 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5def7f7b-c5e1-4bd5-9f13-c7309c4a544a-kube-proxy\") pod \"kube-proxy-tzkdf\" (UID: \"5def7f7b-c5e1-4bd5-9f13-c7309c4a544a\") " pod="kube-system/kube-proxy-tzkdf"
Jul 12 00:07:47.701133 kubelet[3122]: I0712 00:07:47.701029 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5def7f7b-c5e1-4bd5-9f13-c7309c4a544a-xtables-lock\") pod \"kube-proxy-tzkdf\" (UID: \"5def7f7b-c5e1-4bd5-9f13-c7309c4a544a\") " pod="kube-system/kube-proxy-tzkdf"
Jul 12 00:07:47.701133 kubelet[3122]: I0712 00:07:47.701067 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5def7f7b-c5e1-4bd5-9f13-c7309c4a544a-lib-modules\") pod \"kube-proxy-tzkdf\" (UID: \"5def7f7b-c5e1-4bd5-9f13-c7309c4a544a\") " pod="kube-system/kube-proxy-tzkdf"
Jul 12 00:07:47.758522 systemd[1]: Created slice kubepods-burstable-podb09f8826_6df4_4da3_8509_54d7e18bd133.slice - libcontainer container kubepods-burstable-podb09f8826_6df4_4da3_8509_54d7e18bd133.slice.
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802693 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cni-path\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802770 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b09f8826-6df4-4da3-8509-54d7e18bd133-clustermesh-secrets\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802814 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-xtables-lock\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802872 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-bpf-maps\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802915 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-etc-cni-netd\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806508 kubelet[3122]: I0712 00:07:47.802953 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-lib-modules\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803078 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-run\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803117 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-hostproc\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8"
Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803157 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-config-path\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") "
pod="kube-system/cilium-twdx8" Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803195 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-net\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8" Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803249 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-hubble-tls\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8" Jul 12 00:07:47.806986 kubelet[3122]: I0712 00:07:47.803293 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmtj4\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-kube-api-access-hmtj4\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8" Jul 12 00:07:47.807378 kubelet[3122]: I0712 00:07:47.803361 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-cgroup\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8" Jul 12 00:07:47.807378 kubelet[3122]: I0712 00:07:47.803405 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-kernel\") pod \"cilium-twdx8\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " pod="kube-system/cilium-twdx8" Jul 12 00:07:47.909277 sudo[2247]: pam_unix(sudo:session): session 
closed for user root Jul 12 00:07:47.940870 sshd[2244]: pam_unix(sshd:session): session closed for user core Jul 12 00:07:47.970601 systemd[1]: sshd@6-172.31.31.176:22-139.178.89.65:59172.service: Deactivated successfully. Jul 12 00:07:47.971972 update_engine[1910]: I20250712 00:07:47.970937 1910 update_attempter.cc:509] Updating boot flags... Jul 12 00:07:47.976213 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:07:47.977783 systemd[1]: session-7.scope: Consumed 13.646s CPU time, 152.5M memory peak, 0B memory swap peak. Jul 12 00:07:47.983609 systemd-logind[1909]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:07:47.987441 systemd-logind[1909]: Removed session 7. Jul 12 00:07:48.010530 containerd[1942]: time="2025-07-12T00:07:48.009573834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzkdf,Uid:5def7f7b-c5e1-4bd5-9f13-c7309c4a544a,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:48.142684 containerd[1942]: time="2025-07-12T00:07:48.141743118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:48.142684 containerd[1942]: time="2025-07-12T00:07:48.141853266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:48.142684 containerd[1942]: time="2025-07-12T00:07:48.141900342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:48.142684 containerd[1942]: time="2025-07-12T00:07:48.142086858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:48.206442 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3229) Jul 12 00:07:48.273796 systemd[1]: Started cri-containerd-ad3396f241a69ba1f07c88e668d2e497f856a25ffd7e71a523aca73fac7bfc6d.scope - libcontainer container ad3396f241a69ba1f07c88e668d2e497f856a25ffd7e71a523aca73fac7bfc6d. Jul 12 00:07:48.368949 containerd[1942]: time="2025-07-12T00:07:48.367888616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-twdx8,Uid:b09f8826-6df4-4da3-8509-54d7e18bd133,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:48.390672 systemd[1]: Created slice kubepods-besteffort-podb94dc4f2_f930_4015_9701_64890813fbf2.slice - libcontainer container kubepods-besteffort-podb94dc4f2_f930_4015_9701_64890813fbf2.slice. Jul 12 00:07:48.419534 kubelet[3122]: I0712 00:07:48.417602 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9fnc\" (UniqueName: \"kubernetes.io/projected/b94dc4f2-f930-4015-9701-64890813fbf2-kube-api-access-j9fnc\") pod \"cilium-operator-5d85765b45-zrtwh\" (UID: \"b94dc4f2-f930-4015-9701-64890813fbf2\") " pod="kube-system/cilium-operator-5d85765b45-zrtwh" Jul 12 00:07:48.419534 kubelet[3122]: I0712 00:07:48.417709 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b94dc4f2-f930-4015-9701-64890813fbf2-cilium-config-path\") pod \"cilium-operator-5d85765b45-zrtwh\" (UID: \"b94dc4f2-f930-4015-9701-64890813fbf2\") " pod="kube-system/cilium-operator-5d85765b45-zrtwh" Jul 12 00:07:48.563288 containerd[1942]: time="2025-07-12T00:07:48.554945001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:48.563288 containerd[1942]: time="2025-07-12T00:07:48.555049437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:48.563288 containerd[1942]: time="2025-07-12T00:07:48.555086193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:48.563288 containerd[1942]: time="2025-07-12T00:07:48.555240069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:48.603528 containerd[1942]: time="2025-07-12T00:07:48.602792469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzkdf,Uid:5def7f7b-c5e1-4bd5-9f13-c7309c4a544a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad3396f241a69ba1f07c88e668d2e497f856a25ffd7e71a523aca73fac7bfc6d\"" Jul 12 00:07:48.635047 containerd[1942]: time="2025-07-12T00:07:48.633441525Z" level=info msg="CreateContainer within sandbox \"ad3396f241a69ba1f07c88e668d2e497f856a25ffd7e71a523aca73fac7bfc6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:07:48.710108 containerd[1942]: time="2025-07-12T00:07:48.709915065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zrtwh,Uid:b94dc4f2-f930-4015-9701-64890813fbf2,Namespace:kube-system,Attempt:0,}" Jul 12 00:07:48.766817 containerd[1942]: time="2025-07-12T00:07:48.765486478Z" level=info msg="CreateContainer within sandbox \"ad3396f241a69ba1f07c88e668d2e497f856a25ffd7e71a523aca73fac7bfc6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f50818619dcdb5568ed9abfb9b2c1fb7b1ce6c763a763870405dcdbdcd743b3\"" Jul 12 00:07:48.774345 containerd[1942]: time="2025-07-12T00:07:48.774082942Z" level=info msg="StartContainer for 
\"1f50818619dcdb5568ed9abfb9b2c1fb7b1ce6c763a763870405dcdbdcd743b3\"" Jul 12 00:07:48.814895 systemd[1]: Started cri-containerd-da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929.scope - libcontainer container da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929. Jul 12 00:07:48.874601 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3229) Jul 12 00:07:48.919719 containerd[1942]: time="2025-07-12T00:07:48.919097686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:07:48.919719 containerd[1942]: time="2025-07-12T00:07:48.919414438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:07:48.919719 containerd[1942]: time="2025-07-12T00:07:48.919577986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:48.921345 containerd[1942]: time="2025-07-12T00:07:48.920567110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:07:49.013153 systemd[1]: run-containerd-runc-k8s.io-d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85-runc.pLStNw.mount: Deactivated successfully. Jul 12 00:07:49.030845 systemd[1]: Started cri-containerd-1f50818619dcdb5568ed9abfb9b2c1fb7b1ce6c763a763870405dcdbdcd743b3.scope - libcontainer container 1f50818619dcdb5568ed9abfb9b2c1fb7b1ce6c763a763870405dcdbdcd743b3. Jul 12 00:07:49.035151 systemd[1]: Started cri-containerd-d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85.scope - libcontainer container d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85. 
Jul 12 00:07:49.061706 containerd[1942]: time="2025-07-12T00:07:49.061061479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-twdx8,Uid:b09f8826-6df4-4da3-8509-54d7e18bd133,Namespace:kube-system,Attempt:0,} returns sandbox id \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\"" Jul 12 00:07:49.080267 containerd[1942]: time="2025-07-12T00:07:49.076563355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:07:49.164262 containerd[1942]: time="2025-07-12T00:07:49.164202536Z" level=info msg="StartContainer for \"1f50818619dcdb5568ed9abfb9b2c1fb7b1ce6c763a763870405dcdbdcd743b3\" returns successfully" Jul 12 00:07:49.360840 containerd[1942]: time="2025-07-12T00:07:49.359731245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-zrtwh,Uid:b94dc4f2-f930-4015-9701-64890813fbf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\"" Jul 12 00:07:49.366774 kubelet[3122]: I0712 00:07:49.366167 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzkdf" podStartSLOduration=2.366145725 podStartE2EDuration="2.366145725s" podCreationTimestamp="2025-07-12 00:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:07:49.364250265 +0000 UTC m=+6.511483462" watchObservedRunningTime="2025-07-12 00:07:49.366145725 +0000 UTC m=+6.513378898" Jul 12 00:07:57.150934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548442032.mount: Deactivated successfully. 
Jul 12 00:07:59.782728 containerd[1942]: time="2025-07-12T00:07:59.782656340Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:59.784604 containerd[1942]: time="2025-07-12T00:07:59.784493372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 00:07:59.785540 containerd[1942]: time="2025-07-12T00:07:59.785430680Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:07:59.789605 containerd[1942]: time="2025-07-12T00:07:59.789440408Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.711894277s" Jul 12 00:07:59.790250 containerd[1942]: time="2025-07-12T00:07:59.790064996Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:07:59.792835 containerd[1942]: time="2025-07-12T00:07:59.792314696Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:07:59.795118 containerd[1942]: time="2025-07-12T00:07:59.794785460Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:07:59.814757 containerd[1942]: time="2025-07-12T00:07:59.814703108Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\"" Jul 12 00:07:59.816598 containerd[1942]: time="2025-07-12T00:07:59.816279260Z" level=info msg="StartContainer for \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\"" Jul 12 00:07:59.891815 systemd[1]: Started cri-containerd-5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31.scope - libcontainer container 5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31. Jul 12 00:07:59.936574 containerd[1942]: time="2025-07-12T00:07:59.936503349Z" level=info msg="StartContainer for \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\" returns successfully" Jul 12 00:07:59.962064 systemd[1]: cri-containerd-5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31.scope: Deactivated successfully. Jul 12 00:08:00.809550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31-rootfs.mount: Deactivated successfully. 
Jul 12 00:08:01.216132 containerd[1942]: time="2025-07-12T00:08:01.215867491Z" level=info msg="shim disconnected" id=5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31 namespace=k8s.io Jul 12 00:08:01.216132 containerd[1942]: time="2025-07-12T00:08:01.216114475Z" level=warning msg="cleaning up after shim disconnected" id=5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31 namespace=k8s.io Jul 12 00:08:01.216132 containerd[1942]: time="2025-07-12T00:08:01.216140215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:01.381192 containerd[1942]: time="2025-07-12T00:08:01.380907440Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:08:01.428492 containerd[1942]: time="2025-07-12T00:08:01.428408900Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\"" Jul 12 00:08:01.432320 containerd[1942]: time="2025-07-12T00:08:01.430616804Z" level=info msg="StartContainer for \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\"" Jul 12 00:08:01.497863 systemd[1]: Started cri-containerd-edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b.scope - libcontainer container edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b. Jul 12 00:08:01.566312 containerd[1942]: time="2025-07-12T00:08:01.566129505Z" level=info msg="StartContainer for \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\" returns successfully" Jul 12 00:08:01.594885 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:08:01.596228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 12 00:08:01.596379 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:08:01.609366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:08:01.612658 systemd[1]: cri-containerd-edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b.scope: Deactivated successfully. Jul 12 00:08:01.659602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:08:01.696174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b-rootfs.mount: Deactivated successfully. Jul 12 00:08:01.735269 containerd[1942]: time="2025-07-12T00:08:01.735182050Z" level=info msg="shim disconnected" id=edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b namespace=k8s.io Jul 12 00:08:01.736850 containerd[1942]: time="2025-07-12T00:08:01.736424134Z" level=warning msg="cleaning up after shim disconnected" id=edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b namespace=k8s.io Jul 12 00:08:01.736850 containerd[1942]: time="2025-07-12T00:08:01.736519078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:01.778166 containerd[1942]: time="2025-07-12T00:08:01.777409750Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:08:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 12 00:08:02.402181 containerd[1942]: time="2025-07-12T00:08:02.401968449Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:08:02.474374 containerd[1942]: time="2025-07-12T00:08:02.474007270Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} 
returns container id \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\"" Jul 12 00:08:02.476591 containerd[1942]: time="2025-07-12T00:08:02.474926194Z" level=info msg="StartContainer for \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\"" Jul 12 00:08:02.528192 containerd[1942]: time="2025-07-12T00:08:02.528122602Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:02.532382 containerd[1942]: time="2025-07-12T00:08:02.532196446Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:08:02.533913 containerd[1942]: time="2025-07-12T00:08:02.533847154Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 00:08:02.547501 containerd[1942]: time="2025-07-12T00:08:02.546802234Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.754386738s" Jul 12 00:08:02.547501 containerd[1942]: time="2025-07-12T00:08:02.546878014Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:08:02.557525 containerd[1942]: time="2025-07-12T00:08:02.557423026Z" level=info msg="CreateContainer within 
sandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:08:02.565796 systemd[1]: Started cri-containerd-72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f.scope - libcontainer container 72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f. Jul 12 00:08:02.585753 containerd[1942]: time="2025-07-12T00:08:02.585682702Z" level=info msg="CreateContainer within sandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\"" Jul 12 00:08:02.588931 containerd[1942]: time="2025-07-12T00:08:02.588626518Z" level=info msg="StartContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\"" Jul 12 00:08:02.656404 containerd[1942]: time="2025-07-12T00:08:02.656109455Z" level=info msg="StartContainer for \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\" returns successfully" Jul 12 00:08:02.656916 systemd[1]: Started cri-containerd-977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77.scope - libcontainer container 977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77. Jul 12 00:08:02.667793 systemd[1]: cri-containerd-72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f.scope: Deactivated successfully. Jul 12 00:08:02.755369 containerd[1942]: time="2025-07-12T00:08:02.755285447Z" level=info msg="StartContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" returns successfully" Jul 12 00:08:02.817272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f-rootfs.mount: Deactivated successfully. 
Jul 12 00:08:02.938880 containerd[1942]: time="2025-07-12T00:08:02.938754564Z" level=info msg="shim disconnected" id=72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f namespace=k8s.io Jul 12 00:08:02.939948 containerd[1942]: time="2025-07-12T00:08:02.939555516Z" level=warning msg="cleaning up after shim disconnected" id=72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f namespace=k8s.io Jul 12 00:08:02.939948 containerd[1942]: time="2025-07-12T00:08:02.939730740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:03.409371 containerd[1942]: time="2025-07-12T00:08:03.408042730Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:08:03.448541 containerd[1942]: time="2025-07-12T00:08:03.447269603Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\"" Jul 12 00:08:03.454542 containerd[1942]: time="2025-07-12T00:08:03.454477955Z" level=info msg="StartContainer for \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\"" Jul 12 00:08:03.566859 systemd[1]: Started cri-containerd-8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8.scope - libcontainer container 8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8. 
Jul 12 00:08:03.609608 kubelet[3122]: I0712 00:08:03.608753 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-zrtwh" podStartSLOduration=2.42394609 podStartE2EDuration="15.608557319s" podCreationTimestamp="2025-07-12 00:07:48 +0000 UTC" firstStartedPulling="2025-07-12 00:07:49.366032661 +0000 UTC m=+6.513265834" lastFinishedPulling="2025-07-12 00:08:02.55064389 +0000 UTC m=+19.697877063" observedRunningTime="2025-07-12 00:08:03.607938203 +0000 UTC m=+20.755171388" watchObservedRunningTime="2025-07-12 00:08:03.608557319 +0000 UTC m=+20.755790588" Jul 12 00:08:03.673530 containerd[1942]: time="2025-07-12T00:08:03.672728076Z" level=info msg="StartContainer for \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\" returns successfully" Jul 12 00:08:03.681306 systemd[1]: cri-containerd-8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8.scope: Deactivated successfully. Jul 12 00:08:03.756507 containerd[1942]: time="2025-07-12T00:08:03.755150652Z" level=info msg="shim disconnected" id=8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8 namespace=k8s.io Jul 12 00:08:03.756507 containerd[1942]: time="2025-07-12T00:08:03.755245740Z" level=warning msg="cleaning up after shim disconnected" id=8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8 namespace=k8s.io Jul 12 00:08:03.756507 containerd[1942]: time="2025-07-12T00:08:03.755267448Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:08:03.814067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8-rootfs.mount: Deactivated successfully. 
Jul 12 00:08:04.431545 containerd[1942]: time="2025-07-12T00:08:04.431386319Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:08:04.488416 containerd[1942]: time="2025-07-12T00:08:04.485642532Z" level=info msg="CreateContainer within sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\"" Jul 12 00:08:04.488416 containerd[1942]: time="2025-07-12T00:08:04.486449400Z" level=info msg="StartContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\"" Jul 12 00:08:04.582797 systemd[1]: Started cri-containerd-ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d.scope - libcontainer container ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d. Jul 12 00:08:04.690577 containerd[1942]: time="2025-07-12T00:08:04.690297421Z" level=info msg="StartContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" returns successfully" Jul 12 00:08:04.811726 systemd[1]: run-containerd-runc-k8s.io-ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d-runc.o0du6z.mount: Deactivated successfully. Jul 12 00:08:04.899593 kubelet[3122]: I0712 00:08:04.899006 3122 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:08:04.969105 systemd[1]: Created slice kubepods-burstable-podb724874d_f5c3_4df3_b2a9_c6b184bc8855.slice - libcontainer container kubepods-burstable-podb724874d_f5c3_4df3_b2a9_c6b184bc8855.slice. Jul 12 00:08:04.989982 systemd[1]: Created slice kubepods-burstable-pod839d01d6_d909_42d6_8587_6dfd8f1f7533.slice - libcontainer container kubepods-burstable-pod839d01d6_d909_42d6_8587_6dfd8f1f7533.slice. 
Jul 12 00:08:05.042002 kubelet[3122]: I0712 00:08:05.041809 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b724874d-f5c3-4df3-b2a9-c6b184bc8855-config-volume\") pod \"coredns-7c65d6cfc9-bszz4\" (UID: \"b724874d-f5c3-4df3-b2a9-c6b184bc8855\") " pod="kube-system/coredns-7c65d6cfc9-bszz4"
Jul 12 00:08:05.042002 kubelet[3122]: I0712 00:08:05.041881 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h94x\" (UniqueName: \"kubernetes.io/projected/b724874d-f5c3-4df3-b2a9-c6b184bc8855-kube-api-access-8h94x\") pod \"coredns-7c65d6cfc9-bszz4\" (UID: \"b724874d-f5c3-4df3-b2a9-c6b184bc8855\") " pod="kube-system/coredns-7c65d6cfc9-bszz4"
Jul 12 00:08:05.042002 kubelet[3122]: I0712 00:08:05.041928 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/839d01d6-d909-42d6-8587-6dfd8f1f7533-config-volume\") pod \"coredns-7c65d6cfc9-lscq9\" (UID: \"839d01d6-d909-42d6-8587-6dfd8f1f7533\") " pod="kube-system/coredns-7c65d6cfc9-lscq9"
Jul 12 00:08:05.042002 kubelet[3122]: I0712 00:08:05.041965 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl2qh\" (UniqueName: \"kubernetes.io/projected/839d01d6-d909-42d6-8587-6dfd8f1f7533-kube-api-access-hl2qh\") pod \"coredns-7c65d6cfc9-lscq9\" (UID: \"839d01d6-d909-42d6-8587-6dfd8f1f7533\") " pod="kube-system/coredns-7c65d6cfc9-lscq9"
Jul 12 00:08:05.281270 containerd[1942]: time="2025-07-12T00:08:05.280188336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bszz4,Uid:b724874d-f5c3-4df3-b2a9-c6b184bc8855,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:05.301146 containerd[1942]: time="2025-07-12T00:08:05.300547128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lscq9,Uid:839d01d6-d909-42d6-8587-6dfd8f1f7533,Namespace:kube-system,Attempt:0,}"
Jul 12 00:08:07.943866 systemd-networkd[1842]: cilium_host: Link UP
Jul 12 00:08:07.945701 (udev-worker)[4104]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:08:07.947856 (udev-worker)[4102]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:08:07.948266 systemd-networkd[1842]: cilium_net: Link UP
Jul 12 00:08:07.948274 systemd-networkd[1842]: cilium_net: Gained carrier
Jul 12 00:08:07.949401 systemd-networkd[1842]: cilium_host: Gained carrier
Jul 12 00:08:08.141035 (udev-worker)[4151]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:08:08.152646 systemd-networkd[1842]: cilium_vxlan: Link UP
Jul 12 00:08:08.152663 systemd-networkd[1842]: cilium_vxlan: Gained carrier
Jul 12 00:08:08.233948 systemd-networkd[1842]: cilium_net: Gained IPv6LL
Jul 12 00:08:08.806748 kernel: NET: Registered PF_ALG protocol family
Jul 12 00:08:08.882710 systemd-networkd[1842]: cilium_host: Gained IPv6LL
Jul 12 00:08:09.969795 systemd-networkd[1842]: cilium_vxlan: Gained IPv6LL
Jul 12 00:08:10.269556 systemd-networkd[1842]: lxc_health: Link UP
Jul 12 00:08:10.296649 systemd-networkd[1842]: lxc_health: Gained carrier
Jul 12 00:08:10.408560 kubelet[3122]: I0712 00:08:10.406592 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-twdx8" podStartSLOduration=12.687061688 podStartE2EDuration="23.406567073s" podCreationTimestamp="2025-07-12 00:07:47 +0000 UTC" firstStartedPulling="2025-07-12 00:07:49.072478375 +0000 UTC m=+6.219711536" lastFinishedPulling="2025-07-12 00:07:59.791983664 +0000 UTC m=+16.939216921" observedRunningTime="2025-07-12 00:08:05.514641673 +0000 UTC m=+22.661874858" watchObservedRunningTime="2025-07-12 00:08:10.406567073 +0000 UTC m=+27.553800270"
Jul 12 00:08:10.922559 kernel: eth0: renamed from tmpcdd7f
Jul 12 00:08:10.928591 systemd-networkd[1842]: lxc3449285eb587: Link UP
Jul 12 00:08:10.933902 systemd-networkd[1842]: lxc3449285eb587: Gained carrier
Jul 12 00:08:10.955099 systemd-networkd[1842]: lxcc6ed3b11883b: Link UP
Jul 12 00:08:10.967833 (udev-worker)[4152]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:08:10.974579 kernel: eth0: renamed from tmp23c32
Jul 12 00:08:10.984428 systemd-networkd[1842]: lxcc6ed3b11883b: Gained carrier
Jul 12 00:08:11.441762 systemd-networkd[1842]: lxc_health: Gained IPv6LL
Jul 12 00:08:12.401744 systemd-networkd[1842]: lxc3449285eb587: Gained IPv6LL
Jul 12 00:08:12.593742 systemd-networkd[1842]: lxcc6ed3b11883b: Gained IPv6LL
Jul 12 00:08:13.138029 kubelet[3122]: I0712 00:08:13.137249 3122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 12 00:08:14.690927 ntpd[1901]: Listen normally on 7 cilium_host 192.168.0.126:123
Jul 12 00:08:14.691068 ntpd[1901]: Listen normally on 8 cilium_net [fe80::54d9:1dff:fee4:fd64%4]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 7 cilium_host 192.168.0.126:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 8 cilium_net [fe80::54d9:1dff:fee4:fd64%4]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 9 cilium_host [fe80::3c6f:28ff:fee7:c171%5]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 10 cilium_vxlan [fe80::6c59:e8ff:feec:98ea%6]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 11 lxc_health [fe80::306b:1ff:fec7:892d%8]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 12 lxc3449285eb587 [fe80::bc73:35ff:fee7:7564%10]:123
Jul 12 00:08:14.691586 ntpd[1901]: 12 Jul 00:08:14 ntpd[1901]: Listen normally on 13 lxcc6ed3b11883b [fe80::c97:2aff:fe9c:3f6a%12]:123
Jul 12 00:08:14.691153 ntpd[1901]: Listen normally on 9 cilium_host [fe80::3c6f:28ff:fee7:c171%5]:123
Jul 12 00:08:14.691223 ntpd[1901]: Listen normally on 10 cilium_vxlan [fe80::6c59:e8ff:feec:98ea%6]:123
Jul 12 00:08:14.691291 ntpd[1901]: Listen normally on 11 lxc_health [fe80::306b:1ff:fec7:892d%8]:123
Jul 12 00:08:14.691360 ntpd[1901]: Listen normally on 12 lxc3449285eb587 [fe80::bc73:35ff:fee7:7564%10]:123
Jul 12 00:08:14.691438 ntpd[1901]: Listen normally on 13 lxcc6ed3b11883b [fe80::c97:2aff:fe9c:3f6a%12]:123
Jul 12 00:08:19.334657 containerd[1942]: time="2025-07-12T00:08:19.333138541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:19.334657 containerd[1942]: time="2025-07-12T00:08:19.333252349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:19.335728 containerd[1942]: time="2025-07-12T00:08:19.333289297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:19.337691 containerd[1942]: time="2025-07-12T00:08:19.337442509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:19.390346 systemd[1]: run-containerd-runc-k8s.io-23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53-runc.aSOk3W.mount: Deactivated successfully.
Jul 12 00:08:19.410777 systemd[1]: Started cri-containerd-23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53.scope - libcontainer container 23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53.
Jul 12 00:08:19.455550 containerd[1942]: time="2025-07-12T00:08:19.454741394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:08:19.455550 containerd[1942]: time="2025-07-12T00:08:19.454864814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:08:19.455550 containerd[1942]: time="2025-07-12T00:08:19.454903202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:19.455550 containerd[1942]: time="2025-07-12T00:08:19.455297390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:08:19.518284 systemd[1]: Started cri-containerd-cdd7f47215d6fbf2df984a3ea80bb3e32eb8e118c8cb073509e0d2869304b851.scope - libcontainer container cdd7f47215d6fbf2df984a3ea80bb3e32eb8e118c8cb073509e0d2869304b851.
Jul 12 00:08:19.577106 containerd[1942]: time="2025-07-12T00:08:19.576956523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lscq9,Uid:839d01d6-d909-42d6-8587-6dfd8f1f7533,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53\""
Jul 12 00:08:19.583967 containerd[1942]: time="2025-07-12T00:08:19.583742259Z" level=info msg="CreateContainer within sandbox \"23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:08:19.617793 containerd[1942]: time="2025-07-12T00:08:19.617388303Z" level=info msg="CreateContainer within sandbox \"23c32de03e7916509a3527dd70a7bbff0407bbec608d2343c1ec956fdd803b53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76a9e6ea77685a8978b1cc6b492d133f5e037b5ee675e98e9fbec388d8d4d7f1\""
Jul 12 00:08:19.619257 containerd[1942]: time="2025-07-12T00:08:19.619188927Z" level=info msg="StartContainer for \"76a9e6ea77685a8978b1cc6b492d133f5e037b5ee675e98e9fbec388d8d4d7f1\""
Jul 12 00:08:19.665048 containerd[1942]: time="2025-07-12T00:08:19.664939275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bszz4,Uid:b724874d-f5c3-4df3-b2a9-c6b184bc8855,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdd7f47215d6fbf2df984a3ea80bb3e32eb8e118c8cb073509e0d2869304b851\""
Jul 12 00:08:19.677708 containerd[1942]: time="2025-07-12T00:08:19.677366019Z" level=info msg="CreateContainer within sandbox \"cdd7f47215d6fbf2df984a3ea80bb3e32eb8e118c8cb073509e0d2869304b851\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:08:19.711847 systemd[1]: Started cri-containerd-76a9e6ea77685a8978b1cc6b492d133f5e037b5ee675e98e9fbec388d8d4d7f1.scope - libcontainer container 76a9e6ea77685a8978b1cc6b492d133f5e037b5ee675e98e9fbec388d8d4d7f1.
Jul 12 00:08:19.722380 containerd[1942]: time="2025-07-12T00:08:19.722285403Z" level=info msg="CreateContainer within sandbox \"cdd7f47215d6fbf2df984a3ea80bb3e32eb8e118c8cb073509e0d2869304b851\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5118b7cc54c107d2e575098cdbf3c219ebc8bd937a76f60966a40d2e136de567\""
Jul 12 00:08:19.723869 containerd[1942]: time="2025-07-12T00:08:19.723775143Z" level=info msg="StartContainer for \"5118b7cc54c107d2e575098cdbf3c219ebc8bd937a76f60966a40d2e136de567\""
Jul 12 00:08:19.807818 systemd[1]: Started cri-containerd-5118b7cc54c107d2e575098cdbf3c219ebc8bd937a76f60966a40d2e136de567.scope - libcontainer container 5118b7cc54c107d2e575098cdbf3c219ebc8bd937a76f60966a40d2e136de567.
Jul 12 00:08:19.819603 containerd[1942]: time="2025-07-12T00:08:19.818996056Z" level=info msg="StartContainer for \"76a9e6ea77685a8978b1cc6b492d133f5e037b5ee675e98e9fbec388d8d4d7f1\" returns successfully"
Jul 12 00:08:19.912447 containerd[1942]: time="2025-07-12T00:08:19.912169204Z" level=info msg="StartContainer for \"5118b7cc54c107d2e575098cdbf3c219ebc8bd937a76f60966a40d2e136de567\" returns successfully"
Jul 12 00:08:20.514918 kubelet[3122]: I0712 00:08:20.513525 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lscq9" podStartSLOduration=32.513501663 podStartE2EDuration="32.513501663s" podCreationTimestamp="2025-07-12 00:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:20.512249067 +0000 UTC m=+37.659482252" watchObservedRunningTime="2025-07-12 00:08:20.513501663 +0000 UTC m=+37.660734860"
Jul 12 00:08:20.542878 kubelet[3122]: I0712 00:08:20.541280 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bszz4" podStartSLOduration=32.541229883 podStartE2EDuration="32.541229883s" podCreationTimestamp="2025-07-12 00:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:08:20.539801631 +0000 UTC m=+37.687034828" watchObservedRunningTime="2025-07-12 00:08:20.541229883 +0000 UTC m=+37.688463116"
Jul 12 00:08:31.697991 systemd[1]: Started sshd@7-172.31.31.176:22-139.178.89.65:55712.service - OpenSSH per-connection server daemon (139.178.89.65:55712).
Jul 12 00:08:31.873302 sshd[4679]: Accepted publickey for core from 139.178.89.65 port 55712 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:31.876091 sshd[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:31.884838 systemd-logind[1909]: New session 8 of user core.
Jul 12 00:08:31.892939 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 00:08:32.163961 sshd[4679]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:32.171150 systemd[1]: sshd@7-172.31.31.176:22-139.178.89.65:55712.service: Deactivated successfully.
Jul 12 00:08:32.174366 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:08:32.175925 systemd-logind[1909]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:08:32.178211 systemd-logind[1909]: Removed session 8.
Jul 12 00:08:37.202993 systemd[1]: Started sshd@8-172.31.31.176:22-139.178.89.65:55720.service - OpenSSH per-connection server daemon (139.178.89.65:55720).
Jul 12 00:08:37.378045 sshd[4694]: Accepted publickey for core from 139.178.89.65 port 55720 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:37.380789 sshd[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:37.388684 systemd-logind[1909]: New session 9 of user core.
Jul 12 00:08:37.400737 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:08:37.641893 sshd[4694]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:37.649125 systemd[1]: sshd@8-172.31.31.176:22-139.178.89.65:55720.service: Deactivated successfully.
Jul 12 00:08:37.652901 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:08:37.654321 systemd-logind[1909]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:08:37.656628 systemd-logind[1909]: Removed session 9.
Jul 12 00:08:42.682988 systemd[1]: Started sshd@9-172.31.31.176:22-139.178.89.65:60046.service - OpenSSH per-connection server daemon (139.178.89.65:60046).
Jul 12 00:08:42.860600 sshd[4708]: Accepted publickey for core from 139.178.89.65 port 60046 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:42.863337 sshd[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:42.871178 systemd-logind[1909]: New session 10 of user core.
Jul 12 00:08:42.877762 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:08:43.121868 sshd[4708]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:43.128536 systemd[1]: sshd@9-172.31.31.176:22-139.178.89.65:60046.service: Deactivated successfully.
Jul 12 00:08:43.132585 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:08:43.134532 systemd-logind[1909]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:08:43.136701 systemd-logind[1909]: Removed session 10.
Jul 12 00:08:48.159985 systemd[1]: Started sshd@10-172.31.31.176:22-139.178.89.65:60056.service - OpenSSH per-connection server daemon (139.178.89.65:60056).
Jul 12 00:08:48.334406 sshd[4724]: Accepted publickey for core from 139.178.89.65 port 60056 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:48.337446 sshd[4724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:48.348065 systemd-logind[1909]: New session 11 of user core.
Jul 12 00:08:48.351760 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:08:48.592279 sshd[4724]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:48.599235 systemd[1]: sshd@10-172.31.31.176:22-139.178.89.65:60056.service: Deactivated successfully.
Jul 12 00:08:48.603888 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:08:48.605608 systemd-logind[1909]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:08:48.608139 systemd-logind[1909]: Removed session 11.
Jul 12 00:08:48.634966 systemd[1]: Started sshd@11-172.31.31.176:22-139.178.89.65:60072.service - OpenSSH per-connection server daemon (139.178.89.65:60072).
Jul 12 00:08:48.801287 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 60072 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:48.803947 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:48.812626 systemd-logind[1909]: New session 12 of user core.
Jul 12 00:08:48.818820 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:08:49.132946 sshd[4738]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:49.144670 systemd-logind[1909]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:08:49.148414 systemd[1]: sshd@11-172.31.31.176:22-139.178.89.65:60072.service: Deactivated successfully.
Jul 12 00:08:49.156722 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:08:49.177524 systemd-logind[1909]: Removed session 12.
Jul 12 00:08:49.186980 systemd[1]: Started sshd@12-172.31.31.176:22-139.178.89.65:60082.service - OpenSSH per-connection server daemon (139.178.89.65:60082).
Jul 12 00:08:49.373513 sshd[4749]: Accepted publickey for core from 139.178.89.65 port 60082 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:49.376760 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:49.385560 systemd-logind[1909]: New session 13 of user core.
Jul 12 00:08:49.391771 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:08:49.638888 sshd[4749]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:49.645669 systemd[1]: sshd@12-172.31.31.176:22-139.178.89.65:60082.service: Deactivated successfully.
Jul 12 00:08:49.649819 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:08:49.651559 systemd-logind[1909]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:08:49.653387 systemd-logind[1909]: Removed session 13.
Jul 12 00:08:54.680037 systemd[1]: Started sshd@13-172.31.31.176:22-139.178.89.65:39964.service - OpenSSH per-connection server daemon (139.178.89.65:39964).
Jul 12 00:08:54.861890 sshd[4765]: Accepted publickey for core from 139.178.89.65 port 39964 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:08:54.864788 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:08:54.873392 systemd-logind[1909]: New session 14 of user core.
Jul 12 00:08:54.879766 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:08:55.131066 sshd[4765]: pam_unix(sshd:session): session closed for user core
Jul 12 00:08:55.138602 systemd[1]: sshd@13-172.31.31.176:22-139.178.89.65:39964.service: Deactivated successfully.
Jul 12 00:08:55.142727 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:08:55.145182 systemd-logind[1909]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:08:55.147383 systemd-logind[1909]: Removed session 14.
Jul 12 00:09:00.170029 systemd[1]: Started sshd@14-172.31.31.176:22-139.178.89.65:43360.service - OpenSSH per-connection server daemon (139.178.89.65:43360).
Jul 12 00:09:00.343400 sshd[4778]: Accepted publickey for core from 139.178.89.65 port 43360 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:00.346120 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:00.355571 systemd-logind[1909]: New session 15 of user core.
Jul 12 00:09:00.364736 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:09:00.602493 sshd[4778]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:00.608642 systemd[1]: sshd@14-172.31.31.176:22-139.178.89.65:43360.service: Deactivated successfully.
Jul 12 00:09:00.613789 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:09:00.617183 systemd-logind[1909]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:09:00.619999 systemd-logind[1909]: Removed session 15.
Jul 12 00:09:05.642085 systemd[1]: Started sshd@15-172.31.31.176:22-139.178.89.65:43364.service - OpenSSH per-connection server daemon (139.178.89.65:43364).
Jul 12 00:09:05.807619 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 43364 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:05.810364 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:05.818133 systemd-logind[1909]: New session 16 of user core.
Jul 12 00:09:05.828754 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:09:06.071827 sshd[4791]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:06.077971 systemd-logind[1909]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:09:06.078373 systemd[1]: sshd@15-172.31.31.176:22-139.178.89.65:43364.service: Deactivated successfully.
Jul 12 00:09:06.083415 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:09:06.089338 systemd-logind[1909]: Removed session 16.
Jul 12 00:09:06.110265 systemd[1]: Started sshd@16-172.31.31.176:22-139.178.89.65:43374.service - OpenSSH per-connection server daemon (139.178.89.65:43374).
Jul 12 00:09:06.294556 sshd[4804]: Accepted publickey for core from 139.178.89.65 port 43374 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:06.297528 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:06.306349 systemd-logind[1909]: New session 17 of user core.
Jul 12 00:09:06.317753 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:09:06.650986 sshd[4804]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:06.657268 systemd[1]: sshd@16-172.31.31.176:22-139.178.89.65:43374.service: Deactivated successfully.
Jul 12 00:09:06.660999 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:09:06.662603 systemd-logind[1909]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:09:06.665396 systemd-logind[1909]: Removed session 17.
Jul 12 00:09:06.688087 systemd[1]: Started sshd@17-172.31.31.176:22-139.178.89.65:43384.service - OpenSSH per-connection server daemon (139.178.89.65:43384).
Jul 12 00:09:06.862772 sshd[4814]: Accepted publickey for core from 139.178.89.65 port 43384 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:06.865686 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:06.873534 systemd-logind[1909]: New session 18 of user core.
Jul 12 00:09:06.879759 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:09:09.190323 sshd[4814]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:09.200228 systemd[1]: sshd@17-172.31.31.176:22-139.178.89.65:43384.service: Deactivated successfully.
Jul 12 00:09:09.206483 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:09:09.214093 systemd-logind[1909]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:09:09.239287 systemd[1]: Started sshd@18-172.31.31.176:22-139.178.89.65:43398.service - OpenSSH per-connection server daemon (139.178.89.65:43398).
Jul 12 00:09:09.242646 systemd-logind[1909]: Removed session 18.
Jul 12 00:09:09.426376 sshd[4834]: Accepted publickey for core from 139.178.89.65 port 43398 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:09.429072 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:09.436713 systemd-logind[1909]: New session 19 of user core.
Jul 12 00:09:09.444095 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:09:09.931669 sshd[4834]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:09.941028 systemd[1]: sshd@18-172.31.31.176:22-139.178.89.65:43398.service: Deactivated successfully.
Jul 12 00:09:09.945546 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:09:09.948021 systemd-logind[1909]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:09:09.952057 systemd-logind[1909]: Removed session 19.
Jul 12 00:09:09.971393 systemd[1]: Started sshd@19-172.31.31.176:22-139.178.89.65:41516.service - OpenSSH per-connection server daemon (139.178.89.65:41516).
Jul 12 00:09:10.151643 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 41516 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:10.154803 sshd[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:10.162045 systemd-logind[1909]: New session 20 of user core.
Jul 12 00:09:10.173752 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:09:10.411817 sshd[4845]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:10.418224 systemd[1]: sshd@19-172.31.31.176:22-139.178.89.65:41516.service: Deactivated successfully.
Jul 12 00:09:10.422323 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:09:10.425419 systemd-logind[1909]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:09:10.427860 systemd-logind[1909]: Removed session 20.
Jul 12 00:09:15.452000 systemd[1]: Started sshd@20-172.31.31.176:22-139.178.89.65:41528.service - OpenSSH per-connection server daemon (139.178.89.65:41528).
Jul 12 00:09:15.628225 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 41528 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:15.630885 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:15.639790 systemd-logind[1909]: New session 21 of user core.
Jul 12 00:09:15.647741 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:09:15.886306 sshd[4858]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:15.893631 systemd[1]: sshd@20-172.31.31.176:22-139.178.89.65:41528.service: Deactivated successfully.
Jul 12 00:09:15.898333 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:09:15.901663 systemd-logind[1909]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:09:15.903359 systemd-logind[1909]: Removed session 21.
Jul 12 00:09:20.924922 systemd[1]: Started sshd@21-172.31.31.176:22-139.178.89.65:38854.service - OpenSSH per-connection server daemon (139.178.89.65:38854).
Jul 12 00:09:21.100976 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 38854 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:21.103679 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:21.111802 systemd-logind[1909]: New session 22 of user core.
Jul 12 00:09:21.122719 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:09:21.362079 sshd[4876]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:21.367836 systemd-logind[1909]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:09:21.368138 systemd[1]: sshd@21-172.31.31.176:22-139.178.89.65:38854.service: Deactivated successfully.
Jul 12 00:09:21.372775 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:09:21.377943 systemd-logind[1909]: Removed session 22.
Jul 12 00:09:26.402874 systemd[1]: Started sshd@22-172.31.31.176:22-139.178.89.65:38856.service - OpenSSH per-connection server daemon (139.178.89.65:38856).
Jul 12 00:09:26.582940 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 38856 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:26.585651 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:26.593560 systemd-logind[1909]: New session 23 of user core.
Jul 12 00:09:26.604764 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:09:26.849768 sshd[4890]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:26.855903 systemd-logind[1909]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:09:26.857833 systemd[1]: sshd@22-172.31.31.176:22-139.178.89.65:38856.service: Deactivated successfully.
Jul 12 00:09:26.861727 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:09:26.865879 systemd-logind[1909]: Removed session 23.
Jul 12 00:09:31.888270 systemd[1]: Started sshd@23-172.31.31.176:22-139.178.89.65:60672.service - OpenSSH per-connection server daemon (139.178.89.65:60672).
Jul 12 00:09:32.065161 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 60672 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:32.067856 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:32.076591 systemd-logind[1909]: New session 24 of user core.
Jul 12 00:09:32.083738 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 12 00:09:32.326917 sshd[4903]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:32.334315 systemd[1]: sshd@23-172.31.31.176:22-139.178.89.65:60672.service: Deactivated successfully.
Jul 12 00:09:32.338775 systemd[1]: session-24.scope: Deactivated successfully.
Jul 12 00:09:32.340654 systemd-logind[1909]: Session 24 logged out. Waiting for processes to exit.
Jul 12 00:09:32.344567 systemd-logind[1909]: Removed session 24.
Jul 12 00:09:32.365027 systemd[1]: Started sshd@24-172.31.31.176:22-139.178.89.65:60680.service - OpenSSH per-connection server daemon (139.178.89.65:60680).
Jul 12 00:09:32.546385 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 60680 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q
Jul 12 00:09:32.549037 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:09:32.557898 systemd-logind[1909]: New session 25 of user core.
Jul 12 00:09:32.562745 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 00:09:35.646089 containerd[1942]: time="2025-07-12T00:09:35.645310360Z" level=info msg="StopContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" with timeout 30 (s)"
Jul 12 00:09:35.653606 containerd[1942]: time="2025-07-12T00:09:35.652796921Z" level=info msg="Stop container \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" with signal terminated"
Jul 12 00:09:35.682310 systemd[1]: cri-containerd-977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77.scope: Deactivated successfully.
Jul 12 00:09:35.690609 containerd[1942]: time="2025-07-12T00:09:35.689655197Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:09:35.708568 containerd[1942]: time="2025-07-12T00:09:35.708502205Z" level=info msg="StopContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" with timeout 2 (s)"
Jul 12 00:09:35.714580 containerd[1942]: time="2025-07-12T00:09:35.713437481Z" level=info msg="Stop container \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" with signal terminated"
Jul 12 00:09:35.738901 systemd-networkd[1842]: lxc_health: Link DOWN
Jul 12 00:09:35.738923 systemd-networkd[1842]: lxc_health: Lost carrier
Jul 12 00:09:35.772772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77-rootfs.mount: Deactivated successfully.
Jul 12 00:09:35.776193 systemd[1]: cri-containerd-ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d.scope: Deactivated successfully.
Jul 12 00:09:35.777308 systemd[1]: cri-containerd-ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d.scope: Consumed 14.990s CPU time.
Jul 12 00:09:35.786500 containerd[1942]: time="2025-07-12T00:09:35.785337041Z" level=info msg="shim disconnected" id=977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77 namespace=k8s.io
Jul 12 00:09:35.786500 containerd[1942]: time="2025-07-12T00:09:35.785426225Z" level=warning msg="cleaning up after shim disconnected" id=977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77 namespace=k8s.io
Jul 12 00:09:35.786500 containerd[1942]: time="2025-07-12T00:09:35.785448701Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:09:35.823767 containerd[1942]: time="2025-07-12T00:09:35.823714817Z" level=info msg="StopContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" returns successfully"
Jul 12 00:09:35.825415 containerd[1942]: time="2025-07-12T00:09:35.825347861Z" level=info msg="StopPodSandbox for \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\""
Jul 12 00:09:35.825604 containerd[1942]: time="2025-07-12T00:09:35.825425441Z" level=info msg="Container to stop \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:09:35.829526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85-shm.mount: Deactivated successfully.
Jul 12 00:09:35.847550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d-rootfs.mount: Deactivated successfully.
Jul 12 00:09:35.851320 containerd[1942]: time="2025-07-12T00:09:35.851243789Z" level=info msg="shim disconnected" id=ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d namespace=k8s.io Jul 12 00:09:35.851788 containerd[1942]: time="2025-07-12T00:09:35.851750333Z" level=warning msg="cleaning up after shim disconnected" id=ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d namespace=k8s.io Jul 12 00:09:35.852035 containerd[1942]: time="2025-07-12T00:09:35.852006989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:35.856808 systemd[1]: cri-containerd-d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85.scope: Deactivated successfully. Jul 12 00:09:35.889875 containerd[1942]: time="2025-07-12T00:09:35.889803618Z" level=info msg="StopContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" returns successfully" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.890905758Z" level=info msg="StopPodSandbox for \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\"" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.890971722Z" level=info msg="Container to stop \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.890998134Z" level=info msg="Container to stop \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.891021390Z" level=info msg="Container to stop \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.891044850Z" level=info msg="Container to stop 
\"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:35.891226 containerd[1942]: time="2025-07-12T00:09:35.891067626Z" level=info msg="Container to stop \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:09:35.908590 systemd[1]: cri-containerd-da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929.scope: Deactivated successfully. Jul 12 00:09:35.920641 containerd[1942]: time="2025-07-12T00:09:35.920506290Z" level=info msg="shim disconnected" id=d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85 namespace=k8s.io Jul 12 00:09:35.921367 containerd[1942]: time="2025-07-12T00:09:35.920962566Z" level=warning msg="cleaning up after shim disconnected" id=d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85 namespace=k8s.io Jul 12 00:09:35.921367 containerd[1942]: time="2025-07-12T00:09:35.921025974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:35.955293 containerd[1942]: time="2025-07-12T00:09:35.955215114Z" level=info msg="TearDown network for sandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" successfully" Jul 12 00:09:35.955293 containerd[1942]: time="2025-07-12T00:09:35.955276914Z" level=info msg="StopPodSandbox for \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" returns successfully" Jul 12 00:09:35.971282 containerd[1942]: time="2025-07-12T00:09:35.970993062Z" level=info msg="shim disconnected" id=da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929 namespace=k8s.io Jul 12 00:09:35.971588 containerd[1942]: time="2025-07-12T00:09:35.971219274Z" level=warning msg="cleaning up after shim disconnected" id=da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929 namespace=k8s.io Jul 12 00:09:35.971588 containerd[1942]: 
time="2025-07-12T00:09:35.971411178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:36.003949 containerd[1942]: time="2025-07-12T00:09:36.003731114Z" level=info msg="TearDown network for sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" successfully" Jul 12 00:09:36.003949 containerd[1942]: time="2025-07-12T00:09:36.003781142Z" level=info msg="StopPodSandbox for \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" returns successfully" Jul 12 00:09:36.134738 kubelet[3122]: I0712 00:09:36.134654 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-kernel\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.134738 kubelet[3122]: I0712 00:09:36.134728 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-run\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134765 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-net\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134809 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b09f8826-6df4-4da3-8509-54d7e18bd133-clustermesh-secrets\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134842 3122 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-hostproc\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134874 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-bpf-maps\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134911 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-etc-cni-netd\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135393 kubelet[3122]: I0712 00:09:36.134941 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-lib-modules\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.134973 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cni-path\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.135010 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b94dc4f2-f930-4015-9701-64890813fbf2-cilium-config-path\") pod \"b94dc4f2-f930-4015-9701-64890813fbf2\" (UID: 
\"b94dc4f2-f930-4015-9701-64890813fbf2\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.135048 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-xtables-lock\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.135087 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmtj4\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-kube-api-access-hmtj4\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.135131 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-config-path\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.135775 kubelet[3122]: I0712 00:09:36.135168 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-cgroup\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.136119 kubelet[3122]: I0712 00:09:36.135208 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-hubble-tls\") pod \"b09f8826-6df4-4da3-8509-54d7e18bd133\" (UID: \"b09f8826-6df4-4da3-8509-54d7e18bd133\") " Jul 12 00:09:36.136119 kubelet[3122]: I0712 00:09:36.135245 3122 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9fnc\" (UniqueName: 
\"kubernetes.io/projected/b94dc4f2-f930-4015-9701-64890813fbf2-kube-api-access-j9fnc\") pod \"b94dc4f2-f930-4015-9701-64890813fbf2\" (UID: \"b94dc4f2-f930-4015-9701-64890813fbf2\") " Jul 12 00:09:36.138547 kubelet[3122]: I0712 00:09:36.136349 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.138547 kubelet[3122]: I0712 00:09:36.136441 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.138547 kubelet[3122]: I0712 00:09:36.136526 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.138547 kubelet[3122]: I0712 00:09:36.136565 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.138547 kubelet[3122]: I0712 00:09:36.137705 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cni-path" (OuterVolumeSpecName: "cni-path") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.138911 kubelet[3122]: I0712 00:09:36.138537 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.139217 kubelet[3122]: I0712 00:09:36.139178 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-hostproc" (OuterVolumeSpecName: "hostproc") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.140000 kubelet[3122]: I0712 00:09:36.139433 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.140189 kubelet[3122]: I0712 00:09:36.139674 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.140697 kubelet[3122]: I0712 00:09:36.140646 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:09:36.150337 kubelet[3122]: I0712 00:09:36.150278 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b09f8826-6df4-4da3-8509-54d7e18bd133-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:09:36.152829 kubelet[3122]: I0712 00:09:36.152749 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94dc4f2-f930-4015-9701-64890813fbf2-kube-api-access-j9fnc" (OuterVolumeSpecName: "kube-api-access-j9fnc") pod "b94dc4f2-f930-4015-9701-64890813fbf2" (UID: "b94dc4f2-f930-4015-9701-64890813fbf2"). InnerVolumeSpecName "kube-api-access-j9fnc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:09:36.155112 kubelet[3122]: I0712 00:09:36.155045 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-kube-api-access-hmtj4" (OuterVolumeSpecName: "kube-api-access-hmtj4") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "kube-api-access-hmtj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:09:36.162398 kubelet[3122]: I0712 00:09:36.159402 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:09:36.162398 kubelet[3122]: I0712 00:09:36.159777 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b09f8826-6df4-4da3-8509-54d7e18bd133" (UID: "b09f8826-6df4-4da3-8509-54d7e18bd133"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:09:36.162398 kubelet[3122]: I0712 00:09:36.162193 3122 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94dc4f2-f930-4015-9701-64890813fbf2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b94dc4f2-f930-4015-9701-64890813fbf2" (UID: "b94dc4f2-f930-4015-9701-64890813fbf2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236553 3122 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-kernel\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236600 3122 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-run\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236622 3122 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-host-proc-sys-net\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236644 3122 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b09f8826-6df4-4da3-8509-54d7e18bd133-clustermesh-secrets\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236666 3122 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-hostproc\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236686 3122 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-lib-modules\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.236962 kubelet[3122]: I0712 00:09:36.236706 3122 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-bpf-maps\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 
00:09:36.236962 kubelet[3122]: I0712 00:09:36.236726 3122 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-etc-cni-netd\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236745 3122 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cni-path\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236765 3122 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b94dc4f2-f930-4015-9701-64890813fbf2-cilium-config-path\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236785 3122 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmtj4\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-kube-api-access-hmtj4\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236812 3122 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-config-path\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236854 3122 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-cilium-cgroup\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236875 3122 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b09f8826-6df4-4da3-8509-54d7e18bd133-xtables-lock\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 
00:09:36.236895 3122 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b09f8826-6df4-4da3-8509-54d7e18bd133-hubble-tls\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.237510 kubelet[3122]: I0712 00:09:36.236918 3122 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9fnc\" (UniqueName: \"kubernetes.io/projected/b94dc4f2-f930-4015-9701-64890813fbf2-kube-api-access-j9fnc\") on node \"ip-172-31-31-176\" DevicePath \"\"" Jul 12 00:09:36.645321 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85-rootfs.mount: Deactivated successfully. Jul 12 00:09:36.645586 systemd[1]: var-lib-kubelet-pods-b94dc4f2\x2df930\x2d4015\x2d9701\x2d64890813fbf2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9fnc.mount: Deactivated successfully. Jul 12 00:09:36.645734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929-rootfs.mount: Deactivated successfully. Jul 12 00:09:36.645864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929-shm.mount: Deactivated successfully. Jul 12 00:09:36.646021 systemd[1]: var-lib-kubelet-pods-b09f8826\x2d6df4\x2d4da3\x2d8509\x2d54d7e18bd133-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhmtj4.mount: Deactivated successfully. Jul 12 00:09:36.646183 systemd[1]: var-lib-kubelet-pods-b09f8826\x2d6df4\x2d4da3\x2d8509\x2d54d7e18bd133-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:09:36.646323 systemd[1]: var-lib-kubelet-pods-b09f8826\x2d6df4\x2d4da3\x2d8509\x2d54d7e18bd133-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 12 00:09:36.723598 kubelet[3122]: I0712 00:09:36.722717 3122 scope.go:117] "RemoveContainer" containerID="977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77" Jul 12 00:09:36.729328 containerd[1942]: time="2025-07-12T00:09:36.729249282Z" level=info msg="RemoveContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\"" Jul 12 00:09:36.752489 containerd[1942]: time="2025-07-12T00:09:36.752005830Z" level=info msg="RemoveContainer for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" returns successfully" Jul 12 00:09:36.755088 systemd[1]: Removed slice kubepods-besteffort-podb94dc4f2_f930_4015_9701_64890813fbf2.slice - libcontainer container kubepods-besteffort-podb94dc4f2_f930_4015_9701_64890813fbf2.slice. Jul 12 00:09:36.768824 kubelet[3122]: I0712 00:09:36.768753 3122 scope.go:117] "RemoveContainer" containerID="977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77" Jul 12 00:09:36.769316 containerd[1942]: time="2025-07-12T00:09:36.769240458Z" level=error msg="ContainerStatus for \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\": not found" Jul 12 00:09:36.769987 kubelet[3122]: E0712 00:09:36.769652 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\": not found" containerID="977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77" Jul 12 00:09:36.770277 kubelet[3122]: I0712 00:09:36.770155 3122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77"} err="failed to get container status 
\"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\": rpc error: code = NotFound desc = an error occurred when try to find container \"977fc9245a33afede84d22a1483cb1f072d2d059b507cac8b6d2e9618a4b3d77\": not found" Jul 12 00:09:36.770392 kubelet[3122]: I0712 00:09:36.770371 3122 scope.go:117] "RemoveContainer" containerID="ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d" Jul 12 00:09:36.776193 containerd[1942]: time="2025-07-12T00:09:36.775701990Z" level=info msg="RemoveContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\"" Jul 12 00:09:36.776177 systemd[1]: Removed slice kubepods-burstable-podb09f8826_6df4_4da3_8509_54d7e18bd133.slice - libcontainer container kubepods-burstable-podb09f8826_6df4_4da3_8509_54d7e18bd133.slice. Jul 12 00:09:36.776404 systemd[1]: kubepods-burstable-podb09f8826_6df4_4da3_8509_54d7e18bd133.slice: Consumed 15.157s CPU time. Jul 12 00:09:36.784492 containerd[1942]: time="2025-07-12T00:09:36.784410870Z" level=info msg="RemoveContainer for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" returns successfully" Jul 12 00:09:36.785615 kubelet[3122]: I0712 00:09:36.785531 3122 scope.go:117] "RemoveContainer" containerID="8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8" Jul 12 00:09:36.791600 containerd[1942]: time="2025-07-12T00:09:36.791518554Z" level=info msg="RemoveContainer for \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\"" Jul 12 00:09:36.798073 containerd[1942]: time="2025-07-12T00:09:36.797876142Z" level=info msg="RemoveContainer for \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\" returns successfully" Jul 12 00:09:36.798803 kubelet[3122]: I0712 00:09:36.798752 3122 scope.go:117] "RemoveContainer" containerID="72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f" Jul 12 00:09:36.804734 containerd[1942]: time="2025-07-12T00:09:36.804446850Z" level=info msg="RemoveContainer for 
\"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\"" Jul 12 00:09:36.811214 containerd[1942]: time="2025-07-12T00:09:36.811062750Z" level=info msg="RemoveContainer for \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\" returns successfully" Jul 12 00:09:36.812049 kubelet[3122]: I0712 00:09:36.811998 3122 scope.go:117] "RemoveContainer" containerID="edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b" Jul 12 00:09:36.814619 containerd[1942]: time="2025-07-12T00:09:36.814527330Z" level=info msg="RemoveContainer for \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\"" Jul 12 00:09:36.818437 containerd[1942]: time="2025-07-12T00:09:36.818346558Z" level=info msg="RemoveContainer for \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\" returns successfully" Jul 12 00:09:36.818850 kubelet[3122]: I0712 00:09:36.818809 3122 scope.go:117] "RemoveContainer" containerID="5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31" Jul 12 00:09:36.822592 containerd[1942]: time="2025-07-12T00:09:36.822499818Z" level=info msg="RemoveContainer for \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\"" Jul 12 00:09:36.830193 containerd[1942]: time="2025-07-12T00:09:36.830109282Z" level=info msg="RemoveContainer for \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\" returns successfully" Jul 12 00:09:36.831359 kubelet[3122]: I0712 00:09:36.830638 3122 scope.go:117] "RemoveContainer" containerID="ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d" Jul 12 00:09:36.833763 containerd[1942]: time="2025-07-12T00:09:36.833622510Z" level=error msg="ContainerStatus for \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\": not found" Jul 12 00:09:36.834368 kubelet[3122]: E0712 
00:09:36.833984 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\": not found" containerID="ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d" Jul 12 00:09:36.834368 kubelet[3122]: I0712 00:09:36.834044 3122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d"} err="failed to get container status \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef3527cfd1cd80ee62c482a14e44c27302ce64e70bf8403bbea0e9f8c738629d\": not found" Jul 12 00:09:36.834368 kubelet[3122]: I0712 00:09:36.834087 3122 scope.go:117] "RemoveContainer" containerID="8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8" Jul 12 00:09:36.835305 containerd[1942]: time="2025-07-12T00:09:36.835219818Z" level=error msg="ContainerStatus for \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\": not found" Jul 12 00:09:36.835753 kubelet[3122]: E0712 00:09:36.835708 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\": not found" containerID="8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8" Jul 12 00:09:36.835892 kubelet[3122]: I0712 00:09:36.835768 3122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8"} err="failed to get container status 
\"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f08568aa5a5cbc3209ac8aceb16c765284858fc4a9ca27314ad656bdbd44ab8\": not found" Jul 12 00:09:36.835892 kubelet[3122]: I0712 00:09:36.835833 3122 scope.go:117] "RemoveContainer" containerID="72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f" Jul 12 00:09:36.836790 containerd[1942]: time="2025-07-12T00:09:36.836588598Z" level=error msg="ContainerStatus for \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\": not found" Jul 12 00:09:36.837100 kubelet[3122]: E0712 00:09:36.836988 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\": not found" containerID="72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f" Jul 12 00:09:36.837100 kubelet[3122]: I0712 00:09:36.837040 3122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f"} err="failed to get container status \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"72e59acd387d05575d4b6d614ed2b3e73f81591b8642d37e456a668e05e50e9f\": not found" Jul 12 00:09:36.837100 kubelet[3122]: I0712 00:09:36.837078 3122 scope.go:117] "RemoveContainer" containerID="edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b" Jul 12 00:09:36.837882 containerd[1942]: time="2025-07-12T00:09:36.837594066Z" level=error msg="ContainerStatus for \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\": not found" Jul 12 00:09:36.838478 kubelet[3122]: E0712 00:09:36.838031 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\": not found" containerID="edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b" Jul 12 00:09:36.838478 kubelet[3122]: I0712 00:09:36.838235 3122 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b"} err="failed to get container status \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"edbf3a4a433df0fa2191e8a82eeb237e619aafdc355129832ee8e63f2a8e0a6b\": not found" Jul 12 00:09:36.838478 kubelet[3122]: I0712 00:09:36.838275 3122 scope.go:117] "RemoveContainer" containerID="5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31" Jul 12 00:09:36.838792 containerd[1942]: time="2025-07-12T00:09:36.838699398Z" level=error msg="ContainerStatus for \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\": not found" Jul 12 00:09:36.839011 kubelet[3122]: E0712 00:09:36.838962 3122 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\": not found" containerID="5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31" Jul 12 00:09:36.839160 kubelet[3122]: I0712 00:09:36.839018 3122 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31"} err="failed to get container status \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\": rpc error: code = NotFound desc = an error occurred when try to find container \"5820637da0a72bf0d574c61e87ce95e2e08ee843ee3bea1885206a8fff4a5b31\": not found" Jul 12 00:09:37.083884 kubelet[3122]: I0712 00:09:37.083425 3122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" path="/var/lib/kubelet/pods/b09f8826-6df4-4da3-8509-54d7e18bd133/volumes" Jul 12 00:09:37.085207 kubelet[3122]: I0712 00:09:37.085149 3122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94dc4f2-f930-4015-9701-64890813fbf2" path="/var/lib/kubelet/pods/b94dc4f2-f930-4015-9701-64890813fbf2/volumes" Jul 12 00:09:37.571807 sshd[4915]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:37.579441 systemd[1]: sshd@24-172.31.31.176:22-139.178.89.65:60680.service: Deactivated successfully. Jul 12 00:09:37.582875 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:09:37.583307 systemd[1]: session-25.scope: Consumed 2.311s CPU time. Jul 12 00:09:37.584949 systemd-logind[1909]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:09:37.587358 systemd-logind[1909]: Removed session 25. Jul 12 00:09:37.611995 systemd[1]: Started sshd@25-172.31.31.176:22-139.178.89.65:60688.service - OpenSSH per-connection server daemon (139.178.89.65:60688). Jul 12 00:09:37.793593 sshd[5084]: Accepted publickey for core from 139.178.89.65 port 60688 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:37.796874 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:37.805592 systemd-logind[1909]: New session 26 of user core. 
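The repeated `NotFound` errors above are kubelet confirming that containers it just removed are indeed gone: a `ContainerStatus` RPC that answers `NotFound` after a delete is treated as "already removed" rather than a hard failure. A minimal sketch of that idempotent-delete pattern, with hypothetical names (`CRIError`, `remove_container`) that only illustrate the shape of the real CRI calls:

```python
# Hypothetical sketch of the idempotent-delete pattern visible in the log:
# after removing a container, the status check is expected to fail with
# NotFound, and that outcome is logged and treated as success.

class CRIError(Exception):
    def __init__(self, code, desc):
        super().__init__(f"rpc error: code = {code} desc = {desc}")
        self.code = code

def container_status(store, container_id):
    """Mimics the runtime's ContainerStatus RPC against an in-memory store."""
    if container_id not in store:
        raise CRIError("NotFound",
                       f'an error occurred when try to find container "{container_id}": not found')
    return store[container_id]

def remove_container(store, container_id):
    """Delete, then verify; NotFound on verify means removal already succeeded."""
    store.pop(container_id, None)
    try:
        container_status(store, container_id)
    except CRIError as e:
        if e.code == "NotFound":
            return True   # already gone: log it and move on, as kubelet does above
        raise
    return False
```

The same pattern explains why the log shows an `error` line from containerd followed by kubelet continuing normally: the error is the expected terminal state of the delete.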
Jul 12 00:09:37.811737 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 12 00:09:38.345267 kubelet[3122]: E0712 00:09:38.345206 3122 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:09:38.690961 ntpd[1901]: Deleting interface #11 lxc_health, fe80::306b:1ff:fec7:892d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Jul 12 00:09:38.691574 ntpd[1901]: 12 Jul 00:09:38 ntpd[1901]: Deleting interface #11 lxc_health, fe80::306b:1ff:fec7:892d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Jul 12 00:09:39.479822 sshd[5084]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.484889 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" containerName="cilium-agent" Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.484947 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" containerName="mount-cgroup" Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.484964 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b94dc4f2-f930-4015-9701-64890813fbf2" containerName="cilium-operator" Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.484980 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" containerName="apply-sysctl-overwrites" Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.484994 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" containerName="mount-bpf-fs" Jul 12 00:09:39.488789 kubelet[3122]: E0712 00:09:39.485012 3122 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" 
containerName="clean-cilium-state" Jul 12 00:09:39.488789 kubelet[3122]: I0712 00:09:39.485055 3122 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09f8826-6df4-4da3-8509-54d7e18bd133" containerName="cilium-agent" Jul 12 00:09:39.488789 kubelet[3122]: I0712 00:09:39.485072 3122 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94dc4f2-f930-4015-9701-64890813fbf2" containerName="cilium-operator" Jul 12 00:09:39.494300 systemd[1]: session-26.scope: Deactivated successfully. Jul 12 00:09:39.494792 systemd[1]: session-26.scope: Consumed 1.468s CPU time. Jul 12 00:09:39.522795 systemd[1]: sshd@25-172.31.31.176:22-139.178.89.65:60688.service: Deactivated successfully. Jul 12 00:09:39.533253 systemd-logind[1909]: Session 26 logged out. Waiting for processes to exit. Jul 12 00:09:39.550996 systemd[1]: Started sshd@26-172.31.31.176:22-139.178.89.65:60690.service - OpenSSH per-connection server daemon (139.178.89.65:60690). Jul 12 00:09:39.553764 systemd-logind[1909]: Removed session 26. Jul 12 00:09:39.570310 systemd[1]: Created slice kubepods-burstable-pod29dee821_40a2_4d8a_991d_ec3cf3662896.slice - libcontainer container kubepods-burstable-pod29dee821_40a2_4d8a_991d_ec3cf3662896.slice. 
Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658620 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-xtables-lock\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658693 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29dee821-40a2-4d8a-991d-ec3cf3662896-clustermesh-secrets\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658734 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29dee821-40a2-4d8a-991d-ec3cf3662896-cilium-config-path\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658778 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-etc-cni-netd\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658815 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-lib-modules\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.658902 kubelet[3122]: I0712 00:09:39.658860 3122 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-cilium-cgroup\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.658894 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-cni-path\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.658930 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-bpf-maps\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.658968 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-cilium-run\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.659007 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/29dee821-40a2-4d8a-991d-ec3cf3662896-cilium-ipsec-secrets\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.659043 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqz2\" (UniqueName: 
\"kubernetes.io/projected/29dee821-40a2-4d8a-991d-ec3cf3662896-kube-api-access-lrqz2\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659337 kubelet[3122]: I0712 00:09:39.659081 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-host-proc-sys-kernel\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659678 kubelet[3122]: I0712 00:09:39.659118 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-hostproc\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659678 kubelet[3122]: I0712 00:09:39.659155 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29dee821-40a2-4d8a-991d-ec3cf3662896-host-proc-sys-net\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.659678 kubelet[3122]: I0712 00:09:39.659187 3122 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29dee821-40a2-4d8a-991d-ec3cf3662896-hubble-tls\") pod \"cilium-4xwqd\" (UID: \"29dee821-40a2-4d8a-991d-ec3cf3662896\") " pod="kube-system/cilium-4xwqd" Jul 12 00:09:39.753488 sshd[5095]: Accepted publickey for core from 139.178.89.65 port 60690 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:39.753196 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:39.765445 systemd-logind[1909]: 
New session 27 of user core. Jul 12 00:09:39.786245 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 12 00:09:39.885042 containerd[1942]: time="2025-07-12T00:09:39.884978518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xwqd,Uid:29dee821-40a2-4d8a-991d-ec3cf3662896,Namespace:kube-system,Attempt:0,}" Jul 12 00:09:39.922154 containerd[1942]: time="2025-07-12T00:09:39.921985414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:09:39.922762 containerd[1942]: time="2025-07-12T00:09:39.922099750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:09:39.922762 containerd[1942]: time="2025-07-12T00:09:39.922154206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:39.922762 containerd[1942]: time="2025-07-12T00:09:39.922313242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:09:39.942142 sshd[5095]: pam_unix(sshd:session): session closed for user core Jul 12 00:09:39.957633 systemd[1]: sshd@26-172.31.31.176:22-139.178.89.65:60690.service: Deactivated successfully. Jul 12 00:09:39.963400 systemd[1]: session-27.scope: Deactivated successfully. Jul 12 00:09:39.984378 systemd-logind[1909]: Session 27 logged out. Waiting for processes to exit. Jul 12 00:09:39.989806 systemd[1]: Started cri-containerd-d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d.scope - libcontainer container d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d. Jul 12 00:09:39.993980 systemd[1]: Started sshd@27-172.31.31.176:22-139.178.89.65:37114.service - OpenSSH per-connection server daemon (139.178.89.65:37114). Jul 12 00:09:40.001502 systemd-logind[1909]: Removed session 27. 
Jul 12 00:09:40.046431 containerd[1942]: time="2025-07-12T00:09:40.046360590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4xwqd,Uid:29dee821-40a2-4d8a-991d-ec3cf3662896,Namespace:kube-system,Attempt:0,} returns sandbox id \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\"" Jul 12 00:09:40.053410 containerd[1942]: time="2025-07-12T00:09:40.053356758Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:09:40.068715 containerd[1942]: time="2025-07-12T00:09:40.068656062Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b\"" Jul 12 00:09:40.069741 containerd[1942]: time="2025-07-12T00:09:40.069612234Z" level=info msg="StartContainer for \"ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b\"" Jul 12 00:09:40.113822 systemd[1]: Started cri-containerd-ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b.scope - libcontainer container ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b. Jul 12 00:09:40.158032 containerd[1942]: time="2025-07-12T00:09:40.157834543Z" level=info msg="StartContainer for \"ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b\" returns successfully" Jul 12 00:09:40.178336 systemd[1]: cri-containerd-ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b.scope: Deactivated successfully. 
Jul 12 00:09:40.198615 sshd[5135]: Accepted publickey for core from 139.178.89.65 port 37114 ssh2: RSA SHA256:rqVc07ZHJYS8k/+pkEfeFkMPPbocnthwPTDCiAXji4Q Jul 12 00:09:40.201840 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:09:40.211844 systemd-logind[1909]: New session 28 of user core. Jul 12 00:09:40.218793 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 12 00:09:40.239006 containerd[1942]: time="2025-07-12T00:09:40.238930351Z" level=info msg="shim disconnected" id=ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b namespace=k8s.io Jul 12 00:09:40.239636 containerd[1942]: time="2025-07-12T00:09:40.239329291Z" level=warning msg="cleaning up after shim disconnected" id=ee7b7d1aa7e345c7875f06fbdc17e044cca7f1a736c20d9d994fb91bf0e5719b namespace=k8s.io Jul 12 00:09:40.239636 containerd[1942]: time="2025-07-12T00:09:40.239359951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:40.759211 containerd[1942]: time="2025-07-12T00:09:40.759130654Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:09:40.779498 containerd[1942]: time="2025-07-12T00:09:40.779386318Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8\"" Jul 12 00:09:40.782523 containerd[1942]: time="2025-07-12T00:09:40.781762630Z" level=info msg="StartContainer for \"b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8\"" Jul 12 00:09:40.795628 systemd[1]: run-containerd-runc-k8s.io-d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d-runc.kFvB09.mount: Deactivated successfully. 
Jul 12 00:09:40.849786 systemd[1]: Started cri-containerd-b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8.scope - libcontainer container b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8. Jul 12 00:09:40.898829 containerd[1942]: time="2025-07-12T00:09:40.898736843Z" level=info msg="StartContainer for \"b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8\" returns successfully" Jul 12 00:09:40.913860 systemd[1]: cri-containerd-b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8.scope: Deactivated successfully. Jul 12 00:09:40.952782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8-rootfs.mount: Deactivated successfully. Jul 12 00:09:40.955584 containerd[1942]: time="2025-07-12T00:09:40.955439807Z" level=info msg="shim disconnected" id=b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8 namespace=k8s.io Jul 12 00:09:40.955776 containerd[1942]: time="2025-07-12T00:09:40.955581779Z" level=warning msg="cleaning up after shim disconnected" id=b9458e61f4cb019ed56c683c13669f229641007dffbf9f2e52d69cf16fae5cf8 namespace=k8s.io Jul 12 00:09:40.955776 containerd[1942]: time="2025-07-12T00:09:40.955604423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:41.768120 containerd[1942]: time="2025-07-12T00:09:41.767880083Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:09:41.801601 containerd[1942]: time="2025-07-12T00:09:41.801446963Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9\"" Jul 12 00:09:41.803979 containerd[1942]: time="2025-07-12T00:09:41.803908619Z" 
level=info msg="StartContainer for \"2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9\"" Jul 12 00:09:41.866174 systemd[1]: Started cri-containerd-2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9.scope - libcontainer container 2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9. Jul 12 00:09:41.930138 containerd[1942]: time="2025-07-12T00:09:41.930056544Z" level=info msg="StartContainer for \"2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9\" returns successfully" Jul 12 00:09:41.937042 systemd[1]: cri-containerd-2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9.scope: Deactivated successfully. Jul 12 00:09:41.979680 containerd[1942]: time="2025-07-12T00:09:41.979598088Z" level=info msg="shim disconnected" id=2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9 namespace=k8s.io Jul 12 00:09:41.980146 containerd[1942]: time="2025-07-12T00:09:41.979678572Z" level=warning msg="cleaning up after shim disconnected" id=2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9 namespace=k8s.io Jul 12 00:09:41.980146 containerd[1942]: time="2025-07-12T00:09:41.979701504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:42.776216 containerd[1942]: time="2025-07-12T00:09:42.775832412Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:09:42.791689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2072f8cf39e26f8f4f22b24ff57cf9f5a8ce2adafe87e8fc21a8af211433afb9-rootfs.mount: Deactivated successfully. 
Jul 12 00:09:42.803906 containerd[1942]: time="2025-07-12T00:09:42.803816976Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8\"" Jul 12 00:09:42.804996 containerd[1942]: time="2025-07-12T00:09:42.804742056Z" level=info msg="StartContainer for \"27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8\"" Jul 12 00:09:42.869594 systemd[1]: Started cri-containerd-27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8.scope - libcontainer container 27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8. Jul 12 00:09:42.918709 systemd[1]: cri-containerd-27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8.scope: Deactivated successfully. Jul 12 00:09:42.925388 containerd[1942]: time="2025-07-12T00:09:42.925115401Z" level=info msg="StartContainer for \"27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8\" returns successfully" Jul 12 00:09:42.959253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8-rootfs.mount: Deactivated successfully. 
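The sandbox above runs cilium's init containers strictly in sequence (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state): each one is created, started, allowed to exit (scope deactivates, shim disconnects) before the next `CreateContainer`/`StartContainer` pair appears, and only then does the long-running cilium-agent start. A hedged sketch of that ordering, with illustrative function names rather than containerd's real API:

```python
# Illustrative sketch of the strictly ordered init-container flow in the log:
# each step must run to completion before the next is started, and the main
# container is only launched after every init step succeeds.

INIT_STEPS = ["mount-cgroup", "apply-sysctl-overwrites",
              "mount-bpf-fs", "clean-cilium-state"]

def run_pod(start_container, main="cilium-agent"):
    """start_container(name) -> exit code; abort if any init step fails.

    Returns the ordered list of lifecycle events, mirroring the
    StartContainer / shim-disconnected pairs seen in the log above.
    """
    events = []
    for name in INIT_STEPS:
        events.append(f"StartContainer for {name}")
        if start_container(name) != 0:
            raise RuntimeError(f"init container {name} failed")
        events.append(f"shim disconnected for {name}")
    # The main container is long-running; it does not exit here.
    events.append(f"StartContainer for {main}")
    return events
```

This matches the log's shape: four short-lived containers each followed by a "shim disconnected" cleanup, then one `StartContainer` for cilium-agent with no matching shim teardown.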
Jul 12 00:09:42.971524 containerd[1942]: time="2025-07-12T00:09:42.971368153Z" level=info msg="shim disconnected" id=27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8 namespace=k8s.io Jul 12 00:09:42.971524 containerd[1942]: time="2025-07-12T00:09:42.971438089Z" level=warning msg="cleaning up after shim disconnected" id=27bae8770e70cea1dab695741ac383a9566c12e11ac1b3bec0ee4e15b1b2b5e8 namespace=k8s.io Jul 12 00:09:42.971524 containerd[1942]: time="2025-07-12T00:09:42.971485381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:09:42.994637 containerd[1942]: time="2025-07-12T00:09:42.994537645Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:09:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 12 00:09:43.179011 containerd[1942]: time="2025-07-12T00:09:43.178229626Z" level=info msg="StopPodSandbox for \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\"" Jul 12 00:09:43.179011 containerd[1942]: time="2025-07-12T00:09:43.178378726Z" level=info msg="TearDown network for sandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" successfully" Jul 12 00:09:43.179011 containerd[1942]: time="2025-07-12T00:09:43.178404094Z" level=info msg="StopPodSandbox for \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" returns successfully" Jul 12 00:09:43.180054 containerd[1942]: time="2025-07-12T00:09:43.179816662Z" level=info msg="RemovePodSandbox for \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\"" Jul 12 00:09:43.180054 containerd[1942]: time="2025-07-12T00:09:43.179873698Z" level=info msg="Forcibly stopping sandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\"" Jul 12 00:09:43.180054 containerd[1942]: time="2025-07-12T00:09:43.179980726Z" level=info msg="TearDown network for sandbox 
\"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" successfully" Jul 12 00:09:43.186248 containerd[1942]: time="2025-07-12T00:09:43.186166822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:43.186527 containerd[1942]: time="2025-07-12T00:09:43.186267406Z" level=info msg="RemovePodSandbox \"d94ec637628bfa50fef7206677d2ae8d83f87e8cc99aa2093917fbcb2e933d85\" returns successfully" Jul 12 00:09:43.187340 containerd[1942]: time="2025-07-12T00:09:43.187010854Z" level=info msg="StopPodSandbox for \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\"" Jul 12 00:09:43.187340 containerd[1942]: time="2025-07-12T00:09:43.187156930Z" level=info msg="TearDown network for sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" successfully" Jul 12 00:09:43.187340 containerd[1942]: time="2025-07-12T00:09:43.187194346Z" level=info msg="StopPodSandbox for \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" returns successfully" Jul 12 00:09:43.188017 containerd[1942]: time="2025-07-12T00:09:43.187981690Z" level=info msg="RemovePodSandbox for \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\"" Jul 12 00:09:43.188705 containerd[1942]: time="2025-07-12T00:09:43.188173606Z" level=info msg="Forcibly stopping sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\"" Jul 12 00:09:43.188705 containerd[1942]: time="2025-07-12T00:09:43.188277274Z" level=info msg="TearDown network for sandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" successfully" Jul 12 00:09:43.194448 containerd[1942]: time="2025-07-12T00:09:43.194368066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 12 00:09:43.194700 containerd[1942]: time="2025-07-12T00:09:43.194497246Z" level=info msg="RemovePodSandbox \"da8e8e4af505171b52eca75932156c34c50ac58787f1a40287959ee5155b5929\" returns successfully" Jul 12 00:09:43.347050 kubelet[3122]: E0712 00:09:43.347002 3122 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:09:43.782119 containerd[1942]: time="2025-07-12T00:09:43.782038405Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:09:43.817354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368676361.mount: Deactivated successfully. Jul 12 00:09:43.822521 containerd[1942]: time="2025-07-12T00:09:43.822343885Z" level=info msg="CreateContainer within sandbox \"d528034c6c2a3735b05255788a481dd6c33ccf0963220bfa46a47f90bf0e273d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f\"" Jul 12 00:09:43.825272 containerd[1942]: time="2025-07-12T00:09:43.823845361Z" level=info msg="StartContainer for \"51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f\"" Jul 12 00:09:43.886884 systemd[1]: run-containerd-runc-k8s.io-51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f-runc.RT8KkL.mount: Deactivated successfully. Jul 12 00:09:43.897789 systemd[1]: Started cri-containerd-51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f.scope - libcontainer container 51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f. 
Jul 12 00:09:43.969966 containerd[1942]: time="2025-07-12T00:09:43.969891494Z" level=info msg="StartContainer for \"51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f\" returns successfully"
Jul 12 00:09:44.743861 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 12 00:09:44.823721 kubelet[3122]: I0712 00:09:44.823587 3122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4xwqd" podStartSLOduration=5.823562354 podStartE2EDuration="5.823562354s" podCreationTimestamp="2025-07-12 00:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:09:44.819537722 +0000 UTC m=+121.966770931" watchObservedRunningTime="2025-07-12 00:09:44.823562354 +0000 UTC m=+121.970795539"
Jul 12 00:09:46.455771 kubelet[3122]: I0712 00:09:46.455709 3122 setters.go:600] "Node became not ready" node="ip-172-31-31-176" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:09:46Z","lastTransitionTime":"2025-07-12T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:09:49.258359 (udev-worker)[5950]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:09:49.263173 systemd-networkd[1842]: lxc_health: Link UP
Jul 12 00:09:49.274387 (udev-worker)[5951]: Network interface NamePolicy= disabled on kernel command line.
Jul 12 00:09:49.300706 systemd-networkd[1842]: lxc_health: Gained carrier
Jul 12 00:09:50.578188 systemd-networkd[1842]: lxc_health: Gained IPv6LL
Jul 12 00:09:52.690972 ntpd[1901]: Listen normally on 14 lxc_health [fe80::90f2:d1ff:fe77:b575%14]:123
Jul 12 00:09:52.692360 ntpd[1901]: 12 Jul 00:09:52 ntpd[1901]: Listen normally on 14 lxc_health [fe80::90f2:d1ff:fe77:b575%14]:123
Jul 12 00:09:53.838683 kubelet[3122]: E0712 00:09:53.838429 3122 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:58840->127.0.0.1:45805: read tcp 127.0.0.1:58840->127.0.0.1:45805: read: connection reset by peer
Jul 12 00:09:56.004534 systemd[1]: run-containerd-runc-k8s.io-51d4f2e1477576daa58d2dbfbeae4e05428b17fc1454c02a5a557b64ea98854f-runc.TTShQl.mount: Deactivated successfully.
Jul 12 00:09:56.124359 sshd[5135]: pam_unix(sshd:session): session closed for user core
Jul 12 00:09:56.133154 systemd[1]: sshd@27-172.31.31.176:22-139.178.89.65:37114.service: Deactivated successfully.
Jul 12 00:09:56.140664 systemd[1]: session-28.scope: Deactivated successfully.
Jul 12 00:09:56.146994 systemd-logind[1909]: Session 28 logged out. Waiting for processes to exit.
Jul 12 00:09:56.150407 systemd-logind[1909]: Removed session 28.
Jul 12 00:10:10.895745 systemd[1]: cri-containerd-5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8.scope: Deactivated successfully.
Jul 12 00:10:10.896250 systemd[1]: cri-containerd-5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8.scope: Consumed 4.141s CPU time, 17.6M memory peak, 0B memory swap peak.
Jul 12 00:10:10.933846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8-rootfs.mount: Deactivated successfully.
Jul 12 00:10:10.954770 containerd[1942]: time="2025-07-12T00:10:10.954679324Z" level=info msg="shim disconnected" id=5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8 namespace=k8s.io
Jul 12 00:10:10.954770 containerd[1942]: time="2025-07-12T00:10:10.954761968Z" level=warning msg="cleaning up after shim disconnected" id=5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8 namespace=k8s.io
Jul 12 00:10:10.955880 containerd[1942]: time="2025-07-12T00:10:10.954784996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:10:11.858905 kubelet[3122]: I0712 00:10:11.858842 3122 scope.go:117] "RemoveContainer" containerID="5126a4127ce27e2bbcd30faa8c5efcb6bceaec280f3d8ec7c52c699319e39eb8"
Jul 12 00:10:11.862404 containerd[1942]: time="2025-07-12T00:10:11.862320700Z" level=info msg="CreateContainer within sandbox \"f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 12 00:10:11.886941 containerd[1942]: time="2025-07-12T00:10:11.886859536Z" level=info msg="CreateContainer within sandbox \"f252675b10b92950ed954985df5f2d4117ccad88f7c02eebd758bbc6b39736d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"33b1626fdb0dae07bf8394db06b34d9d1a05949ddd703d7cf1602d19e22520cf\""
Jul 12 00:10:11.887562 containerd[1942]: time="2025-07-12T00:10:11.887499436Z" level=info msg="StartContainer for \"33b1626fdb0dae07bf8394db06b34d9d1a05949ddd703d7cf1602d19e22520cf\""
Jul 12 00:10:11.947784 systemd[1]: Started cri-containerd-33b1626fdb0dae07bf8394db06b34d9d1a05949ddd703d7cf1602d19e22520cf.scope - libcontainer container 33b1626fdb0dae07bf8394db06b34d9d1a05949ddd703d7cf1602d19e22520cf.
Jul 12 00:10:12.018379 containerd[1942]: time="2025-07-12T00:10:12.018294853Z" level=info msg="StartContainer for \"33b1626fdb0dae07bf8394db06b34d9d1a05949ddd703d7cf1602d19e22520cf\" returns successfully"
Jul 12 00:10:15.982396 systemd[1]: cri-containerd-82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40.scope: Deactivated successfully.
Jul 12 00:10:15.983305 systemd[1]: cri-containerd-82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40.scope: Consumed 3.445s CPU time, 15.5M memory peak, 0B memory swap peak.
Jul 12 00:10:16.023702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40-rootfs.mount: Deactivated successfully.
Jul 12 00:10:16.039086 containerd[1942]: time="2025-07-12T00:10:16.038941517Z" level=info msg="shim disconnected" id=82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40 namespace=k8s.io
Jul 12 00:10:16.039086 containerd[1942]: time="2025-07-12T00:10:16.039058157Z" level=warning msg="cleaning up after shim disconnected" id=82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40 namespace=k8s.io
Jul 12 00:10:16.039086 containerd[1942]: time="2025-07-12T00:10:16.039079073Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:10:16.704923 kubelet[3122]: E0712 00:10:16.704840 3122 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.176:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-176?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 12 00:10:16.876623 kubelet[3122]: I0712 00:10:16.876574 3122 scope.go:117] "RemoveContainer" containerID="82529a21561a2db428bcd54a502f01915c690eef13126cb00b9e527ac659cf40"
Jul 12 00:10:16.879744 containerd[1942]: time="2025-07-12T00:10:16.879505881Z" level=info msg="CreateContainer within sandbox \"14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 12 00:10:16.908622 containerd[1942]: time="2025-07-12T00:10:16.908504409Z" level=info msg="CreateContainer within sandbox \"14cac4f0064614fa493a98c21f4c21f1e652394a1f3fac5ec8e764ed8a71e178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e\""
Jul 12 00:10:16.909487 containerd[1942]: time="2025-07-12T00:10:16.909346857Z" level=info msg="StartContainer for \"f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e\""
Jul 12 00:10:16.959516 systemd[1]: run-containerd-runc-k8s.io-f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e-runc.e2o1Hv.mount: Deactivated successfully.
Jul 12 00:10:16.975826 systemd[1]: Started cri-containerd-f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e.scope - libcontainer container f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e.
Jul 12 00:10:17.044030 containerd[1942]: time="2025-07-12T00:10:17.043833006Z" level=info msg="StartContainer for \"f5e66b8f93b83da6daef3bd928788cbbbab08b8f2440fbea59be384585a0435e\" returns successfully"
Jul 12 00:10:26.706015 kubelet[3122]: E0712 00:10:26.705875 3122 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-31-176)"
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.980576 1910 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.980643 1910 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.981037 1910 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.981870 1910 omaha_request_params.cc:62] Current group set to lts
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.982016 1910 update_attempter.cc:499] Already updated boot flags. Skipping.
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.982039 1910 update_attempter.cc:643] Scheduling an action processor start.
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.982069 1910 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jul 12 00:10:26.982419 update_engine[1910]: I20250712 00:10:26.982125 1910 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jul 12 00:10:26.983334 locksmithd[1954]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jul 12 00:10:26.984156 update_engine[1910]: I20250712 00:10:26.982832 1910 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jul 12 00:10:26.984156 update_engine[1910]: I20250712 00:10:26.982870 1910 omaha_request_action.cc:272] Request:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]:
Jul 12 00:10:26.984156 update_engine[1910]: I20250712 00:10:26.982889 1910 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jul 12 00:10:26.986178 update_engine[1910]: I20250712 00:10:26.986110 1910 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jul 12 00:10:26.986739 update_engine[1910]: I20250712 00:10:26.986685 1910 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jul 12 00:10:27.018859 update_engine[1910]: E20250712 00:10:27.018784 1910 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jul 12 00:10:27.018990 update_engine[1910]: I20250712 00:10:27.018904 1910 libcurl_http_fetcher.cc:283] No HTTP response, retry 1