Sep 4 23:44:23.203990 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 23:44:23.204033 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Sep 4 22:21:25 -00 2025
Sep 4 23:44:23.204058 kernel: KASLR disabled due to lack of seed
Sep 4 23:44:23.204074 kernel: efi: EFI v2.7 by EDK II
Sep 4 23:44:23.204090 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Sep 4 23:44:23.204105 kernel: secureboot: Secure boot disabled
Sep 4 23:44:23.204123 kernel: ACPI: Early table checksum verification disabled
Sep 4 23:44:23.204138 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 23:44:23.204154 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 23:44:23.204169 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 23:44:23.204189 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 23:44:23.204205 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 23:44:23.204220 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 23:44:23.204235 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 23:44:23.204253 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 23:44:23.204274 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 23:44:23.204290 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 23:44:23.204307 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 23:44:23.204323 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 23:44:23.204339 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 23:44:23.204356 kernel: printk: bootconsole [uart0] enabled
Sep 4 23:44:23.204372 kernel: NUMA: Failed to initialise from firmware
Sep 4 23:44:23.204388 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:44:23.204446 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 23:44:23.204464 kernel: Zone ranges:
Sep 4 23:44:23.204481 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 23:44:23.204504 kernel: DMA32 empty
Sep 4 23:44:23.204521 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 23:44:23.204537 kernel: Movable zone start for each node
Sep 4 23:44:23.204553 kernel: Early memory node ranges
Sep 4 23:44:23.204569 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 23:44:23.204585 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 23:44:23.204602 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 23:44:23.204618 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 23:44:23.204633 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 23:44:23.204650 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 23:44:23.204666 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 23:44:23.204682 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 23:44:23.204702 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 23:44:23.204719 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 23:44:23.204742 kernel: psci: probing for conduit method from ACPI.
Sep 4 23:44:23.204759 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 23:44:23.204776 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 23:44:23.204798 kernel: psci: Trusted OS migration not required
Sep 4 23:44:23.204815 kernel: psci: SMC Calling Convention v1.1
Sep 4 23:44:23.204832 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 4 23:44:23.204850 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 23:44:23.204866 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 23:44:23.204884 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 23:44:23.204901 kernel: Detected PIPT I-cache on CPU0
Sep 4 23:44:23.204918 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 23:44:23.204935 kernel: CPU features: detected: Spectre-v2
Sep 4 23:44:23.204952 kernel: CPU features: detected: Spectre-v3a
Sep 4 23:44:23.204968 kernel: CPU features: detected: Spectre-BHB
Sep 4 23:44:23.204989 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 23:44:23.205006 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 23:44:23.205023 kernel: alternatives: applying boot alternatives
Sep 4 23:44:23.205042 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:23.205060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 23:44:23.205077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 23:44:23.205094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 23:44:23.205111 kernel: Fallback order for Node 0: 0
Sep 4 23:44:23.205128 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 23:44:23.205145 kernel: Policy zone: Normal
Sep 4 23:44:23.205162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 23:44:23.205183 kernel: software IO TLB: area num 2.
Sep 4 23:44:23.205200 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 23:44:23.205218 kernel: Memory: 3821112K/4030464K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 209352K reserved, 0K cma-reserved)
Sep 4 23:44:23.205235 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 23:44:23.205252 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 23:44:23.205270 kernel: rcu: RCU event tracing is enabled.
Sep 4 23:44:23.205287 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 23:44:23.205304 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 23:44:23.205322 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 23:44:23.205339 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 23:44:23.205356 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 23:44:23.205377 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 23:44:23.205411 kernel: GICv3: 96 SPIs implemented
Sep 4 23:44:23.205432 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 23:44:23.205449 kernel: Root IRQ handler: gic_handle_irq
Sep 4 23:44:23.205466 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 23:44:23.205483 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 23:44:23.205500 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 23:44:23.205517 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 23:44:23.205534 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 23:44:23.205552 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 4 23:44:23.205568 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 23:44:23.205585 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 4 23:44:23.205609 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 23:44:23.205626 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 23:44:23.205643 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 23:44:23.205661 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 23:44:23.205678 kernel: Console: colour dummy device 80x25
Sep 4 23:44:23.205695 kernel: printk: console [tty1] enabled
Sep 4 23:44:23.205713 kernel: ACPI: Core revision 20230628
Sep 4 23:44:23.205750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 23:44:23.205768 kernel: pid_max: default: 32768 minimum: 301
Sep 4 23:44:23.205786 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 23:44:23.205809 kernel: landlock: Up and running.
Sep 4 23:44:23.205827 kernel: SELinux: Initializing.
Sep 4 23:44:23.205844 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:23.205861 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 23:44:23.205879 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:23.205896 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 4 23:44:23.205914 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 23:44:23.205931 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 23:44:23.205949 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 23:44:23.205971 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 23:44:23.205988 kernel: Remapping and enabling EFI services.
Sep 4 23:44:23.206005 kernel: smp: Bringing up secondary CPUs ...
Sep 4 23:44:23.206022 kernel: Detected PIPT I-cache on CPU1
Sep 4 23:44:23.206039 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 23:44:23.206056 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 4 23:44:23.206074 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 23:44:23.206116 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 23:44:23.206136 kernel: SMP: Total of 2 processors activated.
Sep 4 23:44:23.206160 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 23:44:23.206178 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 23:44:23.206206 kernel: CPU features: detected: CRC32 instructions
Sep 4 23:44:23.206228 kernel: CPU: All CPU(s) started at EL1
Sep 4 23:44:23.206246 kernel: alternatives: applying system-wide alternatives
Sep 4 23:44:23.206264 kernel: devtmpfs: initialized
Sep 4 23:44:23.206282 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 23:44:23.206300 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 23:44:23.206319 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 23:44:23.206341 kernel: SMBIOS 3.0.0 present.
Sep 4 23:44:23.206359 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 23:44:23.206377 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 23:44:23.206414 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 23:44:23.206459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 23:44:23.206478 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 23:44:23.206496 kernel: audit: initializing netlink subsys (disabled)
Sep 4 23:44:23.206521 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Sep 4 23:44:23.206539 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 23:44:23.206557 kernel: cpuidle: using governor menu
Sep 4 23:44:23.206575 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 23:44:23.206594 kernel: ASID allocator initialised with 65536 entries
Sep 4 23:44:23.206612 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 23:44:23.206630 kernel: Serial: AMBA PL011 UART driver
Sep 4 23:44:23.206648 kernel: Modules: 17728 pages in range for non-PLT usage
Sep 4 23:44:23.206666 kernel: Modules: 509248 pages in range for PLT usage
Sep 4 23:44:23.206689 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 23:44:23.206707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 23:44:23.206725 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 23:44:23.206743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 23:44:23.206761 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 23:44:23.206780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 23:44:23.206798 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 23:44:23.206816 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 23:44:23.206834 kernel: ACPI: Added _OSI(Module Device)
Sep 4 23:44:23.206856 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 23:44:23.206874 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 23:44:23.206892 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 23:44:23.206910 kernel: ACPI: Interpreter enabled
Sep 4 23:44:23.206928 kernel: ACPI: Using GIC for interrupt routing
Sep 4 23:44:23.206946 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 23:44:23.206964 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 23:44:23.207266 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 23:44:23.207522 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 23:44:23.207727 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 23:44:23.207922 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 23:44:23.208119 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 23:44:23.208144 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 23:44:23.208164 kernel: acpiphp: Slot [1] registered
Sep 4 23:44:23.208182 kernel: acpiphp: Slot [2] registered
Sep 4 23:44:23.208200 kernel: acpiphp: Slot [3] registered
Sep 4 23:44:23.208218 kernel: acpiphp: Slot [4] registered
Sep 4 23:44:23.208245 kernel: acpiphp: Slot [5] registered
Sep 4 23:44:23.208264 kernel: acpiphp: Slot [6] registered
Sep 4 23:44:23.208281 kernel: acpiphp: Slot [7] registered
Sep 4 23:44:23.208300 kernel: acpiphp: Slot [8] registered
Sep 4 23:44:23.208319 kernel: acpiphp: Slot [9] registered
Sep 4 23:44:23.208337 kernel: acpiphp: Slot [10] registered
Sep 4 23:44:23.208355 kernel: acpiphp: Slot [11] registered
Sep 4 23:44:23.208373 kernel: acpiphp: Slot [12] registered
Sep 4 23:44:23.208425 kernel: acpiphp: Slot [13] registered
Sep 4 23:44:23.208455 kernel: acpiphp: Slot [14] registered
Sep 4 23:44:23.208475 kernel: acpiphp: Slot [15] registered
Sep 4 23:44:23.208493 kernel: acpiphp: Slot [16] registered
Sep 4 23:44:23.208510 kernel: acpiphp: Slot [17] registered
Sep 4 23:44:23.208528 kernel: acpiphp: Slot [18] registered
Sep 4 23:44:23.208546 kernel: acpiphp: Slot [19] registered
Sep 4 23:44:23.208564 kernel: acpiphp: Slot [20] registered
Sep 4 23:44:23.208582 kernel: acpiphp: Slot [21] registered
Sep 4 23:44:23.208600 kernel: acpiphp: Slot [22] registered
Sep 4 23:44:23.208618 kernel: acpiphp: Slot [23] registered
Sep 4 23:44:23.208641 kernel: acpiphp: Slot [24] registered
Sep 4 23:44:23.208659 kernel: acpiphp: Slot [25] registered
Sep 4 23:44:23.208677 kernel: acpiphp: Slot [26] registered
Sep 4 23:44:23.208694 kernel: acpiphp: Slot [27] registered
Sep 4 23:44:23.208712 kernel: acpiphp: Slot [28] registered
Sep 4 23:44:23.208730 kernel: acpiphp: Slot [29] registered
Sep 4 23:44:23.208748 kernel: acpiphp: Slot [30] registered
Sep 4 23:44:23.208766 kernel: acpiphp: Slot [31] registered
Sep 4 23:44:23.208784 kernel: PCI host bridge to bus 0000:00
Sep 4 23:44:23.209010 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 23:44:23.209197 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 23:44:23.209379 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:44:23.209588 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 23:44:23.209835 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 23:44:23.210076 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 23:44:23.210291 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 23:44:23.210566 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 23:44:23.210776 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 23:44:23.210988 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:44:23.211640 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 23:44:23.211846 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 23:44:23.212045 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 23:44:23.212249 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 23:44:23.212470 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 23:44:23.212677 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 23:44:23.212880 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 23:44:23.213083 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 23:44:23.213292 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 23:44:23.213525 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 23:44:23.213761 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 23:44:23.213961 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 23:44:23.214148 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 23:44:23.214174 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 23:44:23.214193 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 23:44:23.214211 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 23:44:23.214230 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 23:44:23.214248 kernel: iommu: Default domain type: Translated
Sep 4 23:44:23.214275 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 23:44:23.214293 kernel: efivars: Registered efivars operations
Sep 4 23:44:23.214311 kernel: vgaarb: loaded
Sep 4 23:44:23.214329 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 23:44:23.214347 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 23:44:23.214365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 23:44:23.214383 kernel: pnp: PnP ACPI init
Sep 4 23:44:23.214644 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 23:44:23.214680 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 23:44:23.214700 kernel: NET: Registered PF_INET protocol family
Sep 4 23:44:23.214719 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 23:44:23.214738 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 23:44:23.214757 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 23:44:23.214775 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 23:44:23.214794 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 23:44:23.214812 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 23:44:23.214831 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:23.214854 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 23:44:23.214873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 23:44:23.214892 kernel: PCI: CLS 0 bytes, default 64
Sep 4 23:44:23.214910 kernel: kvm [1]: HYP mode not available
Sep 4 23:44:23.216445 kernel: Initialise system trusted keyrings
Sep 4 23:44:23.216483 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 23:44:23.216502 kernel: Key type asymmetric registered
Sep 4 23:44:23.216521 kernel: Asymmetric key parser 'x509' registered
Sep 4 23:44:23.216539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 23:44:23.216566 kernel: io scheduler mq-deadline registered
Sep 4 23:44:23.216585 kernel: io scheduler kyber registered
Sep 4 23:44:23.216603 kernel: io scheduler bfq registered
Sep 4 23:44:23.216859 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 23:44:23.216887 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 23:44:23.216906 kernel: ACPI: button: Power Button [PWRB]
Sep 4 23:44:23.216924 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 23:44:23.216943 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 23:44:23.216961 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 23:44:23.216986 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 23:44:23.217190 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 23:44:23.217216 kernel: printk: console [ttyS0] disabled
Sep 4 23:44:23.217235 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 23:44:23.217253 kernel: printk: console [ttyS0] enabled
Sep 4 23:44:23.217271 kernel: printk: bootconsole [uart0] disabled
Sep 4 23:44:23.217289 kernel: thunder_xcv, ver 1.0
Sep 4 23:44:23.217306 kernel: thunder_bgx, ver 1.0
Sep 4 23:44:23.217324 kernel: nicpf, ver 1.0
Sep 4 23:44:23.217348 kernel: nicvf, ver 1.0
Sep 4 23:44:23.217686 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 23:44:23.217902 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-04T23:44:22 UTC (1757029462)
Sep 4 23:44:23.217929 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 23:44:23.217948 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 23:44:23.217967 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 23:44:23.217985 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 23:44:23.218004 kernel: NET: Registered PF_INET6 protocol family
Sep 4 23:44:23.218029 kernel: Segment Routing with IPv6
Sep 4 23:44:23.218048 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 23:44:23.218066 kernel: NET: Registered PF_PACKET protocol family
Sep 4 23:44:23.218084 kernel: Key type dns_resolver registered
Sep 4 23:44:23.218101 kernel: registered taskstats version 1
Sep 4 23:44:23.218119 kernel: Loading compiled-in X.509 certificates
Sep 4 23:44:23.218138 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 83306acb9da7bc81cc6aa49a1c622f78672939c0'
Sep 4 23:44:23.218156 kernel: Key type .fscrypt registered
Sep 4 23:44:23.218174 kernel: Key type fscrypt-provisioning registered
Sep 4 23:44:23.218197 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 23:44:23.218215 kernel: ima: Allocated hash algorithm: sha1
Sep 4 23:44:23.218233 kernel: ima: No architecture policies found
Sep 4 23:44:23.218251 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 23:44:23.218269 kernel: clk: Disabling unused clocks
Sep 4 23:44:23.218287 kernel: Freeing unused kernel memory: 38400K
Sep 4 23:44:23.218305 kernel: Run /init as init process
Sep 4 23:44:23.218322 kernel: with arguments:
Sep 4 23:44:23.218340 kernel: /init
Sep 4 23:44:23.218362 kernel: with environment:
Sep 4 23:44:23.218380 kernel: HOME=/
Sep 4 23:44:23.220472 kernel: TERM=linux
Sep 4 23:44:23.220502 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 23:44:23.220523 systemd[1]: Successfully made /usr/ read-only.
Sep 4 23:44:23.220550 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:23.220572 systemd[1]: Detected virtualization amazon.
Sep 4 23:44:23.220634 systemd[1]: Detected architecture arm64.
Sep 4 23:44:23.220657 systemd[1]: Running in initrd.
Sep 4 23:44:23.220677 systemd[1]: No hostname configured, using default hostname.
Sep 4 23:44:23.220698 systemd[1]: Hostname set to .
Sep 4 23:44:23.220718 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:44:23.220737 systemd[1]: Queued start job for default target initrd.target.
Sep 4 23:44:23.220758 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:23.220779 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:23.220800 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 23:44:23.220827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:23.220847 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 23:44:23.220869 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 23:44:23.220892 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 23:44:23.220912 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 23:44:23.220932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:23.220957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:23.220977 systemd[1]: Reached target paths.target - Path Units.
Sep 4 23:44:23.220997 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:23.221017 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:23.221037 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 23:44:23.221057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:23.221077 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:23.221097 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 23:44:23.221117 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 23:44:23.221142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:23.221162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:23.221183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:23.221202 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 23:44:23.221222 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 23:44:23.221242 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:23.221262 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 23:44:23.221282 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 23:44:23.221302 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:23.221326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:23.221346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:23.221366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:23.221386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:23.221441 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 23:44:23.221471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 23:44:23.221540 systemd-journald[252]: Collecting audit messages is disabled.
Sep 4 23:44:23.221584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:23.221610 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 23:44:23.221631 systemd-journald[252]: Journal started
Sep 4 23:44:23.221668 systemd-journald[252]: Runtime Journal (/run/log/journal/ec297263479b762d890e0993a34b892e) is 8M, max 75.3M, 67.3M free.
Sep 4 23:44:23.190615 systemd-modules-load[253]: Inserted module 'overlay'
Sep 4 23:44:23.233533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 23:44:23.233617 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:23.237525 systemd-modules-load[253]: Inserted module 'br_netfilter'
Sep 4 23:44:23.242558 kernel: Bridge firewalling registered
Sep 4 23:44:23.244150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:23.252617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 23:44:23.264827 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:23.274725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:23.283674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:23.299280 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:23.313737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:23.319737 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:23.333736 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 23:44:23.343546 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:23.360749 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:23.378301 dracut-cmdline[288]: dracut-dracut-053
Sep 4 23:44:23.385435 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0304960b24e314f6095f7d8ad705a9bc0a9a4a34f7817da10ea634466a73d86e
Sep 4 23:44:23.455068 systemd-resolved[290]: Positive Trust Anchors:
Sep 4 23:44:23.455094 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:23.455155 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:23.562434 kernel: SCSI subsystem initialized
Sep 4 23:44:23.571416 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 23:44:23.582438 kernel: iscsi: registered transport (tcp)
Sep 4 23:44:23.604429 kernel: iscsi: registered transport (qla4xxx)
Sep 4 23:44:23.604515 kernel: QLogic iSCSI HBA Driver
Sep 4 23:44:23.690419 kernel: random: crng init done
Sep 4 23:44:23.690797 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 4 23:44:23.694670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:23.697856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:23.724465 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:23.733865 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 23:44:23.768637 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 23:44:23.768765 kernel: device-mapper: uevent: version 1.0.3
Sep 4 23:44:23.768795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 23:44:23.835438 kernel: raid6: neonx8 gen() 6628 MB/s
Sep 4 23:44:23.852427 kernel: raid6: neonx4 gen() 6593 MB/s
Sep 4 23:44:23.869426 kernel: raid6: neonx2 gen() 5459 MB/s
Sep 4 23:44:23.886427 kernel: raid6: neonx1 gen() 3978 MB/s
Sep 4 23:44:23.903426 kernel: raid6: int64x8 gen() 3635 MB/s
Sep 4 23:44:23.920427 kernel: raid6: int64x4 gen() 3723 MB/s
Sep 4 23:44:23.937426 kernel: raid6: int64x2 gen() 3619 MB/s
Sep 4 23:44:23.955397 kernel: raid6: int64x1 gen() 2770 MB/s
Sep 4 23:44:23.955429 kernel: raid6: using algorithm neonx8 gen() 6628 MB/s
Sep 4 23:44:23.973428 kernel: raid6: .... xor() 4743 MB/s, rmw enabled
Sep 4 23:44:23.973463 kernel: raid6: using neon recovery algorithm
Sep 4 23:44:23.981861 kernel: xor: measuring software checksum speed
Sep 4 23:44:23.981916 kernel: 8regs : 12935 MB/sec
Sep 4 23:44:23.983036 kernel: 32regs : 13047 MB/sec
Sep 4 23:44:23.984299 kernel: arm64_neon : 9579 MB/sec
Sep 4 23:44:23.984331 kernel: xor: using function: 32regs (13047 MB/sec)
Sep 4 23:44:24.067438 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 23:44:24.086594 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:24.104706 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:24.139387 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 4 23:44:24.150743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:24.162989 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 23:44:24.194189 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Sep 4 23:44:24.250423 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 23:44:24.262681 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:24.384477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:24.396765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 23:44:24.440660 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 23:44:24.446823 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 23:44:24.452763 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:24.455602 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:24.470744 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 23:44:24.513921 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 23:44:24.598442 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 23:44:24.598512 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 23:44:24.604024 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 23:44:24.604388 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 23:44:24.611448 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:dc:0d:7d:19:11
Sep 4 23:44:24.617923 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:44:24.624037 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:24.626151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 23:44:24.632646 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:44:24.636534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 23:44:24.636831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:44:24.650925 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:44:24.659812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 23:44:24.665574 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 23:44:24.673472 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 4 23:44:24.673511 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 4 23:44:24.683438 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 23:44:24.691155 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 23:44:24.691234 kernel: GPT:9289727 != 16777215 Sep 4 23:44:24.691259 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 23:44:24.692105 kernel: GPT:9289727 != 16777215 Sep 4 23:44:24.693485 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 23:44:24.694417 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 23:44:24.700803 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 23:44:24.712765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 23:44:24.761901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 4 23:44:24.810441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by (udev-worker) (519) Sep 4 23:44:24.816692 kernel: BTRFS: device fsid 74a5374f-334b-4c07-8952-82f9f0ad22ae devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (516) Sep 4 23:44:24.913350 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 4 23:44:24.958652 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 4 23:44:24.985539 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 23:44:25.005686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 4 23:44:25.013809 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 4 23:44:25.033622 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 23:44:25.048514 disk-uuid[662]: Primary Header is updated. Sep 4 23:44:25.048514 disk-uuid[662]: Secondary Entries is updated. Sep 4 23:44:25.048514 disk-uuid[662]: Secondary Header is updated. Sep 4 23:44:25.061438 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 23:44:26.074509 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 23:44:26.075558 disk-uuid[663]: The operation has completed successfully. Sep 4 23:44:26.261930 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 23:44:26.263879 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 23:44:26.369646 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 23:44:26.377346 sh[923]: Success Sep 4 23:44:26.398439 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 23:44:26.515568 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 4 23:44:26.520422 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 23:44:26.530781 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 23:44:26.564644 kernel: BTRFS info (device dm-0): first mount of filesystem 74a5374f-334b-4c07-8952-82f9f0ad22ae Sep 4 23:44:26.564717 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:44:26.564744 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 23:44:26.566484 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 23:44:26.567821 kernel: BTRFS info (device dm-0): using free space tree Sep 4 23:44:26.694441 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 23:44:26.714239 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 23:44:26.718811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 23:44:26.736632 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 23:44:26.742634 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 23:44:26.788001 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:44:26.788072 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:44:26.789427 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 23:44:26.806471 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 23:44:26.814476 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:44:26.818569 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 23:44:26.835780 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 4 23:44:26.926451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 23:44:26.942665 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 23:44:27.000237 systemd-networkd[1126]: lo: Link UP Sep 4 23:44:27.002007 systemd-networkd[1126]: lo: Gained carrier Sep 4 23:44:27.006470 systemd-networkd[1126]: Enumeration completed Sep 4 23:44:27.007544 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 23:44:27.007603 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:44:27.007610 systemd-networkd[1126]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 23:44:27.012983 systemd[1]: Reached target network.target - Network. Sep 4 23:44:27.018281 systemd-networkd[1126]: eth0: Link UP Sep 4 23:44:27.018288 systemd-networkd[1126]: eth0: Gained carrier Sep 4 23:44:27.018306 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 23:44:27.052468 systemd-networkd[1126]: eth0: DHCPv4 address 172.31.31.201/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 23:44:27.233830 ignition[1054]: Ignition 2.20.0 Sep 4 23:44:27.233867 ignition[1054]: Stage: fetch-offline Sep 4 23:44:27.237767 ignition[1054]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:27.237932 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:27.240231 ignition[1054]: Ignition finished successfully Sep 4 23:44:27.246351 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:44:27.263648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 23:44:27.289295 ignition[1138]: Ignition 2.20.0 Sep 4 23:44:27.289333 ignition[1138]: Stage: fetch Sep 4 23:44:27.291154 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:27.291182 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:27.292493 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:27.309199 ignition[1138]: PUT result: OK Sep 4 23:44:27.312757 ignition[1138]: parsed url from cmdline: "" Sep 4 23:44:27.312783 ignition[1138]: no config URL provided Sep 4 23:44:27.312799 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 23:44:27.312830 ignition[1138]: no config at "/usr/lib/ignition/user.ign" Sep 4 23:44:27.312864 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:27.314610 ignition[1138]: PUT result: OK Sep 4 23:44:27.314701 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 4 23:44:27.319374 ignition[1138]: GET result: OK Sep 4 23:44:27.319656 ignition[1138]: parsing config with SHA512: 144912cc9ec34474fd9a5a5cbdae2bcc600b70552660003eb578e2e246735fd260a548562b4319af953263ddf115447944d1db6079c05c44985930aebb4825a8 Sep 4 23:44:27.341147 unknown[1138]: fetched base config from "system" Sep 4 23:44:27.341178 unknown[1138]: fetched base config from "system" Sep 4 23:44:27.341192 unknown[1138]: fetched user config from "aws" Sep 4 23:44:27.344334 ignition[1138]: fetch: fetch complete Sep 4 23:44:27.344346 ignition[1138]: fetch: fetch passed Sep 4 23:44:27.344455 ignition[1138]: Ignition finished successfully Sep 4 23:44:27.350720 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 23:44:27.364838 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 4 23:44:27.395555 ignition[1144]: Ignition 2.20.0 Sep 4 23:44:27.396053 ignition[1144]: Stage: kargs Sep 4 23:44:27.396706 ignition[1144]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:27.396731 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:27.396901 ignition[1144]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:27.406061 ignition[1144]: PUT result: OK Sep 4 23:44:27.410915 ignition[1144]: kargs: kargs passed Sep 4 23:44:27.411026 ignition[1144]: Ignition finished successfully Sep 4 23:44:27.415633 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 23:44:27.426765 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 23:44:27.448678 ignition[1150]: Ignition 2.20.0 Sep 4 23:44:27.448707 ignition[1150]: Stage: disks Sep 4 23:44:27.449308 ignition[1150]: no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:27.449335 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:27.449520 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:27.460611 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 23:44:27.451623 ignition[1150]: PUT result: OK Sep 4 23:44:27.465865 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 23:44:27.458014 ignition[1150]: disks: disks passed Sep 4 23:44:27.470618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 23:44:27.458113 ignition[1150]: Ignition finished successfully Sep 4 23:44:27.473278 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 23:44:27.476758 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 23:44:27.479002 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:44:27.495659 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 4 23:44:27.548369 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 23:44:27.556629 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 23:44:27.567558 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 23:44:27.652465 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 22b06923-f972-4753-b92e-d6b25ef15ca3 r/w with ordered data mode. Quota mode: none. Sep 4 23:44:27.653869 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 23:44:27.662492 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 23:44:27.680595 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 23:44:27.690601 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 23:44:27.699298 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 23:44:27.699415 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 23:44:27.721966 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by mount (1177) Sep 4 23:44:27.722005 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:44:27.722031 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:44:27.699493 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:44:27.725903 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 23:44:27.728582 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 23:44:27.740437 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 23:44:27.743700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 23:44:27.750564 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 23:44:28.187331 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 23:44:28.231575 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory Sep 4 23:44:28.240360 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 23:44:28.249780 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 23:44:28.579017 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 23:44:28.591612 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 23:44:28.603951 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 23:44:28.623410 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 23:44:28.629532 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:44:28.663622 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 23:44:28.676027 ignition[1290]: INFO : Ignition 2.20.0 Sep 4 23:44:28.676027 ignition[1290]: INFO : Stage: mount Sep 4 23:44:28.679744 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:28.679744 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:28.685089 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:28.687767 ignition[1290]: INFO : PUT result: OK Sep 4 23:44:28.692874 ignition[1290]: INFO : mount: mount passed Sep 4 23:44:28.694615 ignition[1290]: INFO : Ignition finished successfully Sep 4 23:44:28.699285 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 23:44:28.708733 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 23:44:28.722156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 23:44:28.760430 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1301) Sep 4 23:44:28.764707 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6280ecaa-ba8f-4e5e-8483-db3a07084cf9 Sep 4 23:44:28.764752 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 23:44:28.766006 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 23:44:28.771421 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 23:44:28.774995 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 23:44:28.807955 ignition[1318]: INFO : Ignition 2.20.0 Sep 4 23:44:28.807955 ignition[1318]: INFO : Stage: files Sep 4 23:44:28.812936 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:28.812936 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:28.812936 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:28.812936 ignition[1318]: INFO : PUT result: OK Sep 4 23:44:28.825568 ignition[1318]: DEBUG : files: compiled without relabeling support, skipping Sep 4 23:44:28.828536 ignition[1318]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 23:44:28.828536 ignition[1318]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 23:44:28.839486 ignition[1318]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 23:44:28.843017 ignition[1318]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 23:44:28.843017 ignition[1318]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 23:44:28.842834 unknown[1318]: wrote ssh authorized keys file for user: core Sep 4 23:44:28.850498 systemd-networkd[1126]: eth0: Gained IPv6LL Sep 4 23:44:28.858759 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] 
writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 4 23:44:28.858759 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 4 23:44:28.944675 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 23:44:29.102494 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 4 23:44:29.102494 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:44:29.110846 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 4 23:44:29.371441 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 23:44:29.601455 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 23:44:29.601455 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 23:44:29.609218 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 23:44:29.609218 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:44:29.609218 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 23:44:29.620735 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 23:44:29.620735 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Sep 4 23:44:29.620735 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:44:29.620735 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 23:44:29.637092 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:44:29.641230 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 23:44:29.645166 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 4 23:44:29.650767 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 4 23:44:29.656236 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 4 23:44:29.656236 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 4 23:44:30.048541 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 23:44:30.409549 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 4 23:44:30.409549 ignition[1318]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 23:44:30.417714 ignition[1318]: INFO : files: files passed Sep 4 23:44:30.417714 ignition[1318]: INFO : Ignition finished successfully Sep 4 23:44:30.417076 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 23:44:30.456658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 23:44:30.465710 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 23:44:30.478994 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 23:44:30.481199 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 4 23:44:30.497440 initrd-setup-root-after-ignition[1347]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:44:30.497440 initrd-setup-root-after-ignition[1347]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:44:30.508243 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 23:44:30.515461 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 23:44:30.515819 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 23:44:30.534884 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 23:44:30.576524 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 23:44:30.576732 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 23:44:30.585969 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 23:44:30.588437 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 23:44:30.591064 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 23:44:30.607157 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 23:44:30.630982 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:44:30.643684 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 23:44:30.670103 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 23:44:30.675695 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 23:44:30.679193 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 23:44:30.683321 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 4 23:44:30.683606 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 23:44:30.692384 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 23:44:30.698886 systemd[1]: Stopped target basic.target - Basic System. Sep 4 23:44:30.702897 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 23:44:30.709005 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 23:44:30.714007 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 23:44:30.718960 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 23:44:30.721424 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 23:44:30.724710 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 23:44:30.733772 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 23:44:30.736265 systemd[1]: Stopped target swap.target - Swaps. Sep 4 23:44:30.741906 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 23:44:30.742142 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 23:44:30.748839 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 23:44:30.751625 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 23:44:30.756208 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 23:44:30.759799 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 23:44:30.762736 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 23:44:30.763050 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 23:44:30.777136 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 23:44:30.779910 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 4 23:44:30.785446 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 23:44:30.785694 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 23:44:30.799709 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 23:44:30.801930 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 23:44:30.802189 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 23:44:30.819703 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 23:44:30.822060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 23:44:30.822993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 23:44:30.833191 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 23:44:30.834510 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 23:44:30.855494 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 23:44:30.859550 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 23:44:30.869536 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 23:44:30.879048 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 23:44:30.881199 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Sep 4 23:44:30.888970 ignition[1371]: INFO : Ignition 2.20.0 Sep 4 23:44:30.888970 ignition[1371]: INFO : Stage: umount Sep 4 23:44:30.888970 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 23:44:30.888970 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 23:44:30.888970 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 23:44:30.900678 ignition[1371]: INFO : PUT result: OK Sep 4 23:44:30.904419 ignition[1371]: INFO : umount: umount passed Sep 4 23:44:30.906325 ignition[1371]: INFO : Ignition finished successfully Sep 4 23:44:30.910071 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 23:44:30.910505 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 23:44:30.915556 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 23:44:30.915724 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 23:44:30.923487 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 23:44:30.923606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 23:44:30.929992 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 23:44:30.930094 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 23:44:30.932296 systemd[1]: Stopped target network.target - Network. Sep 4 23:44:30.934448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 23:44:30.934546 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 23:44:30.937227 systemd[1]: Stopped target paths.target - Path Units. Sep 4 23:44:30.939168 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 23:44:30.948719 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 23:44:30.951448 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 4 23:44:30.953474 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 23:44:30.955925 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 23:44:30.956010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 23:44:30.960543 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 23:44:30.960619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 23:44:30.964191 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 23:44:30.964284 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 23:44:30.966573 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 23:44:30.966657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 23:44:30.971757 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 23:44:30.971848 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 23:44:30.977045 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 23:44:30.979897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:30.993096 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 23:44:30.993451 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:31.016307 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 23:44:31.016892 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 23:44:31.017116 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 23:44:31.022361 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 23:44:31.024025 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 23:44:31.024148 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:31.041422 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 23:44:31.045698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 23:44:31.045822 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 23:44:31.049358 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 23:44:31.049466 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:31.052672 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 23:44:31.052760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:31.069701 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 23:44:31.069807 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:31.076718 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:31.089681 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 23:44:31.089825 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:31.111249 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 23:44:31.112904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:31.121674 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 23:44:31.122293 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:31.128966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 23:44:31.129256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:31.136195 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 23:44:31.136312 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 23:44:31.139809 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 23:44:31.139921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 23:44:31.150843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 23:44:31.150957 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 23:44:31.169005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 23:44:31.175541 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 23:44:31.175761 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:31.188111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 23:44:31.188217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:31.192671 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 23:44:31.192797 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 23:44:31.200857 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 23:44:31.201106 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 23:44:31.220950 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 23:44:31.222646 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 23:44:31.229820 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 23:44:31.241763 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 23:44:31.260847 systemd[1]: Switching root.
Sep 4 23:44:31.333490 systemd-journald[252]: Journal stopped
Sep 4 23:44:34.225274 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Sep 4 23:44:34.225484 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 23:44:34.225534 kernel: SELinux: policy capability open_perms=1
Sep 4 23:44:34.225568 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 23:44:34.225598 kernel: SELinux: policy capability always_check_network=0
Sep 4 23:44:34.225649 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 23:44:34.225687 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 23:44:34.225721 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 23:44:34.225751 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 23:44:34.225780 kernel: audit: type=1403 audit(1757029472.074:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 23:44:34.225827 systemd[1]: Successfully loaded SELinux policy in 96.797ms.
Sep 4 23:44:34.225882 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.088ms.
Sep 4 23:44:34.225917 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 23:44:34.225950 systemd[1]: Detected virtualization amazon.
Sep 4 23:44:34.225981 systemd[1]: Detected architecture arm64.
Sep 4 23:44:34.226012 systemd[1]: Detected first boot.
Sep 4 23:44:34.226046 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 23:44:34.226079 zram_generator::config[1415]: No configuration found.
Sep 4 23:44:34.226119 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 23:44:34.226153 systemd[1]: Populated /etc with preset unit settings.
Sep 4 23:44:34.226188 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 23:44:34.226222 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 23:44:34.226255 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 23:44:34.226289 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 23:44:34.226320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 23:44:34.226350 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 23:44:34.226380 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 23:44:34.226465 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 23:44:34.226504 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 23:44:34.226535 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 23:44:34.226570 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 23:44:34.226603 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 23:44:34.226634 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 23:44:34.226668 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 23:44:34.226710 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 23:44:34.226745 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 23:44:34.226776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 23:44:34.226808 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 23:44:34.226840 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 23:44:34.226872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 23:44:34.226902 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 23:44:34.226934 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 23:44:34.226967 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 23:44:34.227003 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 23:44:34.227036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 23:44:34.227066 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 23:44:34.227096 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 23:44:34.227129 systemd[1]: Reached target swap.target - Swaps.
Sep 4 23:44:34.227159 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 23:44:34.227190 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 23:44:34.227222 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 23:44:34.227253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 23:44:34.227290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 23:44:34.227320 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 23:44:34.227351 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 23:44:34.227383 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 23:44:34.227442 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 23:44:34.227479 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 23:44:34.227512 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 23:44:34.227545 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 23:44:34.227575 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 23:44:34.227615 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 23:44:34.227649 systemd[1]: Reached target machines.target - Containers.
Sep 4 23:44:34.227681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 23:44:34.227715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:34.227745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 23:44:34.227774 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 23:44:34.227804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:34.227835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:34.227879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:34.227911 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 23:44:34.227940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:34.227971 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 23:44:34.228000 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 23:44:34.228029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 23:44:34.228058 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 23:44:34.228086 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 23:44:34.228116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:34.228153 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 23:44:34.228182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 23:44:34.228211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 23:44:34.228240 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 23:44:34.228271 kernel: fuse: init (API version 7.39)
Sep 4 23:44:34.228299 kernel: loop: module loaded
Sep 4 23:44:34.228330 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 23:44:34.228362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 23:44:34.230454 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 23:44:34.230530 systemd[1]: Stopped verity-setup.service.
Sep 4 23:44:34.230561 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 23:44:34.230590 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 23:44:34.230622 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 23:44:34.230657 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 23:44:34.230688 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 23:44:34.230718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 23:44:34.230747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 23:44:34.230781 kernel: ACPI: bus type drm_connector registered
Sep 4 23:44:34.230811 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 23:44:34.230845 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 23:44:34.230878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:34.230909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:34.230939 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:34.231032 systemd-journald[1501]: Collecting audit messages is disabled.
Sep 4 23:44:34.231090 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:34.231124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:34.231161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:34.231190 systemd-journald[1501]: Journal started
Sep 4 23:44:34.231239 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec297263479b762d890e0993a34b892e) is 8M, max 75.3M, 67.3M free.
Sep 4 23:44:33.557210 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 23:44:33.572680 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 23:44:33.573652 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 23:44:34.239844 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 23:44:34.242026 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 23:44:34.244618 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 23:44:34.247972 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:34.248479 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:34.254958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 23:44:34.258142 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 23:44:34.262077 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 23:44:34.266885 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 23:44:34.304542 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 23:44:34.315613 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 23:44:34.330651 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 23:44:34.335615 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 23:44:34.335684 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 23:44:34.343978 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 23:44:34.354758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 23:44:34.366749 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 23:44:34.371066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:34.384537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 23:44:34.394472 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 23:44:34.398653 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:34.412802 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 23:44:34.415599 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:34.422367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 23:44:34.430845 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 23:44:34.439933 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 23:44:34.444198 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 23:44:34.447857 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 23:44:34.451800 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 23:44:34.473828 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 23:44:34.488703 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec297263479b762d890e0993a34b892e is 97.676ms for 921 entries.
Sep 4 23:44:34.488703 systemd-journald[1501]: System Journal (/var/log/journal/ec297263479b762d890e0993a34b892e) is 8M, max 195.6M, 187.6M free.
Sep 4 23:44:34.610222 systemd-journald[1501]: Received client request to flush runtime journal.
Sep 4 23:44:34.611163 kernel: loop0: detected capacity change from 0 to 211168
Sep 4 23:44:34.494871 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 23:44:34.498105 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 23:44:34.514789 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 23:44:34.587121 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 23:44:34.592738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 23:44:34.596626 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 23:44:34.622855 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 23:44:34.637935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 23:44:34.660733 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 23:44:34.671160 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 23:44:34.687579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 23:44:34.719640 udevadm[1566]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 23:44:34.782717 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Sep 4 23:44:34.783773 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Sep 4 23:44:34.804456 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 23:44:34.806542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 23:44:34.844430 kernel: loop1: detected capacity change from 0 to 123192
Sep 4 23:44:34.974448 kernel: loop2: detected capacity change from 0 to 53784
Sep 4 23:44:35.111922 kernel: loop3: detected capacity change from 0 to 113512
Sep 4 23:44:35.228463 kernel: loop4: detected capacity change from 0 to 211168
Sep 4 23:44:35.255451 kernel: loop5: detected capacity change from 0 to 123192
Sep 4 23:44:35.269446 kernel: loop6: detected capacity change from 0 to 53784
Sep 4 23:44:35.290501 kernel: loop7: detected capacity change from 0 to 113512
Sep 4 23:44:35.303342 (sd-merge)[1576]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 23:44:35.304354 (sd-merge)[1576]: Merged extensions into '/usr'.
Sep 4 23:44:35.317877 systemd[1]: Reload requested from client PID 1549 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 23:44:35.317911 systemd[1]: Reloading...
Sep 4 23:44:35.512480 zram_generator::config[1607]: No configuration found.
Sep 4 23:44:35.857876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:36.016279 systemd[1]: Reloading finished in 697 ms.
Sep 4 23:44:36.029773 ldconfig[1544]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 23:44:36.041435 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 23:44:36.044657 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 23:44:36.048325 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 23:44:36.064739 systemd[1]: Starting ensure-sysext.service...
Sep 4 23:44:36.073834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 23:44:36.088655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 23:44:36.117685 systemd[1]: Reload requested from client PID 1657 ('systemctl') (unit ensure-sysext.service)...
Sep 4 23:44:36.117724 systemd[1]: Reloading...
Sep 4 23:44:36.163857 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 23:44:36.165046 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 23:44:36.166927 systemd-tmpfiles[1658]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 23:44:36.168198 systemd-tmpfiles[1658]: ACLs are not supported, ignoring.
Sep 4 23:44:36.168518 systemd-tmpfiles[1658]: ACLs are not supported, ignoring.
Sep 4 23:44:36.194303 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:36.194554 systemd-tmpfiles[1658]: Skipping /boot
Sep 4 23:44:36.211137 systemd-udevd[1659]: Using default interface naming scheme 'v255'.
Sep 4 23:44:36.239627 systemd-tmpfiles[1658]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 23:44:36.239814 systemd-tmpfiles[1658]: Skipping /boot
Sep 4 23:44:36.337443 zram_generator::config[1691]: No configuration found.
Sep 4 23:44:36.590170 (udev-worker)[1718]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:44:36.754028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 23:44:36.796832 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1745)
Sep 4 23:44:36.970593 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 23:44:36.970912 systemd[1]: Reloading finished in 852 ms.
Sep 4 23:44:37.033988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 23:44:37.071086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 23:44:37.156587 systemd[1]: Finished ensure-sysext.service.
Sep 4 23:44:37.186197 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 23:44:37.226826 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 23:44:37.236864 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 23:44:37.249658 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 23:44:37.254820 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 23:44:37.257268 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 23:44:37.268542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 23:44:37.278067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 23:44:37.284074 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 23:44:37.290799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 23:44:37.293416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 23:44:37.298766 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 23:44:37.301513 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 23:44:37.312944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 23:44:37.321793 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 23:44:37.331743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 23:44:37.334118 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 23:44:37.341728 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 23:44:37.348764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 23:44:37.354961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 23:44:37.357230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 23:44:37.392421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 23:44:37.392914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 23:44:37.396057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 23:44:37.405432 lvm[1858]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:37.424091 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 23:44:37.441759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 23:44:37.443843 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 23:44:37.448076 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 23:44:37.450637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 23:44:37.457000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 23:44:37.476179 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 23:44:37.496664 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 23:44:37.506501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 23:44:37.524056 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 23:44:37.543853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 23:44:37.547277 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 23:44:37.589276 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 23:44:37.595384 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 23:44:37.600208 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 23:44:37.613748 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 23:44:37.641924 augenrules[1904]: No rules
Sep 4 23:44:37.645016 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 23:44:37.645364 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 23:44:37.647094 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 23:44:37.675594 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 23:44:37.703523 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 23:44:37.741116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 23:44:37.828755 systemd-networkd[1868]: lo: Link UP
Sep 4 23:44:37.828771 systemd-networkd[1868]: lo: Gained carrier
Sep 4 23:44:37.832799 systemd-networkd[1868]: Enumeration completed
Sep 4 23:44:37.833158 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 23:44:37.834605 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:37.834615 systemd-networkd[1868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 23:44:37.837607 systemd-networkd[1868]: eth0: Link UP
Sep 4 23:44:37.838226 systemd-networkd[1868]: eth0: Gained carrier
Sep 4 23:44:37.838415 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 23:44:37.843738 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 23:44:37.852204 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 23:44:37.862610 systemd-networkd[1868]: eth0: DHCPv4 address 172.31.31.201/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 23:44:37.881341 systemd-resolved[1869]: Positive Trust Anchors:
Sep 4 23:44:37.881386 systemd-resolved[1869]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 23:44:37.881484 systemd-resolved[1869]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 23:44:37.900954 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 23:44:37.908808 systemd-resolved[1869]: Defaulting to hostname 'linux'.
Sep 4 23:44:37.912528 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 23:44:37.915332 systemd[1]: Reached target network.target - Network.
Sep 4 23:44:37.917817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 23:44:37.920654 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 23:44:37.923466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 23:44:37.926486 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 23:44:37.929916 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 23:44:37.932737 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 23:44:37.935758 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 23:44:37.938703 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 23:44:37.938769 systemd[1]: Reached target paths.target - Path Units. Sep 4 23:44:37.940996 systemd[1]: Reached target timers.target - Timer Units. Sep 4 23:44:37.944573 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 23:44:37.950101 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 23:44:37.958059 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 23:44:37.962112 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 23:44:37.965087 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 23:44:37.977840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 23:44:37.981172 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 23:44:37.985242 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 23:44:37.988275 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 23:44:37.990932 systemd[1]: Reached target basic.target - Basic System. Sep 4 23:44:37.993471 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:44:37.993747 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 23:44:38.007485 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 23:44:38.014662 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 23:44:38.021371 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 23:44:38.037739 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 23:44:38.044206 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 4 23:44:38.046636 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 23:44:38.050335 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 23:44:38.064603 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 23:44:38.071700 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 23:44:38.083614 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 23:44:38.090777 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 23:44:38.100823 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 23:44:38.116739 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 23:44:38.127985 jq[1930]: false Sep 4 23:44:38.123882 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 23:44:38.124881 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 23:44:38.127760 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 23:44:38.155064 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 23:44:38.168247 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 23:44:38.168814 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 23:44:38.169549 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 23:44:38.169989 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 23:44:38.203720 dbus-daemon[1929]: [system] SELinux support is enabled Sep 4 23:44:38.203989 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 23:44:38.230896 dbus-daemon[1929]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1868 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 23:44:38.213465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 23:44:38.213521 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 23:44:38.216511 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 23:44:38.216552 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 23:44:38.261443 update_engine[1941]: I20250904 23:44:38.253347 1941 main.cc:92] Flatcar Update Engine starting Sep 4 23:44:38.261443 update_engine[1941]: I20250904 23:44:38.259769 1941 update_check_scheduler.cc:74] Next update check in 5m37s Sep 4 23:44:38.265005 systemd[1]: Started update-engine.service - Update Engine. Sep 4 23:44:38.278749 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 23:44:38.286841 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 23:44:38.291726 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 23:44:38.293525 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 23:44:38.327455 extend-filesystems[1931]: Found loop4 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found loop5 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found loop6 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found loop7 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p1 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p2 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p3 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found usr Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p4 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p6 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p7 Sep 4 23:44:38.327455 extend-filesystems[1931]: Found nvme0n1p9 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: ---------------------------------------------------- Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: corporation. 
Support and training for ntp-4 are Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: available at https://www.nwtime.org/support Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: ---------------------------------------------------- Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: proto: precision = 0.108 usec (-23) Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: basedate set to 2025-08-23 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: gps base set to 2025-08-24 (week 2381) Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listen normally on 3 eth0 172.31.31.201:123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listen normally on 4 lo [::1]:123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: bind(21) AF_INET6 fe80::4dc:dff:fe7d:1911%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: unable to create socket on eth0 (5) for fe80::4dc:dff:fe7d:1911%2#123 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: failed to init interface for address fe80::4dc:dff:fe7d:1911%2 Sep 4 23:44:38.412627 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: Listening on routing socket on fd #21 for interface updates Sep 4 23:44:38.425484 jq[1943]: true Sep 4 23:44:38.350355 (ntainerd)[1963]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 23:44:38.363935 ntpd[1933]: ntpd 4.2.8p17@1.4004-o Thu Sep 4 21:39:02 UTC 2025 (1): Starting Sep 4 23:44:38.436012 extend-filesystems[1931]: Checking size of /dev/nvme0n1p9 Sep 4 23:44:38.443664 ntpd[1933]: 4 
Sep 23:44:38 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:44:38.443664 ntpd[1933]: 4 Sep 23:44:38 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:44:38.363988 ntpd[1933]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 23:44:38.364008 ntpd[1933]: ---------------------------------------------------- Sep 4 23:44:38.364028 ntpd[1933]: ntp-4 is maintained by Network Time Foundation, Sep 4 23:44:38.364046 ntpd[1933]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 23:44:38.364065 ntpd[1933]: corporation. Support and training for ntp-4 are Sep 4 23:44:38.364082 ntpd[1933]: available at https://www.nwtime.org/support Sep 4 23:44:38.364099 ntpd[1933]: ---------------------------------------------------- Sep 4 23:44:38.376420 ntpd[1933]: proto: precision = 0.108 usec (-23) Sep 4 23:44:38.381877 ntpd[1933]: basedate set to 2025-08-23 Sep 4 23:44:38.381913 ntpd[1933]: gps base set to 2025-08-24 (week 2381) Sep 4 23:44:38.391616 ntpd[1933]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 23:44:38.391711 ntpd[1933]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 23:44:38.470746 extend-filesystems[1931]: Resized partition /dev/nvme0n1p9 Sep 4 23:44:38.475184 tar[1957]: linux-arm64/LICENSE Sep 4 23:44:38.475184 tar[1957]: linux-arm64/helm Sep 4 23:44:38.393708 ntpd[1933]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 23:44:38.393796 ntpd[1933]: Listen normally on 3 eth0 172.31.31.201:123 Sep 4 23:44:38.393867 ntpd[1933]: Listen normally on 4 lo [::1]:123 Sep 4 23:44:38.393958 ntpd[1933]: bind(21) AF_INET6 fe80::4dc:dff:fe7d:1911%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 23:44:38.393999 ntpd[1933]: unable to create socket on eth0 (5) for fe80::4dc:dff:fe7d:1911%2#123 Sep 4 23:44:38.394028 ntpd[1933]: failed to init interface for address fe80::4dc:dff:fe7d:1911%2 Sep 4 23:44:38.394086 ntpd[1933]: Listening on routing socket on fd #21 for interface updates Sep 4 23:44:38.428255 ntpd[1933]: 
kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:44:38.428309 ntpd[1933]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 23:44:38.508286 extend-filesystems[1979]: resize2fs 1.47.1 (20-May-2024) Sep 4 23:44:38.528623 jq[1966]: true Sep 4 23:44:38.538551 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 23:44:38.539385 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 23:44:38.617020 coreos-metadata[1928]: Sep 04 23:44:38.616 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 23:44:38.619627 coreos-metadata[1928]: Sep 04 23:44:38.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 23:44:38.622690 coreos-metadata[1928]: Sep 04 23:44:38.622 INFO Fetch successful Sep 4 23:44:38.622690 coreos-metadata[1928]: Sep 04 23:44:38.622 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 23:44:38.623231 coreos-metadata[1928]: Sep 04 23:44:38.623 INFO Fetch successful Sep 4 23:44:38.625740 coreos-metadata[1928]: Sep 04 23:44:38.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 23:44:38.626899 coreos-metadata[1928]: Sep 04 23:44:38.626 INFO Fetch successful Sep 4 23:44:38.626899 coreos-metadata[1928]: Sep 04 23:44:38.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 23:44:38.627889 coreos-metadata[1928]: Sep 04 23:44:38.627 INFO Fetch successful Sep 4 23:44:38.627889 coreos-metadata[1928]: Sep 04 23:44:38.627 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 23:44:38.629583 coreos-metadata[1928]: Sep 04 23:44:38.628 INFO Fetch failed with 404: resource not found Sep 4 23:44:38.629583 coreos-metadata[1928]: Sep 04 23:44:38.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 23:44:38.631916 coreos-metadata[1928]: Sep 04 23:44:38.630 INFO Fetch 
successful Sep 4 23:44:38.631916 coreos-metadata[1928]: Sep 04 23:44:38.631 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 23:44:38.632818 coreos-metadata[1928]: Sep 04 23:44:38.632 INFO Fetch successful Sep 4 23:44:38.632818 coreos-metadata[1928]: Sep 04 23:44:38.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 23:44:38.633637 coreos-metadata[1928]: Sep 04 23:44:38.633 INFO Fetch successful Sep 4 23:44:38.634835 coreos-metadata[1928]: Sep 04 23:44:38.634 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 23:44:38.635490 coreos-metadata[1928]: Sep 04 23:44:38.635 INFO Fetch successful Sep 4 23:44:38.636604 coreos-metadata[1928]: Sep 04 23:44:38.635 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 23:44:38.642202 coreos-metadata[1928]: Sep 04 23:44:38.638 INFO Fetch successful Sep 4 23:44:38.679113 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 23:44:38.691966 extend-filesystems[1979]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 23:44:38.691966 extend-filesystems[1979]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 23:44:38.691966 extend-filesystems[1979]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 23:44:38.705472 extend-filesystems[1931]: Resized filesystem in /dev/nvme0n1p9 Sep 4 23:44:38.711626 bash[1998]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:44:38.711119 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 23:44:38.712657 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 23:44:38.735319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 23:44:38.753680 systemd[1]: Starting sshkeys.service... 
Sep 4 23:44:38.766434 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1726) Sep 4 23:44:38.761114 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 23:44:38.765192 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 23:44:38.839165 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 23:44:38.855836 systemd-logind[1939]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 23:44:38.859827 systemd-logind[1939]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 23:44:38.862686 systemd-logind[1939]: New seat seat0. Sep 4 23:44:38.879151 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 23:44:38.883833 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 23:44:38.938927 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 23:44:38.942478 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 23:44:39.090369 coreos-metadata[2041]: Sep 04 23:44:39.090 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 23:44:39.091594 systemd-networkd[1868]: eth0: Gained IPv6LL Sep 4 23:44:39.097451 coreos-metadata[2041]: Sep 04 23:44:39.096 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 23:44:39.100178 coreos-metadata[2041]: Sep 04 23:44:39.099 INFO Fetch successful Sep 4 23:44:39.100178 coreos-metadata[2041]: Sep 04 23:44:39.099 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 23:44:39.101817 coreos-metadata[2041]: Sep 04 23:44:39.101 INFO Fetch successful Sep 4 23:44:39.103098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Sep 4 23:44:39.107174 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 23:44:39.107665 unknown[2041]: wrote ssh authorized keys file for user: core Sep 4 23:44:39.149994 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 23:44:39.173131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:44:39.180926 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 23:44:39.364184 containerd[1963]: time="2025-09-04T23:44:39.362251583Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 4 23:44:39.366368 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 23:44:39.372057 dbus-daemon[1929]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 23:44:39.380175 dbus-daemon[1929]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1958 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 23:44:39.389953 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 23:44:39.397542 update-ssh-keys[2092]: Updated "/home/core/.ssh/authorized_keys" Sep 4 23:44:39.402023 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 23:44:39.410782 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 23:44:39.415716 systemd[1]: Finished sshkeys.service. 
Sep 4 23:44:39.531012 polkitd[2117]: Started polkitd version 121 Sep 4 23:44:39.574197 polkitd[2117]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 23:44:39.574340 polkitd[2117]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 23:44:39.578921 polkitd[2117]: Finished loading, compiling and executing 2 rules Sep 4 23:44:39.584956 dbus-daemon[1929]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 23:44:39.585251 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 23:44:39.604500 polkitd[2117]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: Initializing new seelog logger Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: New Seelog Logger Creation Complete Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 processing appconfig overrides Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.655064 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO Proxy environment variables: Sep 4 23:44:39.663436 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 processing appconfig overrides Sep 4 23:44:39.671438 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.671438 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 23:44:39.671438 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 processing appconfig overrides Sep 4 23:44:39.685560 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.685560 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 23:44:39.685560 amazon-ssm-agent[2076]: 2025/09/04 23:44:39 processing appconfig overrides Sep 4 23:44:39.688295 containerd[1963]: time="2025-09-04T23:44:39.688218865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.709660 containerd[1963]: time="2025-09-04T23:44:39.709550881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:44:39.709660 containerd[1963]: time="2025-09-04T23:44:39.709641997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 23:44:39.709807 containerd[1963]: time="2025-09-04T23:44:39.709681177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 23:44:39.710065 containerd[1963]: time="2025-09-04T23:44:39.710009977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 23:44:39.710164 containerd[1963]: time="2025-09-04T23:44:39.710067829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710223565Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710265769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710677357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710719045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710750773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710777173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.710975281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.711430513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.711721009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.711753493Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 23:44:39.711953 containerd[1963]: time="2025-09-04T23:44:39.711956749Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 23:44:39.712539 containerd[1963]: time="2025-09-04T23:44:39.712060777Z" level=info msg="metadata content store policy set" policy=shared Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721078753Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721183513Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721239877Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721277857Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721311961Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.721645441Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722274865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722592769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722633941Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722670109Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722701933Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722736673Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722766913Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.728442 containerd[1963]: time="2025-09-04T23:44:39.722837821Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.722870689Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.722901601Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.722930437Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.722959957Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723001813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723037333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723070789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723103249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723132505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723177661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723208201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723239113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723268981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729091 containerd[1963]: time="2025-09-04T23:44:39.723302737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.723336217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.723369013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.724193557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.724839529Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.724929289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.725101381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.725357941Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.726663229Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.726855001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.726887953Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.727188013Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.727657585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.727709929Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 23:44:39.729754 containerd[1963]: time="2025-09-04T23:44:39.727763569Z" level=info msg="NRI interface is disabled by configuration." Sep 4 23:44:39.730319 containerd[1963]: time="2025-09-04T23:44:39.727791745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 23:44:39.730383 containerd[1963]: time="2025-09-04T23:44:39.729875881Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 23:44:39.730383 containerd[1963]: time="2025-09-04T23:44:39.730001761Z" level=info msg="Connect containerd service" Sep 4 23:44:39.730383 containerd[1963]: time="2025-09-04T23:44:39.730087141Z" level=info msg="using legacy CRI server" Sep 4 23:44:39.730383 containerd[1963]: time="2025-09-04T23:44:39.730118497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 23:44:39.730383 containerd[1963]: time="2025-09-04T23:44:39.733294909Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 23:44:39.736656 systemd-hostnamed[1958]: Hostname set to (transient) Sep 4 23:44:39.740447 systemd-resolved[1869]: System hostname changed to 'ip-172-31-31-201'. Sep 4 23:44:39.744897 containerd[1963]: time="2025-09-04T23:44:39.744811717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:44:39.752192 containerd[1963]: time="2025-09-04T23:44:39.752058253Z" level=info msg="Start subscribing containerd event" Sep 4 23:44:39.766794 containerd[1963]: time="2025-09-04T23:44:39.766738537Z" level=info msg="Start recovering state" Sep 4 23:44:39.770509 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO http_proxy: Sep 4 23:44:39.770627 containerd[1963]: time="2025-09-04T23:44:39.770419009Z" level=info msg="Start event monitor" Sep 4 23:44:39.770793 containerd[1963]: time="2025-09-04T23:44:39.770472277Z" level=info msg="Start snapshots syncer" Sep 4 23:44:39.770793 containerd[1963]: time="2025-09-04T23:44:39.770732761Z" level=info msg="Start cni network conf syncer for default" Sep 4 23:44:39.771877 containerd[1963]: time="2025-09-04T23:44:39.770762185Z" level=info msg="Start streaming server" Sep 4 23:44:39.772698 containerd[1963]: time="2025-09-04T23:44:39.767004877Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 23:44:39.773342 containerd[1963]: time="2025-09-04T23:44:39.772657057Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 23:44:39.773729 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 23:44:39.782547 containerd[1963]: time="2025-09-04T23:44:39.781709701Z" level=info msg="containerd successfully booted in 0.425494s" Sep 4 23:44:39.874234 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO no_proxy: Sep 4 23:44:39.975494 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO https_proxy: Sep 4 23:44:40.071500 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO Checking if agent identity type OnPrem can be assumed Sep 4 23:44:40.170503 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO Checking if agent identity type EC2 can be assumed Sep 4 23:44:40.270557 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO Agent will take identity from EC2 Sep 4 23:44:40.370705 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:44:40.471897 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:44:40.529467 tar[1957]: linux-arm64/README.md Sep 4 23:44:40.573134 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 23:44:40.569544 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 23:44:40.670349 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 23:44:40.769189 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 23:44:40.869556 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 23:44:40.969808 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Sep 4 23:44:41.065365 sshd_keygen[1971]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 23:44:41.072439 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [Registrar] Starting registrar module Sep 4 23:44:41.087261 amazon-ssm-agent[2076]: 2025-09-04 23:44:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 23:44:41.088649 amazon-ssm-agent[2076]: 2025-09-04 23:44:41 INFO [EC2Identity] EC2 registration was successful. Sep 4 23:44:41.088649 amazon-ssm-agent[2076]: 2025-09-04 23:44:41 INFO [CredentialRefresher] credentialRefresher has started Sep 4 23:44:41.088929 amazon-ssm-agent[2076]: 2025-09-04 23:44:41 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 23:44:41.088929 amazon-ssm-agent[2076]: 2025-09-04 23:44:41 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 23:44:41.111328 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 23:44:41.123003 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 23:44:41.135672 systemd[1]: Started sshd@0-172.31.31.201:22-139.178.89.65:39492.service - OpenSSH per-connection server daemon (139.178.89.65:39492). Sep 4 23:44:41.155333 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 23:44:41.155945 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 23:44:41.168150 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 23:44:41.172569 amazon-ssm-agent[2076]: 2025-09-04 23:44:41 INFO [CredentialRefresher] Next credential rotation will be in 30.433290142933334 minutes Sep 4 23:44:41.214128 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 23:44:41.231635 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 23:44:41.245309 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 23:44:41.249554 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 23:44:41.354645 sshd[2167]: Accepted publickey for core from 139.178.89.65 port 39492 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:41.359883 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:41.370185 ntpd[1933]: Listen normally on 6 eth0 [fe80::4dc:dff:fe7d:1911%2]:123 Sep 4 23:44:41.371864 ntpd[1933]: 4 Sep 23:44:41 ntpd[1933]: Listen normally on 6 eth0 [fe80::4dc:dff:fe7d:1911%2]:123 Sep 4 23:44:41.389524 systemd-logind[1939]: New session 1 of user core. Sep 4 23:44:41.392875 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 23:44:41.406903 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 23:44:41.434545 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 23:44:41.454850 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 23:44:41.472434 (systemd)[2178]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 23:44:41.478057 systemd-logind[1939]: New session c1 of user core. Sep 4 23:44:41.794181 systemd[2178]: Queued start job for default target default.target. Sep 4 23:44:41.805045 systemd[2178]: Created slice app.slice - User Application Slice. Sep 4 23:44:41.805115 systemd[2178]: Reached target paths.target - Paths. Sep 4 23:44:41.805203 systemd[2178]: Reached target timers.target - Timers. Sep 4 23:44:41.808273 systemd[2178]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 23:44:41.830989 systemd[2178]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 23:44:41.831290 systemd[2178]: Reached target sockets.target - Sockets. Sep 4 23:44:41.831472 systemd[2178]: Reached target basic.target - Basic System. Sep 4 23:44:41.831619 systemd[2178]: Reached target default.target - Main User Target. Sep 4 23:44:41.831696 systemd[2178]: Startup finished in 338ms. 
Sep 4 23:44:41.832037 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 23:44:41.849120 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 23:44:42.021819 systemd[1]: Started sshd@1-172.31.31.201:22-139.178.89.65:59182.service - OpenSSH per-connection server daemon (139.178.89.65:59182). Sep 4 23:44:42.120789 amazon-ssm-agent[2076]: 2025-09-04 23:44:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 23:44:42.222921 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 59182 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:42.223487 amazon-ssm-agent[2076]: 2025-09-04 23:44:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Sep 4 23:44:42.227462 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:42.244259 systemd-logind[1939]: New session 2 of user core. Sep 4 23:44:42.254742 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 23:44:42.323295 amazon-ssm-agent[2076]: 2025-09-04 23:44:42 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 23:44:42.390832 sshd[2198]: Connection closed by 139.178.89.65 port 59182 Sep 4 23:44:42.391814 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:42.400581 systemd[1]: sshd@1-172.31.31.201:22-139.178.89.65:59182.service: Deactivated successfully. Sep 4 23:44:42.400732 systemd-logind[1939]: Session 2 logged out. Waiting for processes to exit. Sep 4 23:44:42.406255 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 23:44:42.409017 systemd-logind[1939]: Removed session 2. Sep 4 23:44:42.439107 systemd[1]: Started sshd@2-172.31.31.201:22-139.178.89.65:59190.service - OpenSSH per-connection server daemon (139.178.89.65:59190). 
Sep 4 23:44:42.645699 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 59190 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:42.647944 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:42.659552 systemd-logind[1939]: New session 3 of user core. Sep 4 23:44:42.665817 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 23:44:42.796008 sshd[2210]: Connection closed by 139.178.89.65 port 59190 Sep 4 23:44:42.796777 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:42.804047 systemd[1]: sshd@2-172.31.31.201:22-139.178.89.65:59190.service: Deactivated successfully. Sep 4 23:44:42.807756 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 23:44:42.809148 systemd-logind[1939]: Session 3 logged out. Waiting for processes to exit. Sep 4 23:44:42.811910 systemd-logind[1939]: Removed session 3. Sep 4 23:44:42.982464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:44:42.986547 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 23:44:42.989753 systemd[1]: Startup finished in 1.089s (kernel) + 9.237s (initrd) + 11.012s (userspace) = 21.338s. Sep 4 23:44:43.012130 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:44:44.321063 kubelet[2220]: E0904 23:44:44.320961 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:44:44.326201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:44:44.326650 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 23:44:44.327610 systemd[1]: kubelet.service: Consumed 1.474s CPU time, 260M memory peak. Sep 4 23:44:52.839529 systemd[1]: Started sshd@3-172.31.31.201:22-139.178.89.65:34948.service - OpenSSH per-connection server daemon (139.178.89.65:34948). Sep 4 23:44:53.027772 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 34948 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:53.030227 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:53.040807 systemd-logind[1939]: New session 4 of user core. Sep 4 23:44:53.047668 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 23:44:53.170985 sshd[2234]: Connection closed by 139.178.89.65 port 34948 Sep 4 23:44:53.171831 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:53.178252 systemd[1]: sshd@3-172.31.31.201:22-139.178.89.65:34948.service: Deactivated successfully. Sep 4 23:44:53.182550 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 23:44:53.184802 systemd-logind[1939]: Session 4 logged out. Waiting for processes to exit. Sep 4 23:44:53.186769 systemd-logind[1939]: Removed session 4. Sep 4 23:44:53.219874 systemd[1]: Started sshd@4-172.31.31.201:22-139.178.89.65:34964.service - OpenSSH per-connection server daemon (139.178.89.65:34964). Sep 4 23:44:53.398315 sshd[2240]: Accepted publickey for core from 139.178.89.65 port 34964 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:53.400738 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:53.411735 systemd-logind[1939]: New session 5 of user core. Sep 4 23:44:53.417708 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 4 23:44:53.535417 sshd[2242]: Connection closed by 139.178.89.65 port 34964 Sep 4 23:44:53.536262 sshd-session[2240]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:53.542332 systemd[1]: sshd@4-172.31.31.201:22-139.178.89.65:34964.service: Deactivated successfully. Sep 4 23:44:53.545945 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 23:44:53.547426 systemd-logind[1939]: Session 5 logged out. Waiting for processes to exit. Sep 4 23:44:53.549270 systemd-logind[1939]: Removed session 5. Sep 4 23:44:53.581869 systemd[1]: Started sshd@5-172.31.31.201:22-139.178.89.65:34968.service - OpenSSH per-connection server daemon (139.178.89.65:34968). Sep 4 23:44:53.763161 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 34968 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:53.765576 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:53.776841 systemd-logind[1939]: New session 6 of user core. Sep 4 23:44:53.783652 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 23:44:53.910419 sshd[2250]: Connection closed by 139.178.89.65 port 34968 Sep 4 23:44:53.911723 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:53.917737 systemd[1]: sshd@5-172.31.31.201:22-139.178.89.65:34968.service: Deactivated successfully. Sep 4 23:44:53.921604 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 23:44:53.924375 systemd-logind[1939]: Session 6 logged out. Waiting for processes to exit. Sep 4 23:44:53.926360 systemd-logind[1939]: Removed session 6. Sep 4 23:44:53.961894 systemd[1]: Started sshd@6-172.31.31.201:22-139.178.89.65:34984.service - OpenSSH per-connection server daemon (139.178.89.65:34984). 
Sep 4 23:44:54.141061 sshd[2256]: Accepted publickey for core from 139.178.89.65 port 34984 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:54.143424 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:54.153791 systemd-logind[1939]: New session 7 of user core. Sep 4 23:44:54.159705 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 23:44:54.278051 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 23:44:54.278691 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:44:54.295050 sudo[2259]: pam_unix(sudo:session): session closed for user root Sep 4 23:44:54.320420 sshd[2258]: Connection closed by 139.178.89.65 port 34984 Sep 4 23:44:54.319206 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:54.325680 systemd[1]: sshd@6-172.31.31.201:22-139.178.89.65:34984.service: Deactivated successfully. Sep 4 23:44:54.325779 systemd-logind[1939]: Session 7 logged out. Waiting for processes to exit. Sep 4 23:44:54.329002 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 23:44:54.333637 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 23:44:54.343777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:44:54.359973 systemd-logind[1939]: Removed session 7. Sep 4 23:44:54.369948 systemd[1]: Started sshd@7-172.31.31.201:22-139.178.89.65:34992.service - OpenSSH per-connection server daemon (139.178.89.65:34992). Sep 4 23:44:54.553167 sshd[2267]: Accepted publickey for core from 139.178.89.65 port 34992 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:54.555999 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:54.567526 systemd-logind[1939]: New session 8 of user core. 
Sep 4 23:44:54.579689 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 23:44:54.683757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:44:54.690917 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 23:44:54.691567 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:44:54.692934 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:44:54.703144 sudo[2277]: pam_unix(sudo:session): session closed for user root Sep 4 23:44:54.715648 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 23:44:54.716854 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:44:54.742065 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 23:44:54.857063 kubelet[2276]: E0904 23:44:54.855854 2276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:44:54.864729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:44:54.865057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:44:54.868628 systemd[1]: kubelet.service: Consumed 361ms CPU time, 107M memory peak. Sep 4 23:44:54.886081 augenrules[2306]: No rules Sep 4 23:44:54.888795 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 23:44:54.889671 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 4 23:44:54.891620 sudo[2275]: pam_unix(sudo:session): session closed for user root Sep 4 23:44:54.915063 sshd[2270]: Connection closed by 139.178.89.65 port 34992 Sep 4 23:44:54.916056 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Sep 4 23:44:54.922323 systemd-logind[1939]: Session 8 logged out. Waiting for processes to exit. Sep 4 23:44:54.923610 systemd[1]: sshd@7-172.31.31.201:22-139.178.89.65:34992.service: Deactivated successfully. Sep 4 23:44:54.927166 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 23:44:54.929751 systemd-logind[1939]: Removed session 8. Sep 4 23:44:54.956908 systemd[1]: Started sshd@8-172.31.31.201:22-139.178.89.65:35008.service - OpenSSH per-connection server daemon (139.178.89.65:35008). Sep 4 23:44:55.136676 sshd[2315]: Accepted publickey for core from 139.178.89.65 port 35008 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:44:55.139051 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:44:55.146453 systemd-logind[1939]: New session 9 of user core. Sep 4 23:44:55.156661 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 23:44:55.259460 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 23:44:55.260088 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 23:44:55.805907 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 23:44:55.818903 (dockerd)[2336]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 23:44:56.212690 dockerd[2336]: time="2025-09-04T23:44:56.212120384Z" level=info msg="Starting up" Sep 4 23:44:56.344023 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport625273609-merged.mount: Deactivated successfully. 
Sep 4 23:44:56.378637 dockerd[2336]: time="2025-09-04T23:44:56.378559671Z" level=info msg="Loading containers: start." Sep 4 23:44:56.631471 kernel: Initializing XFRM netlink socket Sep 4 23:44:56.663462 (udev-worker)[2358]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:44:56.752732 systemd-networkd[1868]: docker0: Link UP Sep 4 23:44:56.796790 dockerd[2336]: time="2025-09-04T23:44:56.796714499Z" level=info msg="Loading containers: done." Sep 4 23:44:56.820599 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck415573543-merged.mount: Deactivated successfully. Sep 4 23:44:56.829116 dockerd[2336]: time="2025-09-04T23:44:56.829041086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 23:44:56.829322 dockerd[2336]: time="2025-09-04T23:44:56.829201714Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 4 23:44:56.829593 dockerd[2336]: time="2025-09-04T23:44:56.829540704Z" level=info msg="Daemon has completed initialization" Sep 4 23:44:56.895521 dockerd[2336]: time="2025-09-04T23:44:56.894462309Z" level=info msg="API listen on /run/docker.sock" Sep 4 23:44:56.895005 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 23:44:58.357255 containerd[1963]: time="2025-09-04T23:44:58.356914368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 4 23:44:58.958733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363288164.mount: Deactivated successfully. 
Sep 4 23:45:00.420985 containerd[1963]: time="2025-09-04T23:45:00.419726581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:00.422084 containerd[1963]: time="2025-09-04T23:45:00.422004587Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 4 23:45:00.424176 containerd[1963]: time="2025-09-04T23:45:00.424110668Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:00.432675 containerd[1963]: time="2025-09-04T23:45:00.432599930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:00.435080 containerd[1963]: time="2025-09-04T23:45:00.435032238Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.078061718s" Sep 4 23:45:00.435350 containerd[1963]: time="2025-09-04T23:45:00.435314703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 4 23:45:00.437803 containerd[1963]: time="2025-09-04T23:45:00.437733684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 4 23:45:01.899341 containerd[1963]: time="2025-09-04T23:45:01.899258111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:01.901493 containerd[1963]: time="2025-09-04T23:45:01.901385262Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 4 23:45:01.903737 containerd[1963]: time="2025-09-04T23:45:01.903665454Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:01.909784 containerd[1963]: time="2025-09-04T23:45:01.909702455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:01.912108 containerd[1963]: time="2025-09-04T23:45:01.912033071Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.474232214s" Sep 4 23:45:01.912639 containerd[1963]: time="2025-09-04T23:45:01.912317950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 4 23:45:01.913546 containerd[1963]: time="2025-09-04T23:45:01.913468221Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 4 23:45:03.148422 containerd[1963]: time="2025-09-04T23:45:03.146482232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:03.149003 containerd[1963]: time="2025-09-04T23:45:03.148359418Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 4 23:45:03.149301 containerd[1963]: time="2025-09-04T23:45:03.149263878Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:03.154477 containerd[1963]: time="2025-09-04T23:45:03.154364651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:03.157519 containerd[1963]: time="2025-09-04T23:45:03.157469571Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.243928834s" Sep 4 23:45:03.157693 containerd[1963]: time="2025-09-04T23:45:03.157664993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 4 23:45:03.158541 containerd[1963]: time="2025-09-04T23:45:03.158475146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 4 23:45:04.352997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154239854.mount: Deactivated successfully. 
Sep 4 23:45:04.975498 containerd[1963]: time="2025-09-04T23:45:04.975431911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:04.983467 containerd[1963]: time="2025-09-04T23:45:04.983347538Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 4 23:45:04.991707 containerd[1963]: time="2025-09-04T23:45:04.991621581Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:05.003949 containerd[1963]: time="2025-09-04T23:45:05.003862227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:05.005978 containerd[1963]: time="2025-09-04T23:45:05.005913680Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.847207924s" Sep 4 23:45:05.006100 containerd[1963]: time="2025-09-04T23:45:05.005974887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 4 23:45:05.007152 containerd[1963]: time="2025-09-04T23:45:05.006740905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 4 23:45:05.026183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 23:45:05.034718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 23:45:05.348730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:05.350806 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:05.426564 kubelet[2600]: E0904 23:45:05.426469 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:05.431367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:05.431747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:05.432622 systemd[1]: kubelet.service: Consumed 282ms CPU time, 104.7M memory peak. Sep 4 23:45:05.631738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227343196.mount: Deactivated successfully. 
Sep 4 23:45:06.912929 containerd[1963]: time="2025-09-04T23:45:06.912835450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:06.915269 containerd[1963]: time="2025-09-04T23:45:06.915165790Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 4 23:45:06.917609 containerd[1963]: time="2025-09-04T23:45:06.917525906Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:06.924051 containerd[1963]: time="2025-09-04T23:45:06.923961314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:06.926455 containerd[1963]: time="2025-09-04T23:45:06.926374064Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.919577991s" Sep 4 23:45:06.926965 containerd[1963]: time="2025-09-04T23:45:06.926585898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 4 23:45:06.927637 containerd[1963]: time="2025-09-04T23:45:06.927591052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 23:45:07.461077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464037373.mount: Deactivated successfully. 
Sep 4 23:45:07.474871 containerd[1963]: time="2025-09-04T23:45:07.473341565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:07.475279 containerd[1963]: time="2025-09-04T23:45:07.475216009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 4 23:45:07.477753 containerd[1963]: time="2025-09-04T23:45:07.477686656Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:07.484442 containerd[1963]: time="2025-09-04T23:45:07.482923298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:07.484754 containerd[1963]: time="2025-09-04T23:45:07.484712832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 557.064131ms" Sep 4 23:45:07.484892 containerd[1963]: time="2025-09-04T23:45:07.484862366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 4 23:45:07.486067 containerd[1963]: time="2025-09-04T23:45:07.485996618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 4 23:45:08.075711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989436049.mount: Deactivated successfully. Sep 4 23:45:09.754098 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 4 23:45:10.163953 containerd[1963]: time="2025-09-04T23:45:10.163871893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:10.170718 containerd[1963]: time="2025-09-04T23:45:10.170620921Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 4 23:45:10.177680 containerd[1963]: time="2025-09-04T23:45:10.177590341Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:10.188218 containerd[1963]: time="2025-09-04T23:45:10.188145697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:10.191642 containerd[1963]: time="2025-09-04T23:45:10.191445541Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.705197006s" Sep 4 23:45:10.191642 containerd[1963]: time="2025-09-04T23:45:10.191505661Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 4 23:45:15.526510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 23:45:15.536564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:15.855795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 23:45:15.858927 (kubelet)[2750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 23:45:15.928145 kubelet[2750]: E0904 23:45:15.928058 2750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 23:45:15.932558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 23:45:15.933792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 23:45:15.936643 systemd[1]: kubelet.service: Consumed 270ms CPU time, 105.1M memory peak. Sep 4 23:45:18.399599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:18.400176 systemd[1]: kubelet.service: Consumed 270ms CPU time, 105.1M memory peak. Sep 4 23:45:18.412868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:18.471007 systemd[1]: Reload requested from client PID 2764 ('systemctl') (unit session-9.scope)... Sep 4 23:45:18.471038 systemd[1]: Reloading... Sep 4 23:45:18.755292 zram_generator::config[2818]: No configuration found. Sep 4 23:45:18.970170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:19.202798 systemd[1]: Reloading finished in 731 ms. Sep 4 23:45:19.298352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:19.307016 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:19.310746 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 4 23:45:19.311275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:19.311363 systemd[1]: kubelet.service: Consumed 222ms CPU time, 94.8M memory peak. Sep 4 23:45:19.317878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:19.621678 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:19.622600 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:19.697692 kubelet[2874]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:19.699447 kubelet[2874]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 23:45:19.699447 kubelet[2874]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 23:45:19.699447 kubelet[2874]: I0904 23:45:19.698374 2874 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:19.931522 kubelet[2874]: I0904 23:45:19.931375 2874 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:45:19.931691 kubelet[2874]: I0904 23:45:19.931670 2874 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:19.932386 kubelet[2874]: I0904 23:45:19.932359 2874 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:45:19.971662 kubelet[2874]: E0904 23:45:19.971585 2874 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 4 23:45:19.973961 kubelet[2874]: I0904 23:45:19.973895 2874 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:19.989467 kubelet[2874]: E0904 23:45:19.989405 2874 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:19.989467 kubelet[2874]: I0904 23:45:19.989465 2874 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:45:19.994996 kubelet[2874]: I0904 23:45:19.994958 2874 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:45:19.998458 kubelet[2874]: I0904 23:45:19.997603 2874 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:19.998458 kubelet[2874]: I0904 23:45:19.997660 2874 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-201","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:45:19.998458 kubelet[2874]: I0904 23:45:19.998049 2874 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
23:45:19.998458 kubelet[2874]: I0904 23:45:19.998071 2874 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:45:19.998458 kubelet[2874]: I0904 23:45:19.998431 2874 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:20.006794 kubelet[2874]: I0904 23:45:20.006728 2874 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:45:20.006794 kubelet[2874]: I0904 23:45:20.006778 2874 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:20.008465 kubelet[2874]: I0904 23:45:20.006829 2874 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:45:20.008465 kubelet[2874]: I0904 23:45:20.006864 2874 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:20.013729 kubelet[2874]: E0904 23:45:20.013625 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:45:20.014416 kubelet[2874]: E0904 23:45:20.014346 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-201&limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:45:20.015004 kubelet[2874]: I0904 23:45:20.014968 2874 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:20.016227 kubelet[2874]: I0904 23:45:20.016168 2874 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:45:20.018452 
kubelet[2874]: W0904 23:45:20.016431 2874 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 23:45:20.021675 kubelet[2874]: I0904 23:45:20.021518 2874 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:20.021675 kubelet[2874]: I0904 23:45:20.021599 2874 server.go:1289] "Started kubelet" Sep 4 23:45:20.023847 kubelet[2874]: I0904 23:45:20.023792 2874 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 23:45:20.026864 kubelet[2874]: I0904 23:45:20.026759 2874 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:20.027370 kubelet[2874]: I0904 23:45:20.027320 2874 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:20.027549 kubelet[2874]: I0904 23:45:20.027526 2874 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:45:20.033566 kubelet[2874]: I0904 23:45:20.033531 2874 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:20.034428 kubelet[2874]: E0904 23:45:20.032263 2874 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.201:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.201:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-201.1862390a10c27c32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-201,UID:ip-172-31-31-201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-201,},FirstTimestamp:2025-09-04 23:45:20.021552178 +0000 UTC m=+0.388010559,LastTimestamp:2025-09-04 23:45:20.021552178 +0000 UTC m=+0.388010559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-201,}" Sep 4 23:45:20.036318 kubelet[2874]: I0904 23:45:20.035565 2874 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:20.040099 kubelet[2874]: E0904 23:45:20.040044 2874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-201\" not found" Sep 4 23:45:20.040266 kubelet[2874]: I0904 23:45:20.040129 2874 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:20.040843 kubelet[2874]: I0904 23:45:20.040796 2874 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:20.041016 kubelet[2874]: I0904 23:45:20.040981 2874 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:45:20.042056 kubelet[2874]: E0904 23:45:20.041995 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:45:20.044005 kubelet[2874]: E0904 23:45:20.043904 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": dial tcp 172.31.31.201:6443: connect: connection refused" interval="200ms" Sep 4 23:45:20.044265 kubelet[2874]: E0904 23:45:20.044219 2874 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:45:20.044619 kubelet[2874]: I0904 23:45:20.044580 2874 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:45:20.045019 kubelet[2874]: I0904 23:45:20.044727 2874 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:45:20.047911 kubelet[2874]: I0904 23:45:20.047833 2874 factory.go:223] Registration of the containerd container factory successfully Sep 4 23:45:20.078445 kubelet[2874]: I0904 23:45:20.078071 2874 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:45:20.078445 kubelet[2874]: I0904 23:45:20.078103 2874 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:45:20.078445 kubelet[2874]: I0904 23:45:20.078131 2874 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:20.084425 kubelet[2874]: I0904 23:45:20.083830 2874 policy_none.go:49] "None policy: Start" Sep 4 23:45:20.084671 kubelet[2874]: I0904 23:45:20.084646 2874 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:45:20.084982 kubelet[2874]: I0904 23:45:20.084962 2874 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:45:20.092610 kubelet[2874]: I0904 23:45:20.092546 2874 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 4 23:45:20.097216 kubelet[2874]: I0904 23:45:20.097164 2874 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 4 23:45:20.097216 kubelet[2874]: I0904 23:45:20.097208 2874 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:45:20.098581 kubelet[2874]: I0904 23:45:20.097240 2874 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 23:45:20.098581 kubelet[2874]: I0904 23:45:20.097256 2874 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:45:20.098581 kubelet[2874]: E0904 23:45:20.097323 2874 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:45:20.101586 kubelet[2874]: E0904 23:45:20.101535 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:45:20.111077 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 23:45:20.127346 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 23:45:20.134619 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 23:45:20.140439 kubelet[2874]: E0904 23:45:20.140307 2874 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-201\" not found" Sep 4 23:45:20.144470 kubelet[2874]: E0904 23:45:20.144201 2874 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:45:20.144619 kubelet[2874]: I0904 23:45:20.144514 2874 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:45:20.144619 kubelet[2874]: I0904 23:45:20.144535 2874 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:20.146933 kubelet[2874]: E0904 23:45:20.146654 2874 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 23:45:20.146933 kubelet[2874]: E0904 23:45:20.146743 2874 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-201\" not found" Sep 4 23:45:20.147681 kubelet[2874]: I0904 23:45:20.147086 2874 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:20.218601 systemd[1]: Created slice kubepods-burstable-pod3bfa20b07f2074f16af905145fec2c7b.slice - libcontainer container kubepods-burstable-pod3bfa20b07f2074f16af905145fec2c7b.slice. Sep 4 23:45:20.240883 kubelet[2874]: E0904 23:45:20.240383 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:20.242170 kubelet[2874]: I0904 23:45:20.242133 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:20.242338 kubelet[2874]: I0904 23:45:20.242311 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:20.243082 kubelet[2874]: I0904 23:45:20.242498 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " 
pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:20.243082 kubelet[2874]: I0904 23:45:20.242553 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:20.243082 kubelet[2874]: I0904 23:45:20.242600 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:20.243082 kubelet[2874]: I0904 23:45:20.242646 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1943241386cbc2d6c210ede02eccdedf-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-201\" (UID: \"1943241386cbc2d6c210ede02eccdedf\") " pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:20.243082 kubelet[2874]: I0904 23:45:20.242684 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-ca-certs\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:20.243388 kubelet[2874]: I0904 23:45:20.242722 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:20.243388 kubelet[2874]: I0904 23:45:20.242758 2874 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:20.245242 kubelet[2874]: E0904 23:45:20.245172 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": dial tcp 172.31.31.201:6443: connect: connection refused" interval="400ms" Sep 4 23:45:20.247629 kubelet[2874]: I0904 23:45:20.247573 2874 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:20.248303 kubelet[2874]: E0904 23:45:20.248257 2874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.201:6443/api/v1/nodes\": dial tcp 172.31.31.201:6443: connect: connection refused" node="ip-172-31-31-201" Sep 4 23:45:20.251266 systemd[1]: Created slice kubepods-burstable-pod958a9aff5bbacc6b5c3735543e49843e.slice - libcontainer container kubepods-burstable-pod958a9aff5bbacc6b5c3735543e49843e.slice. Sep 4 23:45:20.262364 kubelet[2874]: E0904 23:45:20.262029 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:20.270096 systemd[1]: Created slice kubepods-burstable-pod1943241386cbc2d6c210ede02eccdedf.slice - libcontainer container kubepods-burstable-pod1943241386cbc2d6c210ede02eccdedf.slice. 
Sep 4 23:45:20.273764 kubelet[2874]: E0904 23:45:20.273707 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:20.451838 kubelet[2874]: I0904 23:45:20.451325 2874 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:20.451838 kubelet[2874]: E0904 23:45:20.451793 2874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.201:6443/api/v1/nodes\": dial tcp 172.31.31.201:6443: connect: connection refused" node="ip-172-31-31-201" Sep 4 23:45:20.542611 containerd[1963]: time="2025-09-04T23:45:20.542200393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-201,Uid:3bfa20b07f2074f16af905145fec2c7b,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:20.564311 containerd[1963]: time="2025-09-04T23:45:20.564231877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-201,Uid:958a9aff5bbacc6b5c3735543e49843e,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:20.576313 containerd[1963]: time="2025-09-04T23:45:20.575815765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-201,Uid:1943241386cbc2d6c210ede02eccdedf,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:20.646626 kubelet[2874]: E0904 23:45:20.646576 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": dial tcp 172.31.31.201:6443: connect: connection refused" interval="800ms" Sep 4 23:45:20.855008 kubelet[2874]: I0904 23:45:20.854857 2874 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:20.855588 kubelet[2874]: E0904 23:45:20.855297 2874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://172.31.31.201:6443/api/v1/nodes\": dial tcp 172.31.31.201:6443: connect: connection refused" node="ip-172-31-31-201" Sep 4 23:45:21.012134 kubelet[2874]: E0904 23:45:21.012069 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 4 23:45:21.015265 kubelet[2874]: E0904 23:45:21.015195 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 4 23:45:21.083925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108174135.mount: Deactivated successfully. 
Sep 4 23:45:21.116749 containerd[1963]: time="2025-09-04T23:45:21.115556424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:21.121861 containerd[1963]: time="2025-09-04T23:45:21.121774512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 23:45:21.139143 containerd[1963]: time="2025-09-04T23:45:21.139041168Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:21.144068 kubelet[2874]: E0904 23:45:21.144011 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 4 23:45:21.145766 containerd[1963]: time="2025-09-04T23:45:21.145683948Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:21.148106 containerd[1963]: time="2025-09-04T23:45:21.148043196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:21.149925 containerd[1963]: time="2025-09-04T23:45:21.149779920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:21.153283 containerd[1963]: time="2025-09-04T23:45:21.153223956Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 23:45:21.155507 containerd[1963]: time="2025-09-04T23:45:21.155168256Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 612.856515ms" Sep 4 23:45:21.155507 containerd[1963]: time="2025-09-04T23:45:21.155340360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 23:45:21.167314 containerd[1963]: time="2025-09-04T23:45:21.167257836Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 602.912739ms" Sep 4 23:45:21.168032 containerd[1963]: time="2025-09-04T23:45:21.167878788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.950895ms" Sep 4 23:45:21.202626 kubelet[2874]: E0904 23:45:21.202558 2874 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-201&limit=500&resourceVersion=0\": dial tcp 172.31.31.201:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 4 23:45:21.359068 containerd[1963]: time="2025-09-04T23:45:21.358333357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:21.359068 containerd[1963]: time="2025-09-04T23:45:21.358519225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:21.359068 containerd[1963]: time="2025-09-04T23:45:21.358557949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.359068 containerd[1963]: time="2025-09-04T23:45:21.358730029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.363058 containerd[1963]: time="2025-09-04T23:45:21.362736265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:21.363058 containerd[1963]: time="2025-09-04T23:45:21.362915569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:21.363431 containerd[1963]: time="2025-09-04T23:45:21.362989909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.364806 containerd[1963]: time="2025-09-04T23:45:21.364648309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:21.364806 containerd[1963]: time="2025-09-04T23:45:21.364741273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:21.365098 containerd[1963]: time="2025-09-04T23:45:21.364887253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.366008 containerd[1963]: time="2025-09-04T23:45:21.365873389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.366535 containerd[1963]: time="2025-09-04T23:45:21.366259405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:21.407414 systemd[1]: Started cri-containerd-a3f593262a9cdd82cfca5b1b3433f2fd170118455adba227adbc7eeeb6630bee.scope - libcontainer container a3f593262a9cdd82cfca5b1b3433f2fd170118455adba227adbc7eeeb6630bee. Sep 4 23:45:21.427942 systemd[1]: Started cri-containerd-c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558.scope - libcontainer container c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558. Sep 4 23:45:21.447720 kubelet[2874]: E0904 23:45:21.447648 2874 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": dial tcp 172.31.31.201:6443: connect: connection refused" interval="1.6s" Sep 4 23:45:21.448722 systemd[1]: Started cri-containerd-d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4.scope - libcontainer container d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4. 
Sep 4 23:45:21.514446 containerd[1963]: time="2025-09-04T23:45:21.512754422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-201,Uid:3bfa20b07f2074f16af905145fec2c7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3f593262a9cdd82cfca5b1b3433f2fd170118455adba227adbc7eeeb6630bee\"" Sep 4 23:45:21.527829 containerd[1963]: time="2025-09-04T23:45:21.527762462Z" level=info msg="CreateContainer within sandbox \"a3f593262a9cdd82cfca5b1b3433f2fd170118455adba227adbc7eeeb6630bee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 23:45:21.569174 containerd[1963]: time="2025-09-04T23:45:21.568701134Z" level=info msg="CreateContainer within sandbox \"a3f593262a9cdd82cfca5b1b3433f2fd170118455adba227adbc7eeeb6630bee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"440e33860b8c061b0ff997853f7666f4046609c5b5391396b5425f46caa05f09\"" Sep 4 23:45:21.573664 containerd[1963]: time="2025-09-04T23:45:21.573430526Z" level=info msg="StartContainer for \"440e33860b8c061b0ff997853f7666f4046609c5b5391396b5425f46caa05f09\"" Sep 4 23:45:21.576833 containerd[1963]: time="2025-09-04T23:45:21.576784706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-201,Uid:958a9aff5bbacc6b5c3735543e49843e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4\"" Sep 4 23:45:21.583606 containerd[1963]: time="2025-09-04T23:45:21.583323698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-201,Uid:1943241386cbc2d6c210ede02eccdedf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558\"" Sep 4 23:45:21.591965 containerd[1963]: time="2025-09-04T23:45:21.591814970Z" level=info msg="CreateContainer within sandbox \"d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 23:45:21.595501 containerd[1963]: time="2025-09-04T23:45:21.595174574Z" level=info msg="CreateContainer within sandbox \"c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 23:45:21.630941 containerd[1963]: time="2025-09-04T23:45:21.630883706Z" level=info msg="CreateContainer within sandbox \"c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b\"" Sep 4 23:45:21.633240 containerd[1963]: time="2025-09-04T23:45:21.632713370Z" level=info msg="StartContainer for \"24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b\"" Sep 4 23:45:21.641696 systemd[1]: Started cri-containerd-440e33860b8c061b0ff997853f7666f4046609c5b5391396b5425f46caa05f09.scope - libcontainer container 440e33860b8c061b0ff997853f7666f4046609c5b5391396b5425f46caa05f09. 
Sep 4 23:45:21.643144 containerd[1963]: time="2025-09-04T23:45:21.642953150Z" level=info msg="CreateContainer within sandbox \"d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d\"" Sep 4 23:45:21.644086 containerd[1963]: time="2025-09-04T23:45:21.644043038Z" level=info msg="StartContainer for \"8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d\"" Sep 4 23:45:21.663602 kubelet[2874]: I0904 23:45:21.663368 2874 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:21.666432 kubelet[2874]: E0904 23:45:21.665207 2874 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.201:6443/api/v1/nodes\": dial tcp 172.31.31.201:6443: connect: connection refused" node="ip-172-31-31-201" Sep 4 23:45:21.719804 systemd[1]: Started cri-containerd-24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b.scope - libcontainer container 24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b. Sep 4 23:45:21.764753 systemd[1]: Started cri-containerd-8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d.scope - libcontainer container 8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d. 
Sep 4 23:45:21.775429 containerd[1963]: time="2025-09-04T23:45:21.774543135Z" level=info msg="StartContainer for \"440e33860b8c061b0ff997853f7666f4046609c5b5391396b5425f46caa05f09\" returns successfully" Sep 4 23:45:21.877961 containerd[1963]: time="2025-09-04T23:45:21.877661979Z" level=info msg="StartContainer for \"24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b\" returns successfully" Sep 4 23:45:21.897080 containerd[1963]: time="2025-09-04T23:45:21.896975620Z" level=info msg="StartContainer for \"8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d\" returns successfully" Sep 4 23:45:22.112937 kubelet[2874]: E0904 23:45:22.112886 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:22.120944 kubelet[2874]: E0904 23:45:22.120892 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:22.126982 kubelet[2874]: E0904 23:45:22.126932 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:23.133441 kubelet[2874]: E0904 23:45:23.131036 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:23.136435 kubelet[2874]: E0904 23:45:23.135025 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:23.136435 kubelet[2874]: E0904 23:45:23.136004 2874 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 
23:45:23.268623 kubelet[2874]: I0904 23:45:23.268571 2874 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:23.654336 update_engine[1941]: I20250904 23:45:23.653434 1941 update_attempter.cc:509] Updating boot flags... Sep 4 23:45:23.782562 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3166) Sep 4 23:45:27.568991 kubelet[2874]: E0904 23:45:27.568921 2874 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-201\" not found" node="ip-172-31-31-201" Sep 4 23:45:27.764693 kubelet[2874]: E0904 23:45:27.764470 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-201.1862390a10c27c32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-201,UID:ip-172-31-31-201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-201,},FirstTimestamp:2025-09-04 23:45:20.021552178 +0000 UTC m=+0.388010559,LastTimestamp:2025-09-04 23:45:20.021552178 +0000 UTC m=+0.388010559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-201,}" Sep 4 23:45:27.820858 kubelet[2874]: I0904 23:45:27.820050 2874 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-201" Sep 4 23:45:27.820858 kubelet[2874]: E0904 23:45:27.820116 2874 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-201\": node \"ip-172-31-31-201\" not found" Sep 4 23:45:27.832537 kubelet[2874]: E0904 23:45:27.832088 2874 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-31-201.1862390a121bd6f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-201,UID:ip-172-31-31-201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-201,},FirstTimestamp:2025-09-04 23:45:20.04418533 +0000 UTC m=+0.410643699,LastTimestamp:2025-09-04 23:45:20.04418533 +0000 UTC m=+0.410643699,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-201,}" Sep 4 23:45:27.846437 kubelet[2874]: I0904 23:45:27.843205 2874 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:27.856661 kubelet[2874]: E0904 23:45:27.856603 2874 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-201\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:27.856661 kubelet[2874]: I0904 23:45:27.856653 2874 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:27.860181 kubelet[2874]: E0904 23:45:27.859825 2874 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-201\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:27.860181 kubelet[2874]: I0904 23:45:27.859875 2874 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:27.863031 kubelet[2874]: E0904 23:45:27.862982 2874 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-201\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:28.015899 kubelet[2874]: I0904 23:45:28.015472 2874 apiserver.go:52] "Watching apiserver" Sep 4 23:45:28.041513 kubelet[2874]: I0904 23:45:28.041474 2874 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:45:30.128034 systemd[1]: Reload requested from client PID 3254 ('systemctl') (unit session-9.scope)... Sep 4 23:45:30.128073 systemd[1]: Reloading... Sep 4 23:45:30.393452 zram_generator::config[3308]: No configuration found. Sep 4 23:45:30.687537 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 23:45:31.007966 systemd[1]: Reloading finished in 878 ms. Sep 4 23:45:31.063767 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:31.082117 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 23:45:31.082721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:31.082827 systemd[1]: kubelet.service: Consumed 1.183s CPU time, 129.7M memory peak. Sep 4 23:45:31.090344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 23:45:31.466698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 23:45:31.482345 (kubelet)[3364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 23:45:31.570616 kubelet[3364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:31.571724 kubelet[3364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 4 23:45:31.571724 kubelet[3364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 23:45:31.571724 kubelet[3364]: I0904 23:45:31.571326 3364 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 23:45:31.596878 kubelet[3364]: I0904 23:45:31.596831 3364 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 23:45:31.597166 kubelet[3364]: I0904 23:45:31.597094 3364 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 23:45:31.598042 kubelet[3364]: I0904 23:45:31.597907 3364 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 23:45:31.601427 kubelet[3364]: I0904 23:45:31.601053 3364 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 4 23:45:31.607080 kubelet[3364]: I0904 23:45:31.607039 3364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 23:45:31.614654 kubelet[3364]: E0904 23:45:31.614527 3364 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 4 23:45:31.614654 kubelet[3364]: I0904 23:45:31.614649 3364 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 4 23:45:31.621504 kubelet[3364]: I0904 23:45:31.621447 3364 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 23:45:31.623313 kubelet[3364]: I0904 23:45:31.623233 3364 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 23:45:31.625734 kubelet[3364]: I0904 23:45:31.623303 3364 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-201","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 23:45:31.625988 kubelet[3364]: I0904 23:45:31.625754 3364 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
23:45:31.625988 kubelet[3364]: I0904 23:45:31.625781 3364 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 23:45:31.625988 kubelet[3364]: I0904 23:45:31.625872 3364 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:31.627460 kubelet[3364]: I0904 23:45:31.626327 3364 kubelet.go:480] "Attempting to sync node with API server" Sep 4 23:45:31.627460 kubelet[3364]: I0904 23:45:31.626375 3364 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 23:45:31.627460 kubelet[3364]: I0904 23:45:31.626462 3364 kubelet.go:386] "Adding apiserver pod source" Sep 4 23:45:31.627460 kubelet[3364]: I0904 23:45:31.626494 3364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 23:45:31.634908 sudo[3378]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 23:45:31.637280 sudo[3378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 23:45:31.658705 kubelet[3364]: I0904 23:45:31.658649 3364 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 4 23:45:31.660611 kubelet[3364]: I0904 23:45:31.659685 3364 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 23:45:31.663826 kubelet[3364]: I0904 23:45:31.663597 3364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 23:45:31.663826 kubelet[3364]: I0904 23:45:31.663670 3364 server.go:1289] "Started kubelet" Sep 4 23:45:31.667418 kubelet[3364]: I0904 23:45:31.667125 3364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 23:45:31.686382 kubelet[3364]: I0904 23:45:31.686297 3364 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 23:45:31.688565 kubelet[3364]: I0904 23:45:31.688505 3364 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 
23:45:31.689812 kubelet[3364]: E0904 23:45:31.689215 3364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-201\" not found" Sep 4 23:45:31.692257 kubelet[3364]: I0904 23:45:31.692038 3364 server.go:317] "Adding debug handlers to kubelet server" Sep 4 23:45:31.709871 kubelet[3364]: I0904 23:45:31.692251 3364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 23:45:31.710297 kubelet[3364]: I0904 23:45:31.710241 3364 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 23:45:31.710414 kubelet[3364]: I0904 23:45:31.693476 3364 reconciler.go:26] "Reconciler: start to sync state" Sep 4 23:45:31.710414 kubelet[3364]: I0904 23:45:31.708618 3364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 23:45:31.712252 kubelet[3364]: I0904 23:45:31.711084 3364 factory.go:223] Registration of the systemd container factory successfully Sep 4 23:45:31.712252 kubelet[3364]: I0904 23:45:31.711494 3364 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 23:45:31.716430 kubelet[3364]: I0904 23:45:31.693257 3364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 23:45:31.777745 kubelet[3364]: E0904 23:45:31.777705 3364 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 23:45:31.801050 kubelet[3364]: I0904 23:45:31.800586 3364 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 4 23:45:31.806380 kubelet[3364]: I0904 23:45:31.806343 3364 factory.go:223] Registration of the containerd container factory successfully Sep 4 23:45:31.817683 kubelet[3364]: I0904 23:45:31.806712 3364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 4 23:45:31.817683 kubelet[3364]: I0904 23:45:31.817137 3364 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 23:45:31.817683 kubelet[3364]: I0904 23:45:31.817178 3364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 23:45:31.817683 kubelet[3364]: I0904 23:45:31.817193 3364 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 23:45:31.817683 kubelet[3364]: E0904 23:45:31.817273 3364 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 23:45:31.917498 kubelet[3364]: E0904 23:45:31.917452 3364 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938064 3364 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938092 3364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938126 3364 state_mem.go:36] "Initialized new in-memory state store" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938342 3364 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938363 3364 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938439 3364 policy_none.go:49] "None policy: Start" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938461 3364 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 23:45:31.939482 
kubelet[3364]: I0904 23:45:31.938481 3364 state_mem.go:35] "Initializing new in-memory state store" Sep 4 23:45:31.939482 kubelet[3364]: I0904 23:45:31.938647 3364 state_mem.go:75] "Updated machine memory state" Sep 4 23:45:31.949011 kubelet[3364]: E0904 23:45:31.948800 3364 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 23:45:31.951625 kubelet[3364]: I0904 23:45:31.951065 3364 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 23:45:31.951625 kubelet[3364]: I0904 23:45:31.951100 3364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 23:45:31.952991 kubelet[3364]: I0904 23:45:31.952963 3364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 23:45:31.960587 kubelet[3364]: E0904 23:45:31.957652 3364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 23:45:32.081918 kubelet[3364]: I0904 23:45:32.079815 3364 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-201" Sep 4 23:45:32.097436 kubelet[3364]: I0904 23:45:32.097102 3364 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-201" Sep 4 23:45:32.097810 kubelet[3364]: I0904 23:45:32.097747 3364 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-201" Sep 4 23:45:32.120459 kubelet[3364]: I0904 23:45:32.120352 3364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.123588 kubelet[3364]: I0904 23:45:32.123556 3364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.123920 kubelet[3364]: I0904 23:45:32.123505 3364 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:32.231838 kubelet[3364]: I0904 23:45:32.230975 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.231838 kubelet[3364]: I0904 23:45:32.231039 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.231838 kubelet[3364]: I0904 23:45:32.231079 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.231838 kubelet[3364]: I0904 23:45:32.231114 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.231838 kubelet[3364]: I0904 23:45:32.231154 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.232245 kubelet[3364]: I0904 23:45:32.231191 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.232245 kubelet[3364]: I0904 23:45:32.231235 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1943241386cbc2d6c210ede02eccdedf-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-201\" (UID: \"1943241386cbc2d6c210ede02eccdedf\") " pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:32.232245 kubelet[3364]: I0904 23:45:32.231271 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bfa20b07f2074f16af905145fec2c7b-ca-certs\") pod \"kube-apiserver-ip-172-31-31-201\" (UID: \"3bfa20b07f2074f16af905145fec2c7b\") " pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.232245 kubelet[3364]: I0904 23:45:32.231325 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/958a9aff5bbacc6b5c3735543e49843e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-201\" (UID: \"958a9aff5bbacc6b5c3735543e49843e\") " pod="kube-system/kube-controller-manager-ip-172-31-31-201" Sep 4 23:45:32.579103 sudo[3378]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:32.647185 kubelet[3364]: I0904 23:45:32.646702 3364 apiserver.go:52] "Watching apiserver" Sep 4 
23:45:32.715305 kubelet[3364]: I0904 23:45:32.715223 3364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 23:45:32.865892 kubelet[3364]: I0904 23:45:32.865694 3364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.866616 kubelet[3364]: I0904 23:45:32.866346 3364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:32.880215 kubelet[3364]: E0904 23:45:32.879600 3364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-201\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-201" Sep 4 23:45:32.883777 kubelet[3364]: E0904 23:45:32.883728 3364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-201\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-201" Sep 4 23:45:32.928843 kubelet[3364]: I0904 23:45:32.928751 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-201" podStartSLOduration=0.928730006 podStartE2EDuration="928.730006ms" podCreationTimestamp="2025-09-04 23:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:32.911752106 +0000 UTC m=+1.418927588" watchObservedRunningTime="2025-09-04 23:45:32.928730006 +0000 UTC m=+1.435905500" Sep 4 23:45:32.931897 kubelet[3364]: I0904 23:45:32.930558 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-201" podStartSLOduration=0.93048083 podStartE2EDuration="930.48083ms" podCreationTimestamp="2025-09-04 23:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:32.92953937 +0000 UTC m=+1.436714876" 
watchObservedRunningTime="2025-09-04 23:45:32.93048083 +0000 UTC m=+1.437656324" Sep 4 23:45:33.496511 kubelet[3364]: I0904 23:45:33.496123 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-201" podStartSLOduration=1.496104265 podStartE2EDuration="1.496104265s" podCreationTimestamp="2025-09-04 23:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:32.95075411 +0000 UTC m=+1.457929580" watchObservedRunningTime="2025-09-04 23:45:33.496104265 +0000 UTC m=+2.003279747" Sep 4 23:45:35.407853 kubelet[3364]: I0904 23:45:35.407644 3364 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 23:45:35.410166 containerd[1963]: time="2025-09-04T23:45:35.408892023Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 23:45:35.411503 kubelet[3364]: I0904 23:45:35.409547 3364 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 23:45:35.878688 sudo[2318]: pam_unix(sudo:session): session closed for user root Sep 4 23:45:35.901493 sshd[2317]: Connection closed by 139.178.89.65 port 35008 Sep 4 23:45:35.902324 sshd-session[2315]: pam_unix(sshd:session): session closed for user core Sep 4 23:45:35.909830 systemd[1]: sshd@8-172.31.31.201:22-139.178.89.65:35008.service: Deactivated successfully. Sep 4 23:45:35.915511 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 23:45:35.916098 systemd[1]: session-9.scope: Consumed 12.378s CPU time, 265.4M memory peak. Sep 4 23:45:35.920905 systemd-logind[1939]: Session 9 logged out. Waiting for processes to exit. Sep 4 23:45:35.925738 systemd-logind[1939]: Removed session 9. 
Sep 4 23:45:36.487907 systemd[1]: Created slice kubepods-besteffort-pod359af3c5_0691_46d7_99f0_8b736e188568.slice - libcontainer container kubepods-besteffort-pod359af3c5_0691_46d7_99f0_8b736e188568.slice. Sep 4 23:45:36.519195 systemd[1]: Created slice kubepods-burstable-pode2b2a501_19dc_429e_8d11_892c8816450f.slice - libcontainer container kubepods-burstable-pode2b2a501_19dc_429e_8d11_892c8816450f.slice. Sep 4 23:45:36.559717 kubelet[3364]: I0904 23:45:36.559637 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-xtables-lock\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560353 kubelet[3364]: I0904 23:45:36.559724 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-config-path\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560353 kubelet[3364]: I0904 23:45:36.559767 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-net\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560353 kubelet[3364]: I0904 23:45:36.559802 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-hubble-tls\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560353 kubelet[3364]: I0904 23:45:36.559836 3364 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-run\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560663 kubelet[3364]: I0904 23:45:36.560506 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cni-path\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560663 kubelet[3364]: I0904 23:45:36.560549 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-kernel\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560663 kubelet[3364]: I0904 23:45:36.560605 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlg9\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-kube-api-access-2dlg9\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560818 kubelet[3364]: I0904 23:45:36.560671 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-bpf-maps\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560818 kubelet[3364]: I0904 23:45:36.560708 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-cgroup\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.560818 kubelet[3364]: I0904 23:45:36.560746 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/359af3c5-0691-46d7-99f0-8b736e188568-kube-proxy\") pod \"kube-proxy-v5pfz\" (UID: \"359af3c5-0691-46d7-99f0-8b736e188568\") " pod="kube-system/kube-proxy-v5pfz" Sep 4 23:45:36.560818 kubelet[3364]: I0904 23:45:36.560779 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359af3c5-0691-46d7-99f0-8b736e188568-xtables-lock\") pod \"kube-proxy-v5pfz\" (UID: \"359af3c5-0691-46d7-99f0-8b736e188568\") " pod="kube-system/kube-proxy-v5pfz" Sep 4 23:45:36.561008 kubelet[3364]: I0904 23:45:36.560820 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztcr\" (UniqueName: \"kubernetes.io/projected/359af3c5-0691-46d7-99f0-8b736e188568-kube-api-access-2ztcr\") pod \"kube-proxy-v5pfz\" (UID: \"359af3c5-0691-46d7-99f0-8b736e188568\") " pod="kube-system/kube-proxy-v5pfz" Sep 4 23:45:36.561008 kubelet[3364]: I0904 23:45:36.560856 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-etc-cni-netd\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.561008 kubelet[3364]: I0904 23:45:36.560890 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-lib-modules\") pod \"cilium-kxw4b\" (UID: 
\"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.561008 kubelet[3364]: I0904 23:45:36.560940 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b2a501-19dc-429e-8d11-892c8816450f-clustermesh-secrets\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.561008 kubelet[3364]: I0904 23:45:36.560978 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/359af3c5-0691-46d7-99f0-8b736e188568-lib-modules\") pod \"kube-proxy-v5pfz\" (UID: \"359af3c5-0691-46d7-99f0-8b736e188568\") " pod="kube-system/kube-proxy-v5pfz" Sep 4 23:45:36.561260 kubelet[3364]: I0904 23:45:36.561013 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-hostproc\") pod \"cilium-kxw4b\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " pod="kube-system/cilium-kxw4b" Sep 4 23:45:36.738192 systemd[1]: Created slice kubepods-besteffort-pod95ead1e6_5789_479a_b083_619020aed508.slice - libcontainer container kubepods-besteffort-pod95ead1e6_5789_479a_b083_619020aed508.slice. 
Sep 4 23:45:36.767739 kubelet[3364]: I0904 23:45:36.767670 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95ead1e6-5789-479a-b083-619020aed508-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7tn6d\" (UID: \"95ead1e6-5789-479a-b083-619020aed508\") " pod="kube-system/cilium-operator-6c4d7847fc-7tn6d" Sep 4 23:45:36.767911 kubelet[3364]: I0904 23:45:36.767776 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngn97\" (UniqueName: \"kubernetes.io/projected/95ead1e6-5789-479a-b083-619020aed508-kube-api-access-ngn97\") pod \"cilium-operator-6c4d7847fc-7tn6d\" (UID: \"95ead1e6-5789-479a-b083-619020aed508\") " pod="kube-system/cilium-operator-6c4d7847fc-7tn6d" Sep 4 23:45:36.799684 containerd[1963]: time="2025-09-04T23:45:36.799592922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v5pfz,Uid:359af3c5-0691-46d7-99f0-8b736e188568,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:36.834995 containerd[1963]: time="2025-09-04T23:45:36.834946674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxw4b,Uid:e2b2a501-19dc-429e-8d11-892c8816450f,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:36.852174 containerd[1963]: time="2025-09-04T23:45:36.852004314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:36.852174 containerd[1963]: time="2025-09-04T23:45:36.852125706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:36.852594 containerd[1963]: time="2025-09-04T23:45:36.852163134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:36.852594 containerd[1963]: time="2025-09-04T23:45:36.852345810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:36.906776 systemd[1]: Started cri-containerd-c41bed036fddbc64b5f534493f2e57cdc3c8c093794edafbd00f86b874d436fc.scope - libcontainer container c41bed036fddbc64b5f534493f2e57cdc3c8c093794edafbd00f86b874d436fc. Sep 4 23:45:36.916728 containerd[1963]: time="2025-09-04T23:45:36.914830374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:36.916728 containerd[1963]: time="2025-09-04T23:45:36.914945238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:36.916728 containerd[1963]: time="2025-09-04T23:45:36.914999574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:36.916728 containerd[1963]: time="2025-09-04T23:45:36.915151182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:36.963752 systemd[1]: Started cri-containerd-7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172.scope - libcontainer container 7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172. 
Sep 4 23:45:36.977793 containerd[1963]: time="2025-09-04T23:45:36.977249862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v5pfz,Uid:359af3c5-0691-46d7-99f0-8b736e188568,Namespace:kube-system,Attempt:0,} returns sandbox id \"c41bed036fddbc64b5f534493f2e57cdc3c8c093794edafbd00f86b874d436fc\"" Sep 4 23:45:36.994363 containerd[1963]: time="2025-09-04T23:45:36.994070899Z" level=info msg="CreateContainer within sandbox \"c41bed036fddbc64b5f534493f2e57cdc3c8c093794edafbd00f86b874d436fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 23:45:37.030255 containerd[1963]: time="2025-09-04T23:45:37.030083931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxw4b,Uid:e2b2a501-19dc-429e-8d11-892c8816450f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\"" Sep 4 23:45:37.035255 containerd[1963]: time="2025-09-04T23:45:37.034892919Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 23:45:37.036660 containerd[1963]: time="2025-09-04T23:45:37.036438411Z" level=info msg="CreateContainer within sandbox \"c41bed036fddbc64b5f534493f2e57cdc3c8c093794edafbd00f86b874d436fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dbc8376abc244476deef9d61f5217e94aa44ec0942c571576a32abba593973c9\"" Sep 4 23:45:37.037706 containerd[1963]: time="2025-09-04T23:45:37.037343319Z" level=info msg="StartContainer for \"dbc8376abc244476deef9d61f5217e94aa44ec0942c571576a32abba593973c9\"" Sep 4 23:45:37.064985 containerd[1963]: time="2025-09-04T23:45:37.064471767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7tn6d,Uid:95ead1e6-5789-479a-b083-619020aed508,Namespace:kube-system,Attempt:0,}" Sep 4 23:45:37.090896 systemd[1]: Started cri-containerd-dbc8376abc244476deef9d61f5217e94aa44ec0942c571576a32abba593973c9.scope - 
libcontainer container dbc8376abc244476deef9d61f5217e94aa44ec0942c571576a32abba593973c9. Sep 4 23:45:37.135796 containerd[1963]: time="2025-09-04T23:45:37.135518835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:45:37.135796 containerd[1963]: time="2025-09-04T23:45:37.135724863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:45:37.136017 containerd[1963]: time="2025-09-04T23:45:37.135797655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:37.136076 containerd[1963]: time="2025-09-04T23:45:37.136010487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:45:37.185982 systemd[1]: Started cri-containerd-8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564.scope - libcontainer container 8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564. 
Sep 4 23:45:37.193328 containerd[1963]: time="2025-09-04T23:45:37.193238896Z" level=info msg="StartContainer for \"dbc8376abc244476deef9d61f5217e94aa44ec0942c571576a32abba593973c9\" returns successfully" Sep 4 23:45:37.263306 containerd[1963]: time="2025-09-04T23:45:37.262930096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7tn6d,Uid:95ead1e6-5789-479a-b083-619020aed508,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\"" Sep 4 23:45:42.694206 kubelet[3364]: I0904 23:45:42.692347 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v5pfz" podStartSLOduration=6.692327039 podStartE2EDuration="6.692327039s" podCreationTimestamp="2025-09-04 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:45:37.935712175 +0000 UTC m=+6.442887681" watchObservedRunningTime="2025-09-04 23:45:42.692327039 +0000 UTC m=+11.199502533" Sep 4 23:45:56.177302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293786721.mount: Deactivated successfully. 
Sep 4 23:45:58.895216 containerd[1963]: time="2025-09-04T23:45:58.895129551Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:58.897222 containerd[1963]: time="2025-09-04T23:45:58.897143223Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 4 23:45:58.899958 containerd[1963]: time="2025-09-04T23:45:58.899878359Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:45:58.903517 containerd[1963]: time="2025-09-04T23:45:58.903122439Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 21.86816916s" Sep 4 23:45:58.903517 containerd[1963]: time="2025-09-04T23:45:58.903188643Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 4 23:45:58.908111 containerd[1963]: time="2025-09-04T23:45:58.907740471Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 23:45:58.914320 containerd[1963]: time="2025-09-04T23:45:58.914269335Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:45:58.943629 containerd[1963]: time="2025-09-04T23:45:58.943213612Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\"" Sep 4 23:45:58.944547 containerd[1963]: time="2025-09-04T23:45:58.944488756Z" level=info msg="StartContainer for \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\"" Sep 4 23:45:59.006120 systemd[1]: run-containerd-runc-k8s.io-5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6-runc.o2sBtb.mount: Deactivated successfully. Sep 4 23:45:59.021735 systemd[1]: Started cri-containerd-5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6.scope - libcontainer container 5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6. Sep 4 23:45:59.073973 containerd[1963]: time="2025-09-04T23:45:59.073905216Z" level=info msg="StartContainer for \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\" returns successfully" Sep 4 23:45:59.098506 systemd[1]: cri-containerd-5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6.scope: Deactivated successfully. 
Sep 4 23:45:59.825735 containerd[1963]: time="2025-09-04T23:45:59.825621592Z" level=info msg="shim disconnected" id=5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6 namespace=k8s.io Sep 4 23:45:59.826148 containerd[1963]: time="2025-09-04T23:45:59.825877792Z" level=warning msg="cleaning up after shim disconnected" id=5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6 namespace=k8s.io Sep 4 23:45:59.826148 containerd[1963]: time="2025-09-04T23:45:59.825901144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:45:59.933447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6-rootfs.mount: Deactivated successfully. Sep 4 23:45:59.970433 containerd[1963]: time="2025-09-04T23:45:59.969716309Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:46:00.007695 containerd[1963]: time="2025-09-04T23:46:00.007498897Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\"" Sep 4 23:46:00.008448 containerd[1963]: time="2025-09-04T23:46:00.008367937Z" level=info msg="StartContainer for \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\"" Sep 4 23:46:00.073699 systemd[1]: Started cri-containerd-a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7.scope - libcontainer container a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7. 
Sep 4 23:46:00.124329 containerd[1963]: time="2025-09-04T23:46:00.123910237Z" level=info msg="StartContainer for \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\" returns successfully" Sep 4 23:46:00.150560 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 23:46:00.151041 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:00.152675 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:00.161525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 23:46:00.162086 systemd[1]: cri-containerd-a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7.scope: Deactivated successfully. Sep 4 23:46:00.211609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 23:46:00.226215 containerd[1963]: time="2025-09-04T23:46:00.225916778Z" level=info msg="shim disconnected" id=a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7 namespace=k8s.io Sep 4 23:46:00.226215 containerd[1963]: time="2025-09-04T23:46:00.225991814Z" level=warning msg="cleaning up after shim disconnected" id=a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7 namespace=k8s.io Sep 4 23:46:00.226215 containerd[1963]: time="2025-09-04T23:46:00.226011470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:00.934787 systemd[1]: run-containerd-runc-k8s.io-a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7-runc.MBghqO.mount: Deactivated successfully. Sep 4 23:46:00.934994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7-rootfs.mount: Deactivated successfully. 
Sep 4 23:46:00.980447 containerd[1963]: time="2025-09-04T23:46:00.980317218Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:46:01.030653 containerd[1963]: time="2025-09-04T23:46:01.030577694Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\"" Sep 4 23:46:01.034280 containerd[1963]: time="2025-09-04T23:46:01.034225238Z" level=info msg="StartContainer for \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\"" Sep 4 23:46:01.101723 systemd[1]: Started cri-containerd-ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136.scope - libcontainer container ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136. Sep 4 23:46:01.164644 containerd[1963]: time="2025-09-04T23:46:01.162985911Z" level=info msg="StartContainer for \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\" returns successfully" Sep 4 23:46:01.168803 systemd[1]: cri-containerd-ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136.scope: Deactivated successfully. 
Sep 4 23:46:01.226813 containerd[1963]: time="2025-09-04T23:46:01.226243659Z" level=info msg="shim disconnected" id=ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136 namespace=k8s.io Sep 4 23:46:01.226813 containerd[1963]: time="2025-09-04T23:46:01.226317591Z" level=warning msg="cleaning up after shim disconnected" id=ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136 namespace=k8s.io Sep 4 23:46:01.226813 containerd[1963]: time="2025-09-04T23:46:01.226339503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:01.935875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136-rootfs.mount: Deactivated successfully. Sep 4 23:46:01.984305 containerd[1963]: time="2025-09-04T23:46:01.984107827Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 23:46:02.020637 containerd[1963]: time="2025-09-04T23:46:02.020574195Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\"" Sep 4 23:46:02.024964 containerd[1963]: time="2025-09-04T23:46:02.021579567Z" level=info msg="StartContainer for \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\"" Sep 4 23:46:02.103740 systemd[1]: Started cri-containerd-3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421.scope - libcontainer container 3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421. Sep 4 23:46:02.155923 systemd[1]: cri-containerd-3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421.scope: Deactivated successfully. 
Sep 4 23:46:02.162034 containerd[1963]: time="2025-09-04T23:46:02.161967988Z" level=info msg="StartContainer for \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\" returns successfully" Sep 4 23:46:02.222708 containerd[1963]: time="2025-09-04T23:46:02.222264820Z" level=info msg="shim disconnected" id=3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421 namespace=k8s.io Sep 4 23:46:02.222708 containerd[1963]: time="2025-09-04T23:46:02.222337516Z" level=warning msg="cleaning up after shim disconnected" id=3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421 namespace=k8s.io Sep 4 23:46:02.222708 containerd[1963]: time="2025-09-04T23:46:02.222358888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:46:02.936136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421-rootfs.mount: Deactivated successfully. Sep 4 23:46:02.996726 containerd[1963]: time="2025-09-04T23:46:02.996153344Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 23:46:03.049560 containerd[1963]: time="2025-09-04T23:46:03.049006240Z" level=info msg="CreateContainer within sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\"" Sep 4 23:46:03.054073 containerd[1963]: time="2025-09-04T23:46:03.051669868Z" level=info msg="StartContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\"" Sep 4 23:46:03.059872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288690366.mount: Deactivated successfully. 
Sep 4 23:46:03.133731 systemd[1]: Started cri-containerd-07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62.scope - libcontainer container 07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62. Sep 4 23:46:03.235600 containerd[1963]: time="2025-09-04T23:46:03.235098461Z" level=info msg="StartContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" returns successfully" Sep 4 23:46:03.596431 kubelet[3364]: I0904 23:46:03.595260 3364 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 23:46:03.713443 systemd[1]: Created slice kubepods-burstable-pod26cff8ce_8ac3_46e8_b1b8_ce88504d2a82.slice - libcontainer container kubepods-burstable-pod26cff8ce_8ac3_46e8_b1b8_ce88504d2a82.slice. Sep 4 23:46:03.756640 systemd[1]: Created slice kubepods-burstable-pod287246bb_e98d_4a39_a948_897a064f15d7.slice - libcontainer container kubepods-burstable-pod287246bb_e98d_4a39_a948_897a064f15d7.slice. Sep 4 23:46:03.769562 kubelet[3364]: I0904 23:46:03.768802 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26cff8ce-8ac3-46e8-b1b8-ce88504d2a82-config-volume\") pod \"coredns-674b8bbfcf-v9jwt\" (UID: \"26cff8ce-8ac3-46e8-b1b8-ce88504d2a82\") " pod="kube-system/coredns-674b8bbfcf-v9jwt" Sep 4 23:46:03.769562 kubelet[3364]: I0904 23:46:03.768870 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkwz\" (UniqueName: \"kubernetes.io/projected/26cff8ce-8ac3-46e8-b1b8-ce88504d2a82-kube-api-access-sbkwz\") pod \"coredns-674b8bbfcf-v9jwt\" (UID: \"26cff8ce-8ac3-46e8-b1b8-ce88504d2a82\") " pod="kube-system/coredns-674b8bbfcf-v9jwt" Sep 4 23:46:03.769562 kubelet[3364]: I0904 23:46:03.768945 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk8ws\" (UniqueName: 
\"kubernetes.io/projected/287246bb-e98d-4a39-a948-897a064f15d7-kube-api-access-hk8ws\") pod \"coredns-674b8bbfcf-rsxhg\" (UID: \"287246bb-e98d-4a39-a948-897a064f15d7\") " pod="kube-system/coredns-674b8bbfcf-rsxhg" Sep 4 23:46:03.769562 kubelet[3364]: I0904 23:46:03.769012 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/287246bb-e98d-4a39-a948-897a064f15d7-config-volume\") pod \"coredns-674b8bbfcf-rsxhg\" (UID: \"287246bb-e98d-4a39-a948-897a064f15d7\") " pod="kube-system/coredns-674b8bbfcf-rsxhg" Sep 4 23:46:04.037202 containerd[1963]: time="2025-09-04T23:46:04.037129433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v9jwt,Uid:26cff8ce-8ac3-46e8-b1b8-ce88504d2a82,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:04.057337 kubelet[3364]: I0904 23:46:04.056722 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kxw4b" podStartSLOduration=6.183118305 podStartE2EDuration="28.056233649s" podCreationTimestamp="2025-09-04 23:45:36 +0000 UTC" firstStartedPulling="2025-09-04 23:45:37.032387787 +0000 UTC m=+5.539563257" lastFinishedPulling="2025-09-04 23:45:58.905503071 +0000 UTC m=+27.412678601" observedRunningTime="2025-09-04 23:46:04.054943889 +0000 UTC m=+32.562119395" watchObservedRunningTime="2025-09-04 23:46:04.056233649 +0000 UTC m=+32.563409143" Sep 4 23:46:04.090964 containerd[1963]: time="2025-09-04T23:46:04.088890665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rsxhg,Uid:287246bb-e98d-4a39-a948-897a064f15d7,Namespace:kube-system,Attempt:0,}" Sep 4 23:46:04.332469 containerd[1963]: time="2025-09-04T23:46:04.332238090Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:04.335110 
containerd[1963]: time="2025-09-04T23:46:04.334993566Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 4 23:46:04.338974 containerd[1963]: time="2025-09-04T23:46:04.338298390Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 23:46:04.341926 containerd[1963]: time="2025-09-04T23:46:04.341873322Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.434070583s" Sep 4 23:46:04.342203 containerd[1963]: time="2025-09-04T23:46:04.342169134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 4 23:46:04.354046 containerd[1963]: time="2025-09-04T23:46:04.353796690Z" level=info msg="CreateContainer within sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 23:46:04.391213 containerd[1963]: time="2025-09-04T23:46:04.391148803Z" level=info msg="CreateContainer within sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\"" Sep 4 23:46:04.393568 containerd[1963]: time="2025-09-04T23:46:04.392677495Z" level=info 
msg="StartContainer for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\"" Sep 4 23:46:04.455733 systemd[1]: Started cri-containerd-b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265.scope - libcontainer container b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265. Sep 4 23:46:04.546917 containerd[1963]: time="2025-09-04T23:46:04.546291547Z" level=info msg="StartContainer for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" returns successfully" Sep 4 23:46:08.869596 systemd-networkd[1868]: cilium_host: Link UP Sep 4 23:46:08.876498 systemd-networkd[1868]: cilium_net: Link UP Sep 4 23:46:08.878106 (udev-worker)[4205]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:08.878541 systemd-networkd[1868]: cilium_net: Gained carrier Sep 4 23:46:08.879674 (udev-worker)[4204]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:08.880849 systemd-networkd[1868]: cilium_host: Gained carrier Sep 4 23:46:09.063820 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Sep 4 23:46:09.090003 systemd-networkd[1868]: cilium_vxlan: Link UP Sep 4 23:46:09.090016 systemd-networkd[1868]: cilium_vxlan: Gained carrier Sep 4 23:46:09.522631 systemd-networkd[1868]: cilium_net: Gained IPv6LL Sep 4 23:46:09.716217 kernel: NET: Registered PF_ALG protocol family Sep 4 23:46:09.779029 systemd-networkd[1868]: cilium_host: Gained IPv6LL Sep 4 23:46:09.804035 systemd[1]: Started sshd@9-172.31.31.201:22-139.178.89.65:41726.service - OpenSSH per-connection server daemon (139.178.89.65:41726). Sep 4 23:46:10.017777 sshd[4312]: Accepted publickey for core from 139.178.89.65 port 41726 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:10.015979 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:10.029825 systemd-logind[1939]: New session 10 of user core. 
Sep 4 23:46:10.035427 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 23:46:10.358318 sshd[4314]: Connection closed by 139.178.89.65 port 41726 Sep 4 23:46:10.359812 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:10.368906 systemd[1]: sshd@9-172.31.31.201:22-139.178.89.65:41726.service: Deactivated successfully. Sep 4 23:46:10.376377 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 23:46:10.382224 systemd-logind[1939]: Session 10 logged out. Waiting for processes to exit. Sep 4 23:46:10.386869 systemd-logind[1939]: Removed session 10. Sep 4 23:46:10.994809 systemd-networkd[1868]: cilium_vxlan: Gained IPv6LL Sep 4 23:46:11.235133 systemd-networkd[1868]: lxc_health: Link UP Sep 4 23:46:11.249789 systemd-networkd[1868]: lxc_health: Gained carrier Sep 4 23:46:11.700755 systemd-networkd[1868]: lxcd367176b8c7e: Link UP Sep 4 23:46:11.702174 kernel: eth0: renamed from tmpffccb Sep 4 23:46:11.709837 systemd-networkd[1868]: lxcd367176b8c7e: Gained carrier Sep 4 23:46:11.759349 systemd-networkd[1868]: lxc291de0805023: Link UP Sep 4 23:46:11.769455 kernel: eth0: renamed from tmp66573 Sep 4 23:46:11.779775 systemd-networkd[1868]: lxc291de0805023: Gained carrier Sep 4 23:46:11.781877 (udev-worker)[4568]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 23:46:12.871218 kubelet[3364]: I0904 23:46:12.871111 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7tn6d" podStartSLOduration=9.793555863 podStartE2EDuration="36.871089833s" podCreationTimestamp="2025-09-04 23:45:36 +0000 UTC" firstStartedPulling="2025-09-04 23:45:37.267083548 +0000 UTC m=+5.774259030" lastFinishedPulling="2025-09-04 23:46:04.34461753 +0000 UTC m=+32.851793000" observedRunningTime="2025-09-04 23:46:05.101656122 +0000 UTC m=+33.608831616" watchObservedRunningTime="2025-09-04 23:46:12.871089833 +0000 UTC m=+41.378265315" Sep 4 23:46:13.170624 systemd-networkd[1868]: lxc291de0805023: Gained IPv6LL Sep 4 23:46:13.235668 systemd-networkd[1868]: lxc_health: Gained IPv6LL Sep 4 23:46:13.746759 systemd-networkd[1868]: lxcd367176b8c7e: Gained IPv6LL Sep 4 23:46:15.402017 systemd[1]: Started sshd@10-172.31.31.201:22-139.178.89.65:56962.service - OpenSSH per-connection server daemon (139.178.89.65:56962). Sep 4 23:46:15.600430 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 56962 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:15.600999 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:15.611361 systemd-logind[1939]: New session 11 of user core. Sep 4 23:46:15.619764 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 23:46:15.900349 sshd[4591]: Connection closed by 139.178.89.65 port 56962 Sep 4 23:46:15.902732 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:15.909010 systemd-logind[1939]: Session 11 logged out. Waiting for processes to exit. Sep 4 23:46:15.913506 systemd[1]: sshd@10-172.31.31.201:22-139.178.89.65:56962.service: Deactivated successfully. Sep 4 23:46:15.921062 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 23:46:15.926948 systemd-logind[1939]: Removed session 11. 
Sep 4 23:46:16.370348 ntpd[1933]: Listen normally on 7 cilium_host 192.168.0.18:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 7 cilium_host 192.168.0.18:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 8 cilium_net [fe80::28f3:40ff:fe6c:7f9a%4]:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 9 cilium_host [fe80::9863:e2ff:fef8:78ec%5]:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 10 cilium_vxlan [fe80::6426:a7ff:fe97:bf3b%6]:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 11 lxc_health [fe80::7890:29ff:fe86:44d%8]:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 12 lxcd367176b8c7e [fe80::d4fd:4dff:fe05:600e%10]:123 Sep 4 23:46:16.370916 ntpd[1933]: 4 Sep 23:46:16 ntpd[1933]: Listen normally on 13 lxc291de0805023 [fe80::2c00:bdff:fee5:2a5f%12]:123 Sep 4 23:46:16.370520 ntpd[1933]: Listen normally on 8 cilium_net [fe80::28f3:40ff:fe6c:7f9a%4]:123 Sep 4 23:46:16.370603 ntpd[1933]: Listen normally on 9 cilium_host [fe80::9863:e2ff:fef8:78ec%5]:123 Sep 4 23:46:16.370672 ntpd[1933]: Listen normally on 10 cilium_vxlan [fe80::6426:a7ff:fe97:bf3b%6]:123 Sep 4 23:46:16.370739 ntpd[1933]: Listen normally on 11 lxc_health [fe80::7890:29ff:fe86:44d%8]:123 Sep 4 23:46:16.370812 ntpd[1933]: Listen normally on 12 lxcd367176b8c7e [fe80::d4fd:4dff:fe05:600e%10]:123 Sep 4 23:46:16.370881 ntpd[1933]: Listen normally on 13 lxc291de0805023 [fe80::2c00:bdff:fee5:2a5f%12]:123 Sep 4 23:46:20.395613 containerd[1963]: time="2025-09-04T23:46:20.395189542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:20.395613 containerd[1963]: time="2025-09-04T23:46:20.395298646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:20.395613 containerd[1963]: time="2025-09-04T23:46:20.395329498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:20.397476 containerd[1963]: time="2025-09-04T23:46:20.396143794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:20.437525 containerd[1963]: time="2025-09-04T23:46:20.437258482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:46:20.437525 containerd[1963]: time="2025-09-04T23:46:20.437362090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:46:20.437734 containerd[1963]: time="2025-09-04T23:46:20.437412694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:20.439461 containerd[1963]: time="2025-09-04T23:46:20.437768230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:46:20.502228 systemd[1]: run-containerd-runc-k8s.io-66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5-runc.5GHHsU.mount: Deactivated successfully. Sep 4 23:46:20.522047 systemd[1]: Started cri-containerd-66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5.scope - libcontainer container 66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5. Sep 4 23:46:20.530638 systemd[1]: Started cri-containerd-ffccb4989240c0bc91bcdbb707134cae92bbef3877070ba0c7a5c6bdf862b3f0.scope - libcontainer container ffccb4989240c0bc91bcdbb707134cae92bbef3877070ba0c7a5c6bdf862b3f0. 
Sep 4 23:46:20.674384 containerd[1963]: time="2025-09-04T23:46:20.673984259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rsxhg,Uid:287246bb-e98d-4a39-a948-897a064f15d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5\"" Sep 4 23:46:20.701983 containerd[1963]: time="2025-09-04T23:46:20.700738392Z" level=info msg="CreateContainer within sandbox \"66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:46:20.709483 containerd[1963]: time="2025-09-04T23:46:20.707948520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v9jwt,Uid:26cff8ce-8ac3-46e8-b1b8-ce88504d2a82,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffccb4989240c0bc91bcdbb707134cae92bbef3877070ba0c7a5c6bdf862b3f0\"" Sep 4 23:46:20.726699 containerd[1963]: time="2025-09-04T23:46:20.726621936Z" level=info msg="CreateContainer within sandbox \"ffccb4989240c0bc91bcdbb707134cae92bbef3877070ba0c7a5c6bdf862b3f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 23:46:20.750633 containerd[1963]: time="2025-09-04T23:46:20.750550896Z" level=info msg="CreateContainer within sandbox \"66573d76707c6c74e1eb72ae36c4a2e9efb719b02a066a61d9c2c8e216339bd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea24f4d3cf9d8662e9caff9839523451deada56f24bee18e96e14631d85aefc2\"" Sep 4 23:46:20.757046 containerd[1963]: time="2025-09-04T23:46:20.753520476Z" level=info msg="StartContainer for \"ea24f4d3cf9d8662e9caff9839523451deada56f24bee18e96e14631d85aefc2\"" Sep 4 23:46:20.779382 containerd[1963]: time="2025-09-04T23:46:20.779311044Z" level=info msg="CreateContainer within sandbox \"ffccb4989240c0bc91bcdbb707134cae92bbef3877070ba0c7a5c6bdf862b3f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"f4199bec5340262cfca53219e260d90ce320a030430786daa62ad7a4f4b43eb3\"" Sep 4 23:46:20.782501 containerd[1963]: time="2025-09-04T23:46:20.782381520Z" level=info msg="StartContainer for \"f4199bec5340262cfca53219e260d90ce320a030430786daa62ad7a4f4b43eb3\"" Sep 4 23:46:20.828053 systemd[1]: Started cri-containerd-ea24f4d3cf9d8662e9caff9839523451deada56f24bee18e96e14631d85aefc2.scope - libcontainer container ea24f4d3cf9d8662e9caff9839523451deada56f24bee18e96e14631d85aefc2. Sep 4 23:46:20.870732 systemd[1]: Started cri-containerd-f4199bec5340262cfca53219e260d90ce320a030430786daa62ad7a4f4b43eb3.scope - libcontainer container f4199bec5340262cfca53219e260d90ce320a030430786daa62ad7a4f4b43eb3. Sep 4 23:46:20.926942 containerd[1963]: time="2025-09-04T23:46:20.925742725Z" level=info msg="StartContainer for \"ea24f4d3cf9d8662e9caff9839523451deada56f24bee18e96e14631d85aefc2\" returns successfully" Sep 4 23:46:20.950067 systemd[1]: Started sshd@11-172.31.31.201:22-139.178.89.65:44606.service - OpenSSH per-connection server daemon (139.178.89.65:44606). Sep 4 23:46:20.966592 containerd[1963]: time="2025-09-04T23:46:20.966534853Z" level=info msg="StartContainer for \"f4199bec5340262cfca53219e260d90ce320a030430786daa62ad7a4f4b43eb3\" returns successfully" Sep 4 23:46:21.153275 sshd[4758]: Accepted publickey for core from 139.178.89.65 port 44606 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:21.157012 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:21.171264 systemd-logind[1939]: New session 12 of user core. Sep 4 23:46:21.182726 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 23:46:21.196289 kubelet[3364]: I0904 23:46:21.196192 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v9jwt" podStartSLOduration=45.196168954 podStartE2EDuration="45.196168954s" podCreationTimestamp="2025-09-04 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:21.19200643 +0000 UTC m=+49.699181924" watchObservedRunningTime="2025-09-04 23:46:21.196168954 +0000 UTC m=+49.703344436" Sep 4 23:46:21.198511 kubelet[3364]: I0904 23:46:21.196373 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rsxhg" podStartSLOduration=45.196360738 podStartE2EDuration="45.196360738s" podCreationTimestamp="2025-09-04 23:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:46:21.148614454 +0000 UTC m=+49.655789936" watchObservedRunningTime="2025-09-04 23:46:21.196360738 +0000 UTC m=+49.703536232" Sep 4 23:46:21.418605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605874493.mount: Deactivated successfully. Sep 4 23:46:21.446569 sshd[4771]: Connection closed by 139.178.89.65 port 44606 Sep 4 23:46:21.447449 sshd-session[4758]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:21.453139 systemd-logind[1939]: Session 12 logged out. Waiting for processes to exit. Sep 4 23:46:21.454731 systemd[1]: sshd@11-172.31.31.201:22-139.178.89.65:44606.service: Deactivated successfully. Sep 4 23:46:21.459301 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 23:46:21.464137 systemd-logind[1939]: Removed session 12. Sep 4 23:46:26.493924 systemd[1]: Started sshd@12-172.31.31.201:22-139.178.89.65:44620.service - OpenSSH per-connection server daemon (139.178.89.65:44620). 
Sep 4 23:46:26.692001 sshd[4792]: Accepted publickey for core from 139.178.89.65 port 44620 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:26.694916 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:26.702854 systemd-logind[1939]: New session 13 of user core. Sep 4 23:46:26.714684 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 23:46:26.964550 sshd[4794]: Connection closed by 139.178.89.65 port 44620 Sep 4 23:46:26.963502 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:26.968532 systemd[1]: sshd@12-172.31.31.201:22-139.178.89.65:44620.service: Deactivated successfully. Sep 4 23:46:26.975686 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 23:46:26.979847 systemd-logind[1939]: Session 13 logged out. Waiting for processes to exit. Sep 4 23:46:26.982363 systemd-logind[1939]: Removed session 13. Sep 4 23:46:32.010003 systemd[1]: Started sshd@13-172.31.31.201:22-139.178.89.65:59124.service - OpenSSH per-connection server daemon (139.178.89.65:59124). Sep 4 23:46:32.190757 sshd[4813]: Accepted publickey for core from 139.178.89.65 port 59124 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:32.193314 sshd-session[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:32.202677 systemd-logind[1939]: New session 14 of user core. Sep 4 23:46:32.208688 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 23:46:32.460463 sshd[4815]: Connection closed by 139.178.89.65 port 59124 Sep 4 23:46:32.461422 sshd-session[4813]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:32.469586 systemd-logind[1939]: Session 14 logged out. Waiting for processes to exit. Sep 4 23:46:32.471057 systemd[1]: sshd@13-172.31.31.201:22-139.178.89.65:59124.service: Deactivated successfully. 
Sep 4 23:46:32.475193 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 23:46:32.478167 systemd-logind[1939]: Removed session 14. Sep 4 23:46:32.498944 systemd[1]: Started sshd@14-172.31.31.201:22-139.178.89.65:59130.service - OpenSSH per-connection server daemon (139.178.89.65:59130). Sep 4 23:46:32.696898 sshd[4827]: Accepted publickey for core from 139.178.89.65 port 59130 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:32.699512 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:32.709154 systemd-logind[1939]: New session 15 of user core. Sep 4 23:46:32.715698 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 23:46:33.038845 sshd[4829]: Connection closed by 139.178.89.65 port 59130 Sep 4 23:46:33.040338 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:33.053613 systemd-logind[1939]: Session 15 logged out. Waiting for processes to exit. Sep 4 23:46:33.054179 systemd[1]: sshd@14-172.31.31.201:22-139.178.89.65:59130.service: Deactivated successfully. Sep 4 23:46:33.061997 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 23:46:33.084619 systemd-logind[1939]: Removed session 15. Sep 4 23:46:33.096603 systemd[1]: Started sshd@15-172.31.31.201:22-139.178.89.65:59134.service - OpenSSH per-connection server daemon (139.178.89.65:59134). Sep 4 23:46:33.304436 sshd[4838]: Accepted publickey for core from 139.178.89.65 port 59134 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:33.306982 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:33.316718 systemd-logind[1939]: New session 16 of user core. Sep 4 23:46:33.323718 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 4 23:46:33.576963 sshd[4841]: Connection closed by 139.178.89.65 port 59134 Sep 4 23:46:33.578136 sshd-session[4838]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:33.584259 systemd[1]: sshd@15-172.31.31.201:22-139.178.89.65:59134.service: Deactivated successfully. Sep 4 23:46:33.588277 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 23:46:33.590431 systemd-logind[1939]: Session 16 logged out. Waiting for processes to exit. Sep 4 23:46:33.593034 systemd-logind[1939]: Removed session 16. Sep 4 23:46:38.624934 systemd[1]: Started sshd@16-172.31.31.201:22-139.178.89.65:59140.service - OpenSSH per-connection server daemon (139.178.89.65:59140). Sep 4 23:46:38.809527 sshd[4856]: Accepted publickey for core from 139.178.89.65 port 59140 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:38.812995 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:38.822490 systemd-logind[1939]: New session 17 of user core. Sep 4 23:46:38.828875 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 23:46:39.075868 sshd[4858]: Connection closed by 139.178.89.65 port 59140 Sep 4 23:46:39.076831 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:39.083720 systemd[1]: sshd@16-172.31.31.201:22-139.178.89.65:59140.service: Deactivated successfully. Sep 4 23:46:39.088815 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 23:46:39.090630 systemd-logind[1939]: Session 17 logged out. Waiting for processes to exit. Sep 4 23:46:39.093366 systemd-logind[1939]: Removed session 17. Sep 4 23:46:44.120927 systemd[1]: Started sshd@17-172.31.31.201:22-139.178.89.65:56718.service - OpenSSH per-connection server daemon (139.178.89.65:56718). 
Sep 4 23:46:44.303980 sshd[4872]: Accepted publickey for core from 139.178.89.65 port 56718 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:44.306436 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:44.314910 systemd-logind[1939]: New session 18 of user core. Sep 4 23:46:44.327689 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 23:46:44.577051 sshd[4875]: Connection closed by 139.178.89.65 port 56718 Sep 4 23:46:44.578100 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:44.585117 systemd[1]: sshd@17-172.31.31.201:22-139.178.89.65:56718.service: Deactivated successfully. Sep 4 23:46:44.589844 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 23:46:44.592003 systemd-logind[1939]: Session 18 logged out. Waiting for processes to exit. Sep 4 23:46:44.593958 systemd-logind[1939]: Removed session 18. Sep 4 23:46:49.621957 systemd[1]: Started sshd@18-172.31.31.201:22-139.178.89.65:56726.service - OpenSSH per-connection server daemon (139.178.89.65:56726). Sep 4 23:46:49.810630 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 56726 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:49.813305 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:49.827339 systemd-logind[1939]: New session 19 of user core. Sep 4 23:46:49.831708 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 23:46:50.082597 sshd[4889]: Connection closed by 139.178.89.65 port 56726 Sep 4 23:46:50.083479 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:50.090149 systemd[1]: sshd@18-172.31.31.201:22-139.178.89.65:56726.service: Deactivated successfully. Sep 4 23:46:50.095585 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 23:46:50.098882 systemd-logind[1939]: Session 19 logged out. 
Waiting for processes to exit. Sep 4 23:46:50.101229 systemd-logind[1939]: Removed session 19. Sep 4 23:46:50.130953 systemd[1]: Started sshd@19-172.31.31.201:22-139.178.89.65:32850.service - OpenSSH per-connection server daemon (139.178.89.65:32850). Sep 4 23:46:50.310078 sshd[4901]: Accepted publickey for core from 139.178.89.65 port 32850 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:50.312613 sshd-session[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:50.322266 systemd-logind[1939]: New session 20 of user core. Sep 4 23:46:50.332724 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 23:46:50.662695 sshd[4903]: Connection closed by 139.178.89.65 port 32850 Sep 4 23:46:50.663561 sshd-session[4901]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:50.676030 systemd[1]: sshd@19-172.31.31.201:22-139.178.89.65:32850.service: Deactivated successfully. Sep 4 23:46:50.680966 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 23:46:50.682509 systemd-logind[1939]: Session 20 logged out. Waiting for processes to exit. Sep 4 23:46:50.703966 systemd[1]: Started sshd@20-172.31.31.201:22-139.178.89.65:32852.service - OpenSSH per-connection server daemon (139.178.89.65:32852). Sep 4 23:46:50.705591 systemd-logind[1939]: Removed session 20. Sep 4 23:46:50.891166 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 32852 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:50.894065 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:50.905694 systemd-logind[1939]: New session 21 of user core. Sep 4 23:46:50.914698 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 4 23:46:51.878915 sshd[4915]: Connection closed by 139.178.89.65 port 32852 Sep 4 23:46:51.880134 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:51.888951 systemd[1]: sshd@20-172.31.31.201:22-139.178.89.65:32852.service: Deactivated successfully. Sep 4 23:46:51.898896 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 23:46:51.904495 systemd-logind[1939]: Session 21 logged out. Waiting for processes to exit. Sep 4 23:46:51.941956 systemd[1]: Started sshd@21-172.31.31.201:22-139.178.89.65:32860.service - OpenSSH per-connection server daemon (139.178.89.65:32860). Sep 4 23:46:51.944646 systemd-logind[1939]: Removed session 21. Sep 4 23:46:52.139726 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 32860 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:52.142574 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:52.151283 systemd-logind[1939]: New session 22 of user core. Sep 4 23:46:52.167720 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 23:46:52.694975 sshd[4935]: Connection closed by 139.178.89.65 port 32860 Sep 4 23:46:52.694848 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:52.708790 systemd-logind[1939]: Session 22 logged out. Waiting for processes to exit. Sep 4 23:46:52.711040 systemd[1]: sshd@21-172.31.31.201:22-139.178.89.65:32860.service: Deactivated successfully. Sep 4 23:46:52.717600 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 23:46:52.742186 systemd[1]: Started sshd@22-172.31.31.201:22-139.178.89.65:32868.service - OpenSSH per-connection server daemon (139.178.89.65:32868). Sep 4 23:46:52.745233 systemd-logind[1939]: Removed session 22. 
Sep 4 23:46:52.926454 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 32868 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:52.929058 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:52.938038 systemd-logind[1939]: New session 23 of user core. Sep 4 23:46:52.947703 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 23:46:53.193424 sshd[4947]: Connection closed by 139.178.89.65 port 32868 Sep 4 23:46:53.194260 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:53.201174 systemd[1]: sshd@22-172.31.31.201:22-139.178.89.65:32868.service: Deactivated successfully. Sep 4 23:46:53.205675 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 23:46:53.207588 systemd-logind[1939]: Session 23 logged out. Waiting for processes to exit. Sep 4 23:46:53.209504 systemd-logind[1939]: Removed session 23. Sep 4 23:46:58.239989 systemd[1]: Started sshd@23-172.31.31.201:22-139.178.89.65:32872.service - OpenSSH per-connection server daemon (139.178.89.65:32872). Sep 4 23:46:58.421771 sshd[4959]: Accepted publickey for core from 139.178.89.65 port 32872 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:46:58.424294 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:46:58.434149 systemd-logind[1939]: New session 24 of user core. Sep 4 23:46:58.440720 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 23:46:58.705644 sshd[4961]: Connection closed by 139.178.89.65 port 32872 Sep 4 23:46:58.706535 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Sep 4 23:46:58.711192 systemd[1]: sshd@23-172.31.31.201:22-139.178.89.65:32872.service: Deactivated successfully. Sep 4 23:46:58.715065 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 23:46:58.718628 systemd-logind[1939]: Session 24 logged out. 
Waiting for processes to exit. Sep 4 23:46:58.721230 systemd-logind[1939]: Removed session 24. Sep 4 23:47:03.746992 systemd[1]: Started sshd@24-172.31.31.201:22-139.178.89.65:54338.service - OpenSSH per-connection server daemon (139.178.89.65:54338). Sep 4 23:47:03.939456 sshd[4975]: Accepted publickey for core from 139.178.89.65 port 54338 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:03.942940 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:03.951757 systemd-logind[1939]: New session 25 of user core. Sep 4 23:47:03.962705 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 23:47:04.213082 sshd[4977]: Connection closed by 139.178.89.65 port 54338 Sep 4 23:47:04.214047 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:04.219637 systemd[1]: sshd@24-172.31.31.201:22-139.178.89.65:54338.service: Deactivated successfully. Sep 4 23:47:04.223825 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 23:47:04.228362 systemd-logind[1939]: Session 25 logged out. Waiting for processes to exit. Sep 4 23:47:04.230892 systemd-logind[1939]: Removed session 25. Sep 4 23:47:09.258011 systemd[1]: Started sshd@25-172.31.31.201:22-139.178.89.65:54354.service - OpenSSH per-connection server daemon (139.178.89.65:54354). Sep 4 23:47:09.443002 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 54354 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:09.446320 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:09.453940 systemd-logind[1939]: New session 26 of user core. Sep 4 23:47:09.467676 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 4 23:47:09.709707 sshd[4993]: Connection closed by 139.178.89.65 port 54354 Sep 4 23:47:09.708562 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:09.715292 systemd[1]: sshd@25-172.31.31.201:22-139.178.89.65:54354.service: Deactivated successfully. Sep 4 23:47:09.719207 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 23:47:09.721489 systemd-logind[1939]: Session 26 logged out. Waiting for processes to exit. Sep 4 23:47:09.724608 systemd-logind[1939]: Removed session 26. Sep 4 23:47:09.749960 systemd[1]: Started sshd@26-172.31.31.201:22-139.178.89.65:54364.service - OpenSSH per-connection server daemon (139.178.89.65:54364). Sep 4 23:47:09.946300 sshd[5005]: Accepted publickey for core from 139.178.89.65 port 54364 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:09.949486 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:09.957032 systemd-logind[1939]: New session 27 of user core. Sep 4 23:47:09.970681 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 4 23:47:12.589847 containerd[1963]: time="2025-09-04T23:47:12.589748473Z" level=info msg="StopContainer for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" with timeout 30 (s)" Sep 4 23:47:12.591814 containerd[1963]: time="2025-09-04T23:47:12.590933413Z" level=info msg="Stop container \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" with signal terminated" Sep 4 23:47:12.621970 containerd[1963]: time="2025-09-04T23:47:12.621889034Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 23:47:12.626736 systemd[1]: cri-containerd-b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265.scope: Deactivated successfully. Sep 4 23:47:12.646088 containerd[1963]: time="2025-09-04T23:47:12.645837350Z" level=info msg="StopContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" with timeout 2 (s)" Sep 4 23:47:12.647872 containerd[1963]: time="2025-09-04T23:47:12.647762306Z" level=info msg="Stop container \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" with signal terminated" Sep 4 23:47:12.675913 systemd-networkd[1868]: lxc_health: Link DOWN Sep 4 23:47:12.675926 systemd-networkd[1868]: lxc_health: Lost carrier Sep 4 23:47:12.703186 systemd[1]: cri-containerd-07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62.scope: Deactivated successfully. Sep 4 23:47:12.703868 systemd[1]: cri-containerd-07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62.scope: Consumed 14.750s CPU time, 125.6M memory peak, 128K read from disk, 12.9M written to disk. Sep 4 23:47:12.725051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265-rootfs.mount: Deactivated successfully. 
Sep 4 23:47:12.747279 containerd[1963]: time="2025-09-04T23:47:12.747177926Z" level=info msg="shim disconnected" id=b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265 namespace=k8s.io Sep 4 23:47:12.747736 containerd[1963]: time="2025-09-04T23:47:12.747279926Z" level=warning msg="cleaning up after shim disconnected" id=b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265 namespace=k8s.io Sep 4 23:47:12.747736 containerd[1963]: time="2025-09-04T23:47:12.747302906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:12.761195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62-rootfs.mount: Deactivated successfully. Sep 4 23:47:12.776489 containerd[1963]: time="2025-09-04T23:47:12.775999862Z" level=info msg="shim disconnected" id=07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62 namespace=k8s.io Sep 4 23:47:12.776489 containerd[1963]: time="2025-09-04T23:47:12.776168198Z" level=warning msg="cleaning up after shim disconnected" id=07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62 namespace=k8s.io Sep 4 23:47:12.776489 containerd[1963]: time="2025-09-04T23:47:12.776187506Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:12.799059 containerd[1963]: time="2025-09-04T23:47:12.798972218Z" level=info msg="StopContainer for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" returns successfully" Sep 4 23:47:12.800974 containerd[1963]: time="2025-09-04T23:47:12.800929382Z" level=info msg="StopPodSandbox for \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\"" Sep 4 23:47:12.801142 containerd[1963]: time="2025-09-04T23:47:12.801112310Z" level=info msg="Container to stop \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.806200 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564-shm.mount: Deactivated successfully. Sep 4 23:47:12.826840 systemd[1]: cri-containerd-8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564.scope: Deactivated successfully. Sep 4 23:47:12.827556 containerd[1963]: time="2025-09-04T23:47:12.827181303Z" level=info msg="StopContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" returns successfully" Sep 4 23:47:12.829218 containerd[1963]: time="2025-09-04T23:47:12.829157655Z" level=info msg="StopPodSandbox for \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\"" Sep 4 23:47:12.829710 containerd[1963]: time="2025-09-04T23:47:12.829346943Z" level=info msg="Container to stop \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.829813 containerd[1963]: time="2025-09-04T23:47:12.829702023Z" level=info msg="Container to stop \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.829813 containerd[1963]: time="2025-09-04T23:47:12.829739019Z" level=info msg="Container to stop \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.829813 containerd[1963]: time="2025-09-04T23:47:12.829790787Z" level=info msg="Container to stop \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.830063 containerd[1963]: time="2025-09-04T23:47:12.829812951Z" level=info msg="Container to stop \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 23:47:12.850011 systemd[1]: 
cri-containerd-7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172.scope: Deactivated successfully. Sep 4 23:47:12.898621 containerd[1963]: time="2025-09-04T23:47:12.898543095Z" level=info msg="shim disconnected" id=8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564 namespace=k8s.io Sep 4 23:47:12.900826 containerd[1963]: time="2025-09-04T23:47:12.900750207Z" level=warning msg="cleaning up after shim disconnected" id=8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564 namespace=k8s.io Sep 4 23:47:12.900826 containerd[1963]: time="2025-09-04T23:47:12.900808467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:12.908876 containerd[1963]: time="2025-09-04T23:47:12.908556303Z" level=info msg="shim disconnected" id=7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172 namespace=k8s.io Sep 4 23:47:12.908876 containerd[1963]: time="2025-09-04T23:47:12.908631531Z" level=warning msg="cleaning up after shim disconnected" id=7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172 namespace=k8s.io Sep 4 23:47:12.908876 containerd[1963]: time="2025-09-04T23:47:12.908650359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:12.938766 containerd[1963]: time="2025-09-04T23:47:12.938546883Z" level=info msg="TearDown network for sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" successfully" Sep 4 23:47:12.938766 containerd[1963]: time="2025-09-04T23:47:12.938607327Z" level=info msg="StopPodSandbox for \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" returns successfully" Sep 4 23:47:12.945935 containerd[1963]: time="2025-09-04T23:47:12.945855243Z" level=info msg="TearDown network for sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" successfully" Sep 4 23:47:12.946115 containerd[1963]: time="2025-09-04T23:47:12.946022895Z" level=info msg="StopPodSandbox for 
\"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" returns successfully" Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040249 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-config-path\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040319 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-etc-cni-netd\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040427 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlg9\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-kube-api-access-2dlg9\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040468 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-cgroup\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040509 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-kernel\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.040977 kubelet[3364]: I0904 23:47:13.040546 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cni-path\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.041852 kubelet[3364]: I0904 23:47:13.040592 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.041852 kubelet[3364]: I0904 23:47:13.040675 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044461 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044543 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-lib-modules\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044615 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-net\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044657 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-bpf-maps\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044694 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-xtables-lock\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045170 kubelet[3364]: I0904 23:47:13.044753 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b2a501-19dc-429e-8d11-892c8816450f-clustermesh-secrets\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.044788 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-run\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.044826 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95ead1e6-5789-479a-b083-619020aed508-cilium-config-path\") pod \"95ead1e6-5789-479a-b083-619020aed508\" (UID: \"95ead1e6-5789-479a-b083-619020aed508\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.044858 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-hostproc\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.044904 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-hubble-tls\") pod \"e2b2a501-19dc-429e-8d11-892c8816450f\" (UID: \"e2b2a501-19dc-429e-8d11-892c8816450f\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.044941 3364 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngn97\" (UniqueName: \"kubernetes.io/projected/95ead1e6-5789-479a-b083-619020aed508-kube-api-access-ngn97\") pod \"95ead1e6-5789-479a-b083-619020aed508\" (UID: \"95ead1e6-5789-479a-b083-619020aed508\") " Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.045016 3364 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-etc-cni-netd\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.045618 kubelet[3364]: I0904 23:47:13.045040 3364 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-cgroup\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.045970 kubelet[3364]: I0904 23:47:13.045068 3364 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cni-path\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.046589 kubelet[3364]: I0904 23:47:13.044542 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.046589 kubelet[3364]: I0904 23:47:13.046532 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.046774 kubelet[3364]: I0904 23:47:13.046625 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.046774 kubelet[3364]: I0904 23:47:13.046664 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.046774 kubelet[3364]: I0904 23:47:13.046701 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.050056 kubelet[3364]: I0904 23:47:13.049775 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.050597 kubelet[3364]: I0904 23:47:13.050541 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 23:47:13.053433 kubelet[3364]: I0904 23:47:13.053286 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-kube-api-access-2dlg9" (OuterVolumeSpecName: "kube-api-access-2dlg9") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "kube-api-access-2dlg9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:13.057975 kubelet[3364]: I0904 23:47:13.056762 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2b2a501-19dc-429e-8d11-892c8816450f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 23:47:13.062667 kubelet[3364]: I0904 23:47:13.062076 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:47:13.065539 kubelet[3364]: I0904 23:47:13.065470 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ead1e6-5789-479a-b083-619020aed508-kube-api-access-ngn97" (OuterVolumeSpecName: "kube-api-access-ngn97") pod "95ead1e6-5789-479a-b083-619020aed508" (UID: "95ead1e6-5789-479a-b083-619020aed508"). InnerVolumeSpecName "kube-api-access-ngn97". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:13.066145 kubelet[3364]: I0904 23:47:13.066089 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2b2a501-19dc-429e-8d11-892c8816450f" (UID: "e2b2a501-19dc-429e-8d11-892c8816450f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 23:47:13.066840 kubelet[3364]: I0904 23:47:13.066784 3364 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95ead1e6-5789-479a-b083-619020aed508-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "95ead1e6-5789-479a-b083-619020aed508" (UID: "95ead1e6-5789-479a-b083-619020aed508"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145695 3364 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2b2a501-19dc-429e-8d11-892c8816450f-clustermesh-secrets\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145752 3364 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-run\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145777 3364 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/95ead1e6-5789-479a-b083-619020aed508-cilium-config-path\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145801 3364 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-hostproc\") on node 
\"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145823 3364 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-hubble-tls\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.145878 kubelet[3364]: I0904 23:47:13.145848 3364 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngn97\" (UniqueName: \"kubernetes.io/projected/95ead1e6-5789-479a-b083-619020aed508-kube-api-access-ngn97\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147506 3364 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2b2a501-19dc-429e-8d11-892c8816450f-cilium-config-path\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147541 3364 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2dlg9\" (UniqueName: \"kubernetes.io/projected/e2b2a501-19dc-429e-8d11-892c8816450f-kube-api-access-2dlg9\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147565 3364 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-kernel\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147588 3364 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-lib-modules\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147613 3364 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-host-proc-sys-net\") on node \"ip-172-31-31-201\" 
DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147633 3364 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-bpf-maps\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.147811 kubelet[3364]: I0904 23:47:13.147655 3364 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2b2a501-19dc-429e-8d11-892c8816450f-xtables-lock\") on node \"ip-172-31-31-201\" DevicePath \"\"" Sep 4 23:47:13.233179 kubelet[3364]: I0904 23:47:13.232573 3364 scope.go:117] "RemoveContainer" containerID="07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62" Sep 4 23:47:13.241240 containerd[1963]: time="2025-09-04T23:47:13.241161865Z" level=info msg="RemoveContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\"" Sep 4 23:47:13.258168 containerd[1963]: time="2025-09-04T23:47:13.257114545Z" level=info msg="RemoveContainer for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" returns successfully" Sep 4 23:47:13.258351 systemd[1]: Removed slice kubepods-burstable-pode2b2a501_19dc_429e_8d11_892c8816450f.slice - libcontainer container kubepods-burstable-pode2b2a501_19dc_429e_8d11_892c8816450f.slice. Sep 4 23:47:13.258615 systemd[1]: kubepods-burstable-pode2b2a501_19dc_429e_8d11_892c8816450f.slice: Consumed 14.908s CPU time, 126.1M memory peak, 128K read from disk, 12.9M written to disk. 
Sep 4 23:47:13.260168 kubelet[3364]: I0904 23:47:13.259867 3364 scope.go:117] "RemoveContainer" containerID="3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421" Sep 4 23:47:13.266014 containerd[1963]: time="2025-09-04T23:47:13.265962253Z" level=info msg="RemoveContainer for \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\"" Sep 4 23:47:13.272908 systemd[1]: Removed slice kubepods-besteffort-pod95ead1e6_5789_479a_b083_619020aed508.slice - libcontainer container kubepods-besteffort-pod95ead1e6_5789_479a_b083_619020aed508.slice. Sep 4 23:47:13.278914 containerd[1963]: time="2025-09-04T23:47:13.278834377Z" level=info msg="RemoveContainer for \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\" returns successfully" Sep 4 23:47:13.279745 kubelet[3364]: I0904 23:47:13.279358 3364 scope.go:117] "RemoveContainer" containerID="ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136" Sep 4 23:47:13.282689 containerd[1963]: time="2025-09-04T23:47:13.282636853Z" level=info msg="RemoveContainer for \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\"" Sep 4 23:47:13.293292 containerd[1963]: time="2025-09-04T23:47:13.292993717Z" level=info msg="RemoveContainer for \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\" returns successfully" Sep 4 23:47:13.295884 kubelet[3364]: I0904 23:47:13.295187 3364 scope.go:117] "RemoveContainer" containerID="a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7" Sep 4 23:47:13.301974 containerd[1963]: time="2025-09-04T23:47:13.301035373Z" level=info msg="RemoveContainer for \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\"" Sep 4 23:47:13.314712 containerd[1963]: time="2025-09-04T23:47:13.314647153Z" level=info msg="RemoveContainer for \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\" returns successfully" Sep 4 23:47:13.315211 kubelet[3364]: I0904 23:47:13.315165 3364 scope.go:117] "RemoveContainer" 
containerID="5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6" Sep 4 23:47:13.318871 containerd[1963]: time="2025-09-04T23:47:13.318151249Z" level=info msg="RemoveContainer for \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\"" Sep 4 23:47:13.327193 containerd[1963]: time="2025-09-04T23:47:13.327140989Z" level=info msg="RemoveContainer for \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\" returns successfully" Sep 4 23:47:13.327710 kubelet[3364]: I0904 23:47:13.327656 3364 scope.go:117] "RemoveContainer" containerID="07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62" Sep 4 23:47:13.328180 containerd[1963]: time="2025-09-04T23:47:13.328059565Z" level=error msg="ContainerStatus for \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\": not found" Sep 4 23:47:13.328445 kubelet[3364]: E0904 23:47:13.328336 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\": not found" containerID="07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62" Sep 4 23:47:13.328626 kubelet[3364]: I0904 23:47:13.328505 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62"} err="failed to get container status \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\": rpc error: code = NotFound desc = an error occurred when try to find container \"07ff70a8717f13be622e08e4124ea6346db03a1be7e2431d00bea876fdb8bf62\": not found" Sep 4 23:47:13.328723 kubelet[3364]: I0904 23:47:13.328634 3364 scope.go:117] "RemoveContainer" 
containerID="3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421" Sep 4 23:47:13.329657 containerd[1963]: time="2025-09-04T23:47:13.329549257Z" level=error msg="ContainerStatus for \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\": not found" Sep 4 23:47:13.330066 kubelet[3364]: E0904 23:47:13.329994 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\": not found" containerID="3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421" Sep 4 23:47:13.330164 kubelet[3364]: I0904 23:47:13.330083 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421"} err="failed to get container status \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\": rpc error: code = NotFound desc = an error occurred when try to find container \"3744a2391247f4bc5ec5252534d31a66f7a7015660345860ccac7da516fb9421\": not found" Sep 4 23:47:13.330164 kubelet[3364]: I0904 23:47:13.330123 3364 scope.go:117] "RemoveContainer" containerID="ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136" Sep 4 23:47:13.330716 containerd[1963]: time="2025-09-04T23:47:13.330560377Z" level=error msg="ContainerStatus for \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\": not found" Sep 4 23:47:13.330955 kubelet[3364]: E0904 23:47:13.330916 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\": not found" containerID="ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136" Sep 4 23:47:13.331085 kubelet[3364]: I0904 23:47:13.330967 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136"} err="failed to get container status \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffac5a58b6c5c80bdf0595ef821934d936a5336a53f7158b4bd2d9feea1c2136\": not found" Sep 4 23:47:13.331085 kubelet[3364]: I0904 23:47:13.331018 3364 scope.go:117] "RemoveContainer" containerID="a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7" Sep 4 23:47:13.331669 containerd[1963]: time="2025-09-04T23:47:13.331570213Z" level=error msg="ContainerStatus for \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\": not found" Sep 4 23:47:13.332127 kubelet[3364]: E0904 23:47:13.331907 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\": not found" containerID="a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7" Sep 4 23:47:13.332127 kubelet[3364]: I0904 23:47:13.331959 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7"} err="failed to get container status \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"a33f07f36d4a74699e84872aafab893ce72f45943c2baf967d007b8835cd97f7\": not found" Sep 4 23:47:13.332127 kubelet[3364]: I0904 23:47:13.331993 3364 scope.go:117] "RemoveContainer" containerID="5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6" Sep 4 23:47:13.332794 containerd[1963]: time="2025-09-04T23:47:13.332650381Z" level=error msg="ContainerStatus for \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\": not found" Sep 4 23:47:13.332998 kubelet[3364]: E0904 23:47:13.332953 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\": not found" containerID="5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6" Sep 4 23:47:13.333094 kubelet[3364]: I0904 23:47:13.333043 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6"} err="failed to get container status \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bb0a178f2cea6472c6dae998bb89c1922b5296120c30d145153140ff6c844e6\": not found" Sep 4 23:47:13.333094 kubelet[3364]: I0904 23:47:13.333084 3364 scope.go:117] "RemoveContainer" containerID="b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265" Sep 4 23:47:13.335460 containerd[1963]: time="2025-09-04T23:47:13.335377873Z" level=info msg="RemoveContainer for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\"" Sep 4 23:47:13.341896 containerd[1963]: time="2025-09-04T23:47:13.341651533Z" level=info msg="RemoveContainer for 
\"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" returns successfully" Sep 4 23:47:13.342633 kubelet[3364]: I0904 23:47:13.342167 3364 scope.go:117] "RemoveContainer" containerID="b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265" Sep 4 23:47:13.342732 containerd[1963]: time="2025-09-04T23:47:13.342530569Z" level=error msg="ContainerStatus for \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\": not found" Sep 4 23:47:13.342998 kubelet[3364]: E0904 23:47:13.342968 3364 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\": not found" containerID="b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265" Sep 4 23:47:13.343197 kubelet[3364]: I0904 23:47:13.343112 3364 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265"} err="failed to get container status \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\": rpc error: code = NotFound desc = an error occurred when try to find container \"b219e42169ba49dd6a6ef02ffaafbc8e744def35a1102c45028490728c12a265\": not found" Sep 4 23:47:13.559635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564-rootfs.mount: Deactivated successfully. Sep 4 23:47:13.559853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172-rootfs.mount: Deactivated successfully. 
Sep 4 23:47:13.559996 systemd[1]: var-lib-kubelet-pods-95ead1e6\x2d5789\x2d479a\x2db083\x2d619020aed508-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngn97.mount: Deactivated successfully. Sep 4 23:47:13.560135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172-shm.mount: Deactivated successfully. Sep 4 23:47:13.560282 systemd[1]: var-lib-kubelet-pods-e2b2a501\x2d19dc\x2d429e\x2d8d11\x2d892c8816450f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dlg9.mount: Deactivated successfully. Sep 4 23:47:13.560445 systemd[1]: var-lib-kubelet-pods-e2b2a501\x2d19dc\x2d429e\x2d8d11\x2d892c8816450f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 23:47:13.560593 systemd[1]: var-lib-kubelet-pods-e2b2a501\x2d19dc\x2d429e\x2d8d11\x2d892c8816450f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 23:47:13.824677 kubelet[3364]: I0904 23:47:13.822846 3364 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ead1e6-5789-479a-b083-619020aed508" path="/var/lib/kubelet/pods/95ead1e6-5789-479a-b083-619020aed508/volumes" Sep 4 23:47:13.824677 kubelet[3364]: I0904 23:47:13.823832 3364 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b2a501-19dc-429e-8d11-892c8816450f" path="/var/lib/kubelet/pods/e2b2a501-19dc-429e-8d11-892c8816450f/volumes" Sep 4 23:47:14.466613 sshd[5007]: Connection closed by 139.178.89.65 port 54364 Sep 4 23:47:14.467130 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:14.473586 systemd[1]: sshd@26-172.31.31.201:22-139.178.89.65:54364.service: Deactivated successfully. Sep 4 23:47:14.480796 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 23:47:14.481647 systemd[1]: session-27.scope: Consumed 1.811s CPU time, 23.7M memory peak. Sep 4 23:47:14.484908 systemd-logind[1939]: Session 27 logged out. 
Waiting for processes to exit. Sep 4 23:47:14.487360 systemd-logind[1939]: Removed session 27. Sep 4 23:47:14.512448 systemd[1]: Started sshd@27-172.31.31.201:22-139.178.89.65:46072.service - OpenSSH per-connection server daemon (139.178.89.65:46072). Sep 4 23:47:14.699858 sshd[5167]: Accepted publickey for core from 139.178.89.65 port 46072 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:14.703005 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:14.711602 systemd-logind[1939]: New session 28 of user core. Sep 4 23:47:14.721674 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 23:47:15.370207 ntpd[1933]: Deleting interface #11 lxc_health, fe80::7890:29ff:fe86:44d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Sep 4 23:47:15.370783 ntpd[1933]: 4 Sep 23:47:15 ntpd[1933]: Deleting interface #11 lxc_health, fe80::7890:29ff:fe86:44d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Sep 4 23:47:16.988708 kubelet[3364]: E0904 23:47:16.988300 3364 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 23:47:17.132940 sshd[5169]: Connection closed by 139.178.89.65 port 46072 Sep 4 23:47:17.136997 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:17.149240 systemd[1]: sshd@27-172.31.31.201:22-139.178.89.65:46072.service: Deactivated successfully. Sep 4 23:47:17.154638 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 23:47:17.156699 systemd[1]: session-28.scope: Consumed 2.136s CPU time, 25.7M memory peak. Sep 4 23:47:17.159826 systemd-logind[1939]: Session 28 logged out. Waiting for processes to exit. 
Sep 4 23:47:17.188269 systemd[1]: Started sshd@28-172.31.31.201:22-139.178.89.65:46084.service - OpenSSH per-connection server daemon (139.178.89.65:46084). Sep 4 23:47:17.191825 systemd-logind[1939]: Removed session 28. Sep 4 23:47:17.227704 systemd[1]: Created slice kubepods-burstable-podd282ae3f_030f_4746_8f6f_33a0b57b3149.slice - libcontainer container kubepods-burstable-podd282ae3f_030f_4746_8f6f_33a0b57b3149.slice. Sep 4 23:47:17.276807 kubelet[3364]: I0904 23:47:17.276759 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d282ae3f-030f-4746-8f6f-33a0b57b3149-hubble-tls\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.277076 kubelet[3364]: I0904 23:47:17.277044 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-cilium-run\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.277292 kubelet[3364]: I0904 23:47:17.277265 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-host-proc-sys-kernel\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.277474 kubelet[3364]: I0904 23:47:17.277451 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-bpf-maps\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.277913 kubelet[3364]: I0904 23:47:17.277882 3364 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-lib-modules\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.278295 kubelet[3364]: I0904 23:47:17.278174 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d282ae3f-030f-4746-8f6f-33a0b57b3149-cilium-ipsec-secrets\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.278655 kubelet[3364]: I0904 23:47:17.278602 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-hostproc\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.278846 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d282ae3f-030f-4746-8f6f-33a0b57b3149-clustermesh-secrets\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.278889 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-cilium-cgroup\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.278923 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-cni-path\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.278960 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d282ae3f-030f-4746-8f6f-33a0b57b3149-cilium-config-path\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.278994 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7bdv\" (UniqueName: \"kubernetes.io/projected/d282ae3f-030f-4746-8f6f-33a0b57b3149-kube-api-access-j7bdv\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280536 kubelet[3364]: I0904 23:47:17.279054 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-etc-cni-netd\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280890 kubelet[3364]: I0904 23:47:17.279093 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-xtables-lock\") pod \"cilium-z7mcw\" (UID: \"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.280890 kubelet[3364]: I0904 23:47:17.279126 3364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d282ae3f-030f-4746-8f6f-33a0b57b3149-host-proc-sys-net\") pod \"cilium-z7mcw\" (UID: 
\"d282ae3f-030f-4746-8f6f-33a0b57b3149\") " pod="kube-system/cilium-z7mcw" Sep 4 23:47:17.427488 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 46084 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:17.430777 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:17.450456 systemd-logind[1939]: New session 29 of user core. Sep 4 23:47:17.457668 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 4 23:47:17.535879 containerd[1963]: time="2025-09-04T23:47:17.535719426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7mcw,Uid:d282ae3f-030f-4746-8f6f-33a0b57b3149,Namespace:kube-system,Attempt:0,}" Sep 4 23:47:17.580475 sshd[5186]: Connection closed by 139.178.89.65 port 46084 Sep 4 23:47:17.580227 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Sep 4 23:47:17.590676 systemd[1]: sshd@28-172.31.31.201:22-139.178.89.65:46084.service: Deactivated successfully. Sep 4 23:47:17.595545 containerd[1963]: time="2025-09-04T23:47:17.595274298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 23:47:17.596723 containerd[1963]: time="2025-09-04T23:47:17.595411662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 23:47:17.596723 containerd[1963]: time="2025-09-04T23:47:17.596549910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:17.598896 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 23:47:17.600505 containerd[1963]: time="2025-09-04T23:47:17.600353886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 23:47:17.601003 systemd-logind[1939]: Session 29 logged out. 
Waiting for processes to exit. Sep 4 23:47:17.632022 systemd[1]: Started sshd@29-172.31.31.201:22-139.178.89.65:46100.service - OpenSSH per-connection server daemon (139.178.89.65:46100). Sep 4 23:47:17.635377 systemd-logind[1939]: Removed session 29. Sep 4 23:47:17.662742 systemd[1]: Started cri-containerd-633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244.scope - libcontainer container 633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244. Sep 4 23:47:17.724834 containerd[1963]: time="2025-09-04T23:47:17.724760263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z7mcw,Uid:d282ae3f-030f-4746-8f6f-33a0b57b3149,Namespace:kube-system,Attempt:0,} returns sandbox id \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\"" Sep 4 23:47:17.737029 containerd[1963]: time="2025-09-04T23:47:17.736961155Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 23:47:17.759133 containerd[1963]: time="2025-09-04T23:47:17.759055711Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb\"" Sep 4 23:47:17.760649 containerd[1963]: time="2025-09-04T23:47:17.760546219Z" level=info msg="StartContainer for \"ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb\"" Sep 4 23:47:17.808464 systemd[1]: Started cri-containerd-ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb.scope - libcontainer container ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb. 
Sep 4 23:47:17.859908 sshd[5215]: Accepted publickey for core from 139.178.89.65 port 46100 ssh2: RSA SHA256:wIWgASbTOEOY+RJ7C8r7IgT0+0t1r6NIK7DD/aRqsNo Sep 4 23:47:17.862062 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 23:47:17.879466 containerd[1963]: time="2025-09-04T23:47:17.877771052Z" level=info msg="StartContainer for \"ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb\" returns successfully" Sep 4 23:47:17.883758 systemd-logind[1939]: New session 30 of user core. Sep 4 23:47:17.884978 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 4 23:47:17.903735 systemd[1]: cri-containerd-ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb.scope: Deactivated successfully. Sep 4 23:47:17.961467 containerd[1963]: time="2025-09-04T23:47:17.961350752Z" level=info msg="shim disconnected" id=ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb namespace=k8s.io Sep 4 23:47:17.961467 containerd[1963]: time="2025-09-04T23:47:17.961464392Z" level=warning msg="cleaning up after shim disconnected" id=ea1828c0f3323052394c880cf0ff97a0137206718cafe52f25d319a97a81bdcb namespace=k8s.io Sep 4 23:47:17.961988 containerd[1963]: time="2025-09-04T23:47:17.961488356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:18.279606 containerd[1963]: time="2025-09-04T23:47:18.279518898Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 23:47:18.309131 containerd[1963]: time="2025-09-04T23:47:18.308938566Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929\"" Sep 4 23:47:18.311188 containerd[1963]: 
time="2025-09-04T23:47:18.309823986Z" level=info msg="StartContainer for \"62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929\"" Sep 4 23:47:18.362741 systemd[1]: Started cri-containerd-62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929.scope - libcontainer container 62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929. Sep 4 23:47:18.423230 containerd[1963]: time="2025-09-04T23:47:18.423167934Z" level=info msg="StartContainer for \"62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929\" returns successfully" Sep 4 23:47:18.438261 systemd[1]: cri-containerd-62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929.scope: Deactivated successfully. Sep 4 23:47:18.474954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929-rootfs.mount: Deactivated successfully. Sep 4 23:47:18.489636 containerd[1963]: time="2025-09-04T23:47:18.489230659Z" level=info msg="shim disconnected" id=62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929 namespace=k8s.io Sep 4 23:47:18.489636 containerd[1963]: time="2025-09-04T23:47:18.489301555Z" level=warning msg="cleaning up after shim disconnected" id=62fd4f98db161ecb24e16547b979277ce778d5a44cd1b00a73e2a7e8a6364929 namespace=k8s.io Sep 4 23:47:18.489636 containerd[1963]: time="2025-09-04T23:47:18.489338179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 23:47:19.285075 containerd[1963]: time="2025-09-04T23:47:19.284710939Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 23:47:19.327516 containerd[1963]: time="2025-09-04T23:47:19.327457831Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id 
\"0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead\"" Sep 4 23:47:19.329332 containerd[1963]: time="2025-09-04T23:47:19.329242123Z" level=info msg="StartContainer for \"0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead\"" Sep 4 23:47:19.385706 systemd[1]: Started cri-containerd-0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead.scope - libcontainer container 0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead. Sep 4 23:47:19.462464 containerd[1963]: time="2025-09-04T23:47:19.462368395Z" level=info msg="StartContainer for \"0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead\" returns successfully" Sep 4 23:47:19.468196 systemd[1]: cri-containerd-0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead.scope: Deactivated successfully. Sep 4 23:47:19.509066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead-rootfs.mount: Deactivated successfully. 
Sep 4 23:47:19.520369 containerd[1963]: time="2025-09-04T23:47:19.520291844Z" level=info msg="shim disconnected" id=0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead namespace=k8s.io
Sep 4 23:47:19.520997 containerd[1963]: time="2025-09-04T23:47:19.520699184Z" level=warning msg="cleaning up after shim disconnected" id=0964abb796dec78f40588f55a9804010eada8fdf65e9903def8776ebc1feaead namespace=k8s.io
Sep 4 23:47:19.520997 containerd[1963]: time="2025-09-04T23:47:19.520727420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:20.294735 containerd[1963]: time="2025-09-04T23:47:20.294543752Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 23:47:20.336441 containerd[1963]: time="2025-09-04T23:47:20.335587124Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530\""
Sep 4 23:47:20.337521 containerd[1963]: time="2025-09-04T23:47:20.337304756Z" level=info msg="StartContainer for \"16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530\""
Sep 4 23:47:20.395710 systemd[1]: Started cri-containerd-16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530.scope - libcontainer container 16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530.
Sep 4 23:47:20.454047 systemd[1]: cri-containerd-16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530.scope: Deactivated successfully.
Sep 4 23:47:20.459593 containerd[1963]: time="2025-09-04T23:47:20.459543800Z" level=info msg="StartContainer for \"16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530\" returns successfully"
Sep 4 23:47:20.500189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530-rootfs.mount: Deactivated successfully.
Sep 4 23:47:20.507024 containerd[1963]: time="2025-09-04T23:47:20.506901621Z" level=info msg="shim disconnected" id=16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530 namespace=k8s.io
Sep 4 23:47:20.507461 containerd[1963]: time="2025-09-04T23:47:20.507243009Z" level=warning msg="cleaning up after shim disconnected" id=16fc2182f1709217c4c59b94a9b77e179c782688aa324054f50c841d06bb7530 namespace=k8s.io
Sep 4 23:47:20.507461 containerd[1963]: time="2025-09-04T23:47:20.507270453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:21.304022 containerd[1963]: time="2025-09-04T23:47:21.303944181Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 23:47:21.339996 containerd[1963]: time="2025-09-04T23:47:21.339727353Z" level=info msg="CreateContainer within sandbox \"633f1dad1a9b73d451a16397388ee2c409243a0844081d0828ece9d3fb55b244\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8\""
Sep 4 23:47:21.340634 containerd[1963]: time="2025-09-04T23:47:21.340559841Z" level=info msg="StartContainer for \"16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8\""
Sep 4 23:47:21.401715 systemd[1]: Started cri-containerd-16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8.scope - libcontainer container 16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8.
Sep 4 23:47:21.464057 containerd[1963]: time="2025-09-04T23:47:21.463990257Z" level=info msg="StartContainer for \"16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8\" returns successfully"
Sep 4 23:47:22.345448 kubelet[3364]: I0904 23:47:22.344953 3364 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z7mcw" podStartSLOduration=5.344930038 podStartE2EDuration="5.344930038s" podCreationTimestamp="2025-09-04 23:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 23:47:22.343518214 +0000 UTC m=+110.850693744" watchObservedRunningTime="2025-09-04 23:47:22.344930038 +0000 UTC m=+110.852105520"
Sep 4 23:47:22.380448 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 4 23:47:24.463979 kubelet[3364]: E0904 23:47:24.463853 3364 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42914->127.0.0.1:45335: write tcp 127.0.0.1:42914->127.0.0.1:45335: write: broken pipe
Sep 4 23:47:26.820637 (udev-worker)[6021]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 23:47:26.821171 systemd-networkd[1868]: lxc_health: Link UP
Sep 4 23:47:26.830140 systemd-networkd[1868]: lxc_health: Gained carrier
Sep 4 23:47:28.499688 systemd-networkd[1868]: lxc_health: Gained IPv6LL
Sep 4 23:47:31.276010 systemd[1]: run-containerd-runc-k8s.io-16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8-runc.BmvfAu.mount: Deactivated successfully.
Sep 4 23:47:31.371642 ntpd[1933]: Listen normally on 14 lxc_health [fe80::a071:71ff:fe72:39f4%14]:123
Sep 4 23:47:31.372593 ntpd[1933]: 4 Sep 23:47:31 ntpd[1933]: Listen normally on 14 lxc_health [fe80::a071:71ff:fe72:39f4%14]:123
Sep 4 23:47:31.378708 kubelet[3364]: E0904 23:47:31.378648 3364 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55898->127.0.0.1:45335: write tcp 127.0.0.1:55898->127.0.0.1:45335: write: broken pipe
Sep 4 23:47:31.742810 containerd[1963]: time="2025-09-04T23:47:31.742641356Z" level=info msg="StopPodSandbox for \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\""
Sep 4 23:47:31.743355 containerd[1963]: time="2025-09-04T23:47:31.742816712Z" level=info msg="TearDown network for sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" successfully"
Sep 4 23:47:31.743355 containerd[1963]: time="2025-09-04T23:47:31.742844048Z" level=info msg="StopPodSandbox for \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" returns successfully"
Sep 4 23:47:31.744858 containerd[1963]: time="2025-09-04T23:47:31.744790760Z" level=info msg="RemovePodSandbox for \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\""
Sep 4 23:47:31.745005 containerd[1963]: time="2025-09-04T23:47:31.744854048Z" level=info msg="Forcibly stopping sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\""
Sep 4 23:47:31.745069 containerd[1963]: time="2025-09-04T23:47:31.745006376Z" level=info msg="TearDown network for sandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" successfully"
Sep 4 23:47:31.753651 containerd[1963]: time="2025-09-04T23:47:31.753522009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:47:31.753832 containerd[1963]: time="2025-09-04T23:47:31.753684669Z" level=info msg="RemovePodSandbox \"8d8923d2f3f0e6f06f227578b16a0b24256e48766c2395215e21b0584583f564\" returns successfully"
Sep 4 23:47:31.754568 containerd[1963]: time="2025-09-04T23:47:31.754500633Z" level=info msg="StopPodSandbox for \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\""
Sep 4 23:47:31.754709 containerd[1963]: time="2025-09-04T23:47:31.754655373Z" level=info msg="TearDown network for sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" successfully"
Sep 4 23:47:31.754709 containerd[1963]: time="2025-09-04T23:47:31.754681941Z" level=info msg="StopPodSandbox for \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" returns successfully"
Sep 4 23:47:31.756044 containerd[1963]: time="2025-09-04T23:47:31.755962761Z" level=info msg="RemovePodSandbox for \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\""
Sep 4 23:47:31.756186 containerd[1963]: time="2025-09-04T23:47:31.756044121Z" level=info msg="Forcibly stopping sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\""
Sep 4 23:47:31.756246 containerd[1963]: time="2025-09-04T23:47:31.756194121Z" level=info msg="TearDown network for sandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" successfully"
Sep 4 23:47:31.765437 containerd[1963]: time="2025-09-04T23:47:31.764119953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 23:47:31.765437 containerd[1963]: time="2025-09-04T23:47:31.764269005Z" level=info msg="RemovePodSandbox \"7053a6a6b76bd1f17cf085616cace7b4e4215ed3001e2ec3910a5d1fd59ad172\" returns successfully"
Sep 4 23:47:33.568672 systemd[1]: run-containerd-runc-k8s.io-16d8d220a9dd5c58805b827ef2cd3291a8c900397eca04eaee0e344d979450d8-runc.koxqLr.mount: Deactivated successfully.
Sep 4 23:47:33.690981 sshd[5271]: Connection closed by 139.178.89.65 port 46100
Sep 4 23:47:33.690833 sshd-session[5215]: pam_unix(sshd:session): session closed for user core
Sep 4 23:47:33.702102 systemd-logind[1939]: Session 30 logged out. Waiting for processes to exit.
Sep 4 23:47:33.704497 systemd[1]: sshd@29-172.31.31.201:22-139.178.89.65:46100.service: Deactivated successfully.
Sep 4 23:47:33.712042 systemd[1]: session-30.scope: Deactivated successfully.
Sep 4 23:47:33.718362 systemd-logind[1939]: Removed session 30.
Sep 4 23:47:47.111813 systemd[1]: cri-containerd-8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d.scope: Deactivated successfully.
Sep 4 23:47:47.113675 systemd[1]: cri-containerd-8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d.scope: Consumed 5.978s CPU time, 54.6M memory peak.
Sep 4 23:47:47.158618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d-rootfs.mount: Deactivated successfully.
Sep 4 23:47:47.169632 containerd[1963]: time="2025-09-04T23:47:47.169256925Z" level=info msg="shim disconnected" id=8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d namespace=k8s.io
Sep 4 23:47:47.169632 containerd[1963]: time="2025-09-04T23:47:47.169331781Z" level=warning msg="cleaning up after shim disconnected" id=8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d namespace=k8s.io
Sep 4 23:47:47.169632 containerd[1963]: time="2025-09-04T23:47:47.169351077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:47.378775 kubelet[3364]: I0904 23:47:47.377979 3364 scope.go:117] "RemoveContainer" containerID="8677a9c7d8ae6fba4ab307e49e2a5ae0c6cedd7c5a2d8bb144e3b98e62f4c22d"
Sep 4 23:47:47.382740 containerd[1963]: time="2025-09-04T23:47:47.382381978Z" level=info msg="CreateContainer within sandbox \"d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 23:47:47.403917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441987163.mount: Deactivated successfully.
Sep 4 23:47:47.413815 containerd[1963]: time="2025-09-04T23:47:47.413731702Z" level=info msg="CreateContainer within sandbox \"d73e76902032501650852389b4582f309508decd0158a7830333ceb4c45515f4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3989976dcb63c5a05b72cea0ee33b4316b4ec8372ff447319ef5172b67a32e2c\""
Sep 4 23:47:47.414559 containerd[1963]: time="2025-09-04T23:47:47.414497482Z" level=info msg="StartContainer for \"3989976dcb63c5a05b72cea0ee33b4316b4ec8372ff447319ef5172b67a32e2c\""
Sep 4 23:47:47.475696 systemd[1]: Started cri-containerd-3989976dcb63c5a05b72cea0ee33b4316b4ec8372ff447319ef5172b67a32e2c.scope - libcontainer container 3989976dcb63c5a05b72cea0ee33b4316b4ec8372ff447319ef5172b67a32e2c.
Sep 4 23:47:47.555610 containerd[1963]: time="2025-09-04T23:47:47.555356483Z" level=info msg="StartContainer for \"3989976dcb63c5a05b72cea0ee33b4316b4ec8372ff447319ef5172b67a32e2c\" returns successfully"
Sep 4 23:47:52.878018 systemd[1]: cri-containerd-24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b.scope: Deactivated successfully.
Sep 4 23:47:52.879089 systemd[1]: cri-containerd-24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b.scope: Consumed 3.566s CPU time, 22.3M memory peak.
Sep 4 23:47:52.919300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b-rootfs.mount: Deactivated successfully.
Sep 4 23:47:52.933352 containerd[1963]: time="2025-09-04T23:47:52.933214746Z" level=info msg="shim disconnected" id=24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b namespace=k8s.io
Sep 4 23:47:52.934385 containerd[1963]: time="2025-09-04T23:47:52.934093530Z" level=warning msg="cleaning up after shim disconnected" id=24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b namespace=k8s.io
Sep 4 23:47:52.934385 containerd[1963]: time="2025-09-04T23:47:52.934129842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 23:47:53.397373 kubelet[3364]: I0904 23:47:53.397303 3364 scope.go:117] "RemoveContainer" containerID="24e4ccf81e2a7fd45f5f9a8b442b5a088fa9f0892439ecfa31ccf1a1f50a004b"
Sep 4 23:47:53.401002 containerd[1963]: time="2025-09-04T23:47:53.400670404Z" level=info msg="CreateContainer within sandbox \"c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 23:47:53.429536 containerd[1963]: time="2025-09-04T23:47:53.429456472Z" level=info msg="CreateContainer within sandbox \"c312e4695752f470437b352b4d98d05484299209ec7b2710b8722f6b03824558\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d111d087b1edddbdb4bbc889d6c87a930f67eb750a72e7eedb2ca7ca3353f4c8\""
Sep 4 23:47:53.430433 containerd[1963]: time="2025-09-04T23:47:53.430283680Z" level=info msg="StartContainer for \"d111d087b1edddbdb4bbc889d6c87a930f67eb750a72e7eedb2ca7ca3353f4c8\""
Sep 4 23:47:53.492724 systemd[1]: Started cri-containerd-d111d087b1edddbdb4bbc889d6c87a930f67eb750a72e7eedb2ca7ca3353f4c8.scope - libcontainer container d111d087b1edddbdb4bbc889d6c87a930f67eb750a72e7eedb2ca7ca3353f4c8.
Sep 4 23:47:53.556023 containerd[1963]: time="2025-09-04T23:47:53.555658961Z" level=info msg="StartContainer for \"d111d087b1edddbdb4bbc889d6c87a930f67eb750a72e7eedb2ca7ca3353f4c8\" returns successfully"
Sep 4 23:47:54.170236 kubelet[3364]: E0904 23:47:54.170050 3364 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 4 23:48:04.171864 kubelet[3364]: E0904 23:48:04.171209 3364 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-201?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"