Apr 29 23:56:17.236081 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 29 23:56:17.236127 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Tue Apr 29 22:24:03 -00 2025
Apr 29 23:56:17.236151 kernel: KASLR disabled due to lack of seed
Apr 29 23:56:17.236167 kernel: efi: EFI v2.7 by EDK II
Apr 29 23:56:17.236183 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Apr 29 23:56:17.236198 kernel: secureboot: Secure boot disabled
Apr 29 23:56:17.236215 kernel: ACPI: Early table checksum verification disabled
Apr 29 23:56:17.236230 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 29 23:56:17.236245 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 29 23:56:17.236260 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 29 23:56:17.236280 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Apr 29 23:56:17.236296 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 29 23:56:17.236311 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 29 23:56:17.236326 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 29 23:56:17.236344 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 29 23:56:17.236365 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 29 23:56:17.236382 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 29 23:56:17.236399 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 29 23:56:17.236415 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 29 23:56:17.236431 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 29 23:56:17.236447 kernel: printk: bootconsole [uart0] enabled
Apr 29 23:56:17.236464 kernel: NUMA: Failed to initialise from firmware
Apr 29 23:56:17.236482 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 29 23:56:17.236544 kernel: NUMA: NODE_DATA [mem 0x4b5840800-0x4b5845fff]
Apr 29 23:56:17.236563 kernel: Zone ranges:
Apr 29 23:56:17.236580 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 29 23:56:17.236605 kernel: DMA32 empty
Apr 29 23:56:17.236623 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 29 23:56:17.236641 kernel: Movable zone start for each node
Apr 29 23:56:17.236660 kernel: Early memory node ranges
Apr 29 23:56:17.236677 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 29 23:56:17.236693 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 29 23:56:17.236711 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 29 23:56:17.236728 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 29 23:56:17.236745 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 29 23:56:17.236761 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 29 23:56:17.236779 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 29 23:56:17.236796 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 29 23:56:17.236819 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 29 23:56:17.236838 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 29 23:56:17.236867 kernel: psci: probing for conduit method from ACPI.
Apr 29 23:56:17.236885 kernel: psci: PSCIv1.0 detected in firmware.
Apr 29 23:56:17.236903 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 29 23:56:17.236925 kernel: psci: Trusted OS migration not required
Apr 29 23:56:17.236943 kernel: psci: SMC Calling Convention v1.1
Apr 29 23:56:17.236961 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 29 23:56:17.236978 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 29 23:56:17.236997 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 29 23:56:17.237014 kernel: Detected PIPT I-cache on CPU0
Apr 29 23:56:17.237033 kernel: CPU features: detected: GIC system register CPU interface
Apr 29 23:56:17.237050 kernel: CPU features: detected: Spectre-v2
Apr 29 23:56:17.237068 kernel: CPU features: detected: Spectre-v3a
Apr 29 23:56:17.237087 kernel: CPU features: detected: Spectre-BHB
Apr 29 23:56:17.237105 kernel: CPU features: detected: ARM erratum 1742098
Apr 29 23:56:17.237124 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 29 23:56:17.237151 kernel: alternatives: applying boot alternatives
Apr 29 23:56:17.237173 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd
Apr 29 23:56:17.237193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 29 23:56:17.237212 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 29 23:56:17.237231 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 29 23:56:17.237250 kernel: Fallback order for Node 0: 0
Apr 29 23:56:17.237267 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 29 23:56:17.237284 kernel: Policy zone: Normal
Apr 29 23:56:17.237301 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 29 23:56:17.237318 kernel: software IO TLB: area num 2.
Apr 29 23:56:17.237341 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 29 23:56:17.237359 kernel: Memory: 3819836K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 210628K reserved, 0K cma-reserved)
Apr 29 23:56:17.237377 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 29 23:56:17.237394 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 29 23:56:17.237413 kernel: rcu: RCU event tracing is enabled.
Apr 29 23:56:17.237431 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 29 23:56:17.237449 kernel: Trampoline variant of Tasks RCU enabled.
Apr 29 23:56:17.237466 kernel: Tracing variant of Tasks RCU enabled.
Apr 29 23:56:17.240533 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 29 23:56:17.240587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 29 23:56:17.240606 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 29 23:56:17.240634 kernel: GICv3: 96 SPIs implemented
Apr 29 23:56:17.240651 kernel: GICv3: 0 Extended SPIs implemented
Apr 29 23:56:17.240668 kernel: Root IRQ handler: gic_handle_irq
Apr 29 23:56:17.240685 kernel: GICv3: GICv3 features: 16 PPIs
Apr 29 23:56:17.240702 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 29 23:56:17.240719 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 29 23:56:17.240736 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 29 23:56:17.240754 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 29 23:56:17.240772 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 29 23:56:17.240789 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 29 23:56:17.240807 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 29 23:56:17.240824 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 29 23:56:17.240847 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 29 23:56:17.240865 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 29 23:56:17.240882 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 29 23:56:17.240900 kernel: Console: colour dummy device 80x25
Apr 29 23:56:17.240918 kernel: printk: console [tty1] enabled
Apr 29 23:56:17.240937 kernel: ACPI: Core revision 20230628
Apr 29 23:56:17.240955 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 29 23:56:17.240974 kernel: pid_max: default: 32768 minimum: 301
Apr 29 23:56:17.240992 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 29 23:56:17.241009 kernel: landlock: Up and running.
Apr 29 23:56:17.241034 kernel: SELinux: Initializing.
Apr 29 23:56:17.241052 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 29 23:56:17.241070 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 29 23:56:17.241088 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 29 23:56:17.241106 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 29 23:56:17.241123 kernel: rcu: Hierarchical SRCU implementation.
Apr 29 23:56:17.241142 kernel: rcu: Max phase no-delay instances is 400.
Apr 29 23:56:17.241159 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 29 23:56:17.241181 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 29 23:56:17.241200 kernel: Remapping and enabling EFI services.
Apr 29 23:56:17.241217 kernel: smp: Bringing up secondary CPUs ...
Apr 29 23:56:17.241235 kernel: Detected PIPT I-cache on CPU1
Apr 29 23:56:17.241252 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 29 23:56:17.241270 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 29 23:56:17.241287 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 29 23:56:17.241304 kernel: smp: Brought up 1 node, 2 CPUs
Apr 29 23:56:17.241322 kernel: SMP: Total of 2 processors activated.
Apr 29 23:56:17.241339 kernel: CPU features: detected: 32-bit EL0 Support
Apr 29 23:56:17.241361 kernel: CPU features: detected: 32-bit EL1 Support
Apr 29 23:56:17.241379 kernel: CPU features: detected: CRC32 instructions
Apr 29 23:56:17.241408 kernel: CPU: All CPU(s) started at EL1
Apr 29 23:56:17.241431 kernel: alternatives: applying system-wide alternatives
Apr 29 23:56:17.241449 kernel: devtmpfs: initialized
Apr 29 23:56:17.241467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 29 23:56:17.241515 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 29 23:56:17.241544 kernel: pinctrl core: initialized pinctrl subsystem
Apr 29 23:56:17.241563 kernel: SMBIOS 3.0.0 present.
Apr 29 23:56:17.241590 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 29 23:56:17.241609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 29 23:56:17.241628 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 29 23:56:17.241646 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 29 23:56:17.241665 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 29 23:56:17.241683 kernel: audit: initializing netlink subsys (disabled)
Apr 29 23:56:17.241702 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Apr 29 23:56:17.241725 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 29 23:56:17.241744 kernel: cpuidle: using governor menu
Apr 29 23:56:17.241762 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 29 23:56:17.241781 kernel: ASID allocator initialised with 65536 entries
Apr 29 23:56:17.241799 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 29 23:56:17.241818 kernel: Serial: AMBA PL011 UART driver
Apr 29 23:56:17.241836 kernel: Modules: 17408 pages in range for non-PLT usage
Apr 29 23:56:17.241854 kernel: Modules: 508928 pages in range for PLT usage
Apr 29 23:56:17.241872 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 29 23:56:17.241896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 29 23:56:17.241916 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 29 23:56:17.241934 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 29 23:56:17.241952 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 29 23:56:17.241970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 29 23:56:17.241988 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 29 23:56:17.242006 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 29 23:56:17.242025 kernel: ACPI: Added _OSI(Module Device)
Apr 29 23:56:17.242043 kernel: ACPI: Added _OSI(Processor Device)
Apr 29 23:56:17.242066 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 29 23:56:17.242085 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 29 23:56:17.242103 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 29 23:56:17.242121 kernel: ACPI: Interpreter enabled
Apr 29 23:56:17.242139 kernel: ACPI: Using GIC for interrupt routing
Apr 29 23:56:17.242157 kernel: ACPI: MCFG table detected, 1 entries
Apr 29 23:56:17.242175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Apr 29 23:56:17.244635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 29 23:56:17.244939 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 29 23:56:17.245149 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 29 23:56:17.245366 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Apr 29 23:56:17.245848 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Apr 29 23:56:17.245889 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 29 23:56:17.245909 kernel: acpiphp: Slot [1] registered
Apr 29 23:56:17.245928 kernel: acpiphp: Slot [2] registered
Apr 29 23:56:17.245947 kernel: acpiphp: Slot [3] registered
Apr 29 23:56:17.245977 kernel: acpiphp: Slot [4] registered
Apr 29 23:56:17.245996 kernel: acpiphp: Slot [5] registered
Apr 29 23:56:17.246014 kernel: acpiphp: Slot [6] registered
Apr 29 23:56:17.246032 kernel: acpiphp: Slot [7] registered
Apr 29 23:56:17.246050 kernel: acpiphp: Slot [8] registered
Apr 29 23:56:17.246067 kernel: acpiphp: Slot [9] registered
Apr 29 23:56:17.246085 kernel: acpiphp: Slot [10] registered
Apr 29 23:56:17.246104 kernel: acpiphp: Slot [11] registered
Apr 29 23:56:17.246122 kernel: acpiphp: Slot [12] registered
Apr 29 23:56:17.246140 kernel: acpiphp: Slot [13] registered
Apr 29 23:56:17.246164 kernel: acpiphp: Slot [14] registered
Apr 29 23:56:17.246182 kernel: acpiphp: Slot [15] registered
Apr 29 23:56:17.246199 kernel: acpiphp: Slot [16] registered
Apr 29 23:56:17.246218 kernel: acpiphp: Slot [17] registered
Apr 29 23:56:17.246236 kernel: acpiphp: Slot [18] registered
Apr 29 23:56:17.246254 kernel: acpiphp: Slot [19] registered
Apr 29 23:56:17.246272 kernel: acpiphp: Slot [20] registered
Apr 29 23:56:17.246290 kernel: acpiphp: Slot [21] registered
Apr 29 23:56:17.246308 kernel: acpiphp: Slot [22] registered
Apr 29 23:56:17.246331 kernel: acpiphp: Slot [23] registered
Apr 29 23:56:17.246350 kernel: acpiphp: Slot [24] registered
Apr 29 23:56:17.246368 kernel: acpiphp: Slot [25] registered
Apr 29 23:56:17.246386 kernel: acpiphp: Slot [26] registered
Apr 29 23:56:17.246404 kernel: acpiphp: Slot [27] registered
Apr 29 23:56:17.246422 kernel: acpiphp: Slot [28] registered
Apr 29 23:56:17.246441 kernel: acpiphp: Slot [29] registered
Apr 29 23:56:17.246459 kernel: acpiphp: Slot [30] registered
Apr 29 23:56:17.246477 kernel: acpiphp: Slot [31] registered
Apr 29 23:56:17.247702 kernel: PCI host bridge to bus 0000:00
Apr 29 23:56:17.247990 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 29 23:56:17.248227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 29 23:56:17.248452 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 29 23:56:17.248735 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Apr 29 23:56:17.249019 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 29 23:56:17.249289 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 29 23:56:17.251296 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 29 23:56:17.252041 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 29 23:56:17.252291 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 29 23:56:17.253608 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 29 23:56:17.253936 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 29 23:56:17.254164 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 29 23:56:17.254371 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 29 23:56:17.254718 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 29 23:56:17.254959 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 29 23:56:17.255175 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Apr 29 23:56:17.255388 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Apr 29 23:56:17.255713 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Apr 29 23:56:17.255939 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Apr 29 23:56:17.256166 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Apr 29 23:56:17.256527 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 29 23:56:17.256739 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 29 23:56:17.256944 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 29 23:56:17.256973 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 29 23:56:17.256992 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 29 23:56:17.257012 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 29 23:56:17.257030 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 29 23:56:17.257049 kernel: iommu: Default domain type: Translated
Apr 29 23:56:17.257080 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 29 23:56:17.257098 kernel: efivars: Registered efivars operations
Apr 29 23:56:17.257116 kernel: vgaarb: loaded
Apr 29 23:56:17.257134 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 29 23:56:17.257152 kernel: VFS: Disk quotas dquot_6.6.0
Apr 29 23:56:17.257171 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 29 23:56:17.257189 kernel: pnp: PnP ACPI init
Apr 29 23:56:17.257414 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 29 23:56:17.257451 kernel: pnp: PnP ACPI: found 1 devices
Apr 29 23:56:17.257470 kernel: NET: Registered PF_INET protocol family
Apr 29 23:56:17.257521 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 29 23:56:17.257544 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 29 23:56:17.257563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 29 23:56:17.257582 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 29 23:56:17.257601 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 29 23:56:17.257619 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 29 23:56:17.257637 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 29 23:56:17.257664 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 29 23:56:17.257683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 29 23:56:17.257701 kernel: PCI: CLS 0 bytes, default 64
Apr 29 23:56:17.257719 kernel: kvm [1]: HYP mode not available
Apr 29 23:56:17.257737 kernel: Initialise system trusted keyrings
Apr 29 23:56:17.257755 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 29 23:56:17.257773 kernel: Key type asymmetric registered
Apr 29 23:56:17.257791 kernel: Asymmetric key parser 'x509' registered
Apr 29 23:56:17.257809 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 29 23:56:17.257833 kernel: io scheduler mq-deadline registered
Apr 29 23:56:17.257852 kernel: io scheduler kyber registered
Apr 29 23:56:17.257870 kernel: io scheduler bfq registered
Apr 29 23:56:17.258130 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 29 23:56:17.258161 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 29 23:56:17.258180 kernel: ACPI: button: Power Button [PWRB]
Apr 29 23:56:17.258199 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 29 23:56:17.258217 kernel: ACPI: button: Sleep Button [SLPB]
Apr 29 23:56:17.258243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 29 23:56:17.258263 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 29 23:56:17.258534 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 29 23:56:17.258567 kernel: printk: console [ttyS0] disabled
Apr 29 23:56:17.258586 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 29 23:56:17.258605 kernel: printk: console [ttyS0] enabled
Apr 29 23:56:17.258624 kernel: printk: bootconsole [uart0] disabled
Apr 29 23:56:17.258642 kernel: thunder_xcv, ver 1.0
Apr 29 23:56:17.258660 kernel: thunder_bgx, ver 1.0
Apr 29 23:56:17.258678 kernel: nicpf, ver 1.0
Apr 29 23:56:17.258727 kernel: nicvf, ver 1.0
Apr 29 23:56:17.259019 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 29 23:56:17.259249 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-29T23:56:16 UTC (1745970976)
Apr 29 23:56:17.259282 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 29 23:56:17.259307 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 29 23:56:17.259326 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 29 23:56:17.259344 kernel: watchdog: Hard watchdog permanently disabled
Apr 29 23:56:17.259377 kernel: NET: Registered PF_INET6 protocol family
Apr 29 23:56:17.259400 kernel: Segment Routing with IPv6
Apr 29 23:56:17.259422 kernel: In-situ OAM (IOAM) with IPv6
Apr 29 23:56:17.259444 kernel: NET: Registered PF_PACKET protocol family
Apr 29 23:56:17.259464 kernel: Key type dns_resolver registered
Apr 29 23:56:17.259482 kernel: registered taskstats version 1
Apr 29 23:56:17.259551 kernel: Loading compiled-in X.509 certificates
Apr 29 23:56:17.259571 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: bbef389676bd9584646af24e9e264c7789f8630f'
Apr 29 23:56:17.259590 kernel: Key type .fscrypt registered
Apr 29 23:56:17.259608 kernel: Key type fscrypt-provisioning registered
Apr 29 23:56:17.259639 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 29 23:56:17.259659 kernel: ima: Allocated hash algorithm: sha1
Apr 29 23:56:17.259682 kernel: ima: No architecture policies found
Apr 29 23:56:17.259700 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 29 23:56:17.259719 kernel: clk: Disabling unused clocks
Apr 29 23:56:17.259741 kernel: Freeing unused kernel memory: 39744K
Apr 29 23:56:17.259761 kernel: Run /init as init process
Apr 29 23:56:17.259780 kernel: with arguments:
Apr 29 23:56:17.259799 kernel: /init
Apr 29 23:56:17.259824 kernel: with environment:
Apr 29 23:56:17.259847 kernel: HOME=/
Apr 29 23:56:17.259865 kernel: TERM=linux
Apr 29 23:56:17.259883 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 29 23:56:17.259907 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 29 23:56:17.259932 systemd[1]: Detected virtualization amazon.
Apr 29 23:56:17.259952 systemd[1]: Detected architecture arm64.
Apr 29 23:56:17.259982 systemd[1]: Running in initrd.
Apr 29 23:56:17.260004 systemd[1]: No hostname configured, using default hostname.
Apr 29 23:56:17.260028 systemd[1]: Hostname set to .
Apr 29 23:56:17.260054 systemd[1]: Initializing machine ID from VM UUID.
Apr 29 23:56:17.260076 systemd[1]: Queued start job for default target initrd.target.
Apr 29 23:56:17.260100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 29 23:56:17.260120 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 29 23:56:17.260141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 29 23:56:17.260167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 29 23:56:17.260188 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 29 23:56:17.260209 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 29 23:56:17.260231 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 29 23:56:17.260252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 29 23:56:17.260273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 29 23:56:17.260293 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 29 23:56:17.260317 systemd[1]: Reached target paths.target - Path Units.
Apr 29 23:56:17.260337 systemd[1]: Reached target slices.target - Slice Units.
Apr 29 23:56:17.260357 systemd[1]: Reached target swap.target - Swaps.
Apr 29 23:56:17.260377 systemd[1]: Reached target timers.target - Timer Units.
Apr 29 23:56:17.260397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 29 23:56:17.260417 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 29 23:56:17.260437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 29 23:56:17.260457 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 29 23:56:17.260477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 29 23:56:17.260593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 29 23:56:17.260614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 29 23:56:17.260635 systemd[1]: Reached target sockets.target - Socket Units.
Apr 29 23:56:17.260655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 29 23:56:17.260676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 29 23:56:17.260696 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 29 23:56:17.260716 systemd[1]: Starting systemd-fsck-usr.service...
Apr 29 23:56:17.260737 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 29 23:56:17.260766 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 29 23:56:17.260792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 29 23:56:17.260815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 29 23:56:17.260838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 29 23:56:17.260860 systemd[1]: Finished systemd-fsck-usr.service.
Apr 29 23:56:17.260950 systemd-journald[251]: Collecting audit messages is disabled.
Apr 29 23:56:17.261008 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 29 23:56:17.261030 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 29 23:56:17.261051 systemd-journald[251]: Journal started
Apr 29 23:56:17.261093 systemd-journald[251]: Runtime Journal (/run/log/journal/ec203ed51828a08f4e8393183005fea8) is 8.0M, max 75.3M, 67.3M free.
Apr 29 23:56:17.262993 kernel: Bridge firewalling registered
Apr 29 23:56:17.263061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 29 23:56:17.223074 systemd-modules-load[252]: Inserted module 'overlay'
Apr 29 23:56:17.263635 systemd-modules-load[252]: Inserted module 'br_netfilter'
Apr 29 23:56:17.273708 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 29 23:56:17.276150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 29 23:56:17.291812 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 29 23:56:17.302848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 29 23:56:17.309788 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 29 23:56:17.328690 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 29 23:56:17.353035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 29 23:56:17.360130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 29 23:56:17.369075 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 29 23:56:17.376342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 29 23:56:17.389906 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 29 23:56:17.399811 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 29 23:56:17.411327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 29 23:56:17.426416 dracut-cmdline[287]: dracut-dracut-053
Apr 29 23:56:17.433880 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6e9bced8073e517a5f5178e5412663c3084f53d67852b3dfe0380ce71e6d0edd
Apr 29 23:56:17.493595 systemd-resolved[288]: Positive Trust Anchors:
Apr 29 23:56:17.493658 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 29 23:56:17.493728 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 29 23:56:17.596526 kernel: SCSI subsystem initialized
Apr 29 23:56:17.606518 kernel: Loading iSCSI transport class v2.0-870.
Apr 29 23:56:17.617539 kernel: iscsi: registered transport (tcp)
Apr 29 23:56:17.639528 kernel: iscsi: registered transport (qla4xxx)
Apr 29 23:56:17.639604 kernel: QLogic iSCSI HBA Driver
Apr 29 23:56:17.724524 kernel: random: crng init done
Apr 29 23:56:17.724877 systemd-resolved[288]: Defaulting to hostname 'linux'.
Apr 29 23:56:17.728319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 29 23:56:17.732774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 29 23:56:17.755508 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 29 23:56:17.764826 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 29 23:56:17.807522 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 29 23:56:17.808521 kernel: device-mapper: uevent: version 1.0.3
Apr 29 23:56:17.810549 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 29 23:56:17.878543 kernel: raid6: neonx8 gen() 6635 MB/s
Apr 29 23:56:17.894541 kernel: raid6: neonx4 gen() 6484 MB/s
Apr 29 23:56:17.911539 kernel: raid6: neonx2 gen() 5440 MB/s
Apr 29 23:56:17.928545 kernel: raid6: neonx1 gen() 3934 MB/s
Apr 29 23:56:17.945545 kernel: raid6: int64x8 gen() 3770 MB/s
Apr 29 23:56:17.962540 kernel: raid6: int64x4 gen() 3676 MB/s
Apr 29 23:56:17.979545 kernel: raid6: int64x2 gen() 3575 MB/s
Apr 29 23:56:17.997374 kernel: raid6: int64x1 gen() 2749 MB/s
Apr 29 23:56:17.997445 kernel: raid6: using algorithm neonx8 gen() 6635 MB/s
Apr 29 23:56:18.015356 kernel: raid6: .... xor() 4839 MB/s, rmw enabled
Apr 29 23:56:18.015442 kernel: raid6: using neon recovery algorithm
Apr 29 23:56:18.023540 kernel: xor: measuring software checksum speed
Apr 29 23:56:18.024535 kernel: 8regs : 9926 MB/sec
Apr 29 23:56:18.026809 kernel: 32regs : 10634 MB/sec
Apr 29 23:56:18.026871 kernel: arm64_neon : 9534 MB/sec
Apr 29 23:56:18.026896 kernel: xor: using function: 32regs (10634 MB/sec)
Apr 29 23:56:18.113540 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 29 23:56:18.134552 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 29 23:56:18.151796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 29 23:56:18.189975 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Apr 29 23:56:18.199942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 29 23:56:18.211834 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 29 23:56:18.259842 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Apr 29 23:56:18.325205 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 29 23:56:18.337841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 29 23:56:18.458896 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 29 23:56:18.480012 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 29 23:56:18.519093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 29 23:56:18.522879 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 29 23:56:18.531022 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 29 23:56:18.535249 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 29 23:56:18.557082 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 29 23:56:18.593549 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 29 23:56:18.698655 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 29 23:56:18.698762 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 29 23:56:18.727533 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 29 23:56:18.727826 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 29 23:56:18.728079 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ba:88:52:2f:f1
Apr 29 23:56:18.715595 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 29 23:56:18.715836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 29 23:56:18.718854 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 29 23:56:18.722839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 29 23:56:18.745761 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 29 23:56:18.745807 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 29 23:56:18.723141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 29 23:56:18.725409 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 29 23:56:18.737635 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:56:18.759637 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 29 23:56:18.741719 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 29 23:56:18.772041 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 29 23:56:18.772112 kernel: GPT:9289727 != 16777215
Apr 29 23:56:18.773790 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 29 23:56:18.775670 kernel: GPT:9289727 != 16777215
Apr 29 23:56:18.775758 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 29 23:56:18.775787 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 29 23:56:18.787975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 29 23:56:18.798830 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 29 23:56:18.856542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 29 23:56:18.878554 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (540)
Apr 29 23:56:18.924598 kernel: BTRFS: device fsid 9647859b-527c-478f-8aa1-9dfa3fa871e3 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (545)
Apr 29 23:56:19.007402 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 29 23:56:19.029143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 29 23:56:19.068846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 29 23:56:19.074754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 29 23:56:19.092996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 29 23:56:19.106807 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 29 23:56:19.121943 disk-uuid[663]: Primary Header is updated.
Apr 29 23:56:19.121943 disk-uuid[663]: Secondary Entries is updated.
Apr 29 23:56:19.121943 disk-uuid[663]: Secondary Header is updated.
Apr 29 23:56:19.134522 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 29 23:56:20.152406 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 29 23:56:20.152478 disk-uuid[664]: The operation has completed successfully.
Apr 29 23:56:20.343339 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 29 23:56:20.343935 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 29 23:56:20.396888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 29 23:56:20.406691 sh[924]: Success
Apr 29 23:56:20.433543 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 29 23:56:20.549763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 29 23:56:20.575819 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 29 23:56:20.585938 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 29 23:56:20.610089 kernel: BTRFS info (device dm-0): first mount of filesystem 9647859b-527c-478f-8aa1-9dfa3fa871e3
Apr 29 23:56:20.610174 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 29 23:56:20.612088 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 29 23:56:20.614614 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 29 23:56:20.614716 kernel: BTRFS info (device dm-0): using free space tree
Apr 29 23:56:20.714523 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 29 23:56:20.755791 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 29 23:56:20.759940 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 29 23:56:20.777821 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 29 23:56:20.783758 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 29 23:56:20.813022 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a
Apr 29 23:56:20.813117 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 29 23:56:20.814524 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 29 23:56:20.825521 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 29 23:56:20.846207 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 29 23:56:20.848571 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a
Apr 29 23:56:20.860849 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 29 23:56:20.873953 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 29 23:56:20.974032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 29 23:56:20.991846 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 29 23:56:21.048103 systemd-networkd[1116]: lo: Link UP
Apr 29 23:56:21.048129 systemd-networkd[1116]: lo: Gained carrier
Apr 29 23:56:21.052222 systemd-networkd[1116]: Enumeration completed
Apr 29 23:56:21.052963 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 29 23:56:21.052970 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 29 23:56:21.054578 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 29 23:56:21.060660 systemd-networkd[1116]: eth0: Link UP
Apr 29 23:56:21.060669 systemd-networkd[1116]: eth0: Gained carrier
Apr 29 23:56:21.060689 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 29 23:56:21.075075 systemd[1]: Reached target network.target - Network.
Apr 29 23:56:21.096613 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.28.53/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 29 23:56:21.256953 ignition[1037]: Ignition 2.20.0
Apr 29 23:56:21.256982 ignition[1037]: Stage: fetch-offline
Apr 29 23:56:21.257455 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:21.262066 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 29 23:56:21.257892 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:21.258844 ignition[1037]: Ignition finished successfully
Apr 29 23:56:21.281923 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 29 23:56:21.303977 ignition[1128]: Ignition 2.20.0
Apr 29 23:56:21.304009 ignition[1128]: Stage: fetch
Apr 29 23:56:21.305397 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:21.305426 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:21.305719 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:21.328080 ignition[1128]: PUT result: OK
Apr 29 23:56:21.331268 ignition[1128]: parsed url from cmdline: ""
Apr 29 23:56:21.331291 ignition[1128]: no config URL provided
Apr 29 23:56:21.331307 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign"
Apr 29 23:56:21.331334 ignition[1128]: no config at "/usr/lib/ignition/user.ign"
Apr 29 23:56:21.331368 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:21.332994 ignition[1128]: PUT result: OK
Apr 29 23:56:21.335017 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 29 23:56:21.342534 ignition[1128]: GET result: OK
Apr 29 23:56:21.343998 ignition[1128]: parsing config with SHA512: dda276b43c6726d54b5a99809371efffa3b802c82b458268b000030e9a6cb83dae2561fed870a297de3cd411f6d298977e861ada411b53d86686b50186f02c02
Apr 29 23:56:21.359125 unknown[1128]: fetched base config from "system"
Apr 29 23:56:21.359160 unknown[1128]: fetched base config from "system"
Apr 29 23:56:21.359175 unknown[1128]: fetched user config from "aws"
Apr 29 23:56:21.364561 ignition[1128]: fetch: fetch complete
Apr 29 23:56:21.364580 ignition[1128]: fetch: fetch passed
Apr 29 23:56:21.364680 ignition[1128]: Ignition finished successfully
Apr 29 23:56:21.370355 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 29 23:56:21.379842 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 29 23:56:21.414056 ignition[1135]: Ignition 2.20.0
Apr 29 23:56:21.414085 ignition[1135]: Stage: kargs
Apr 29 23:56:21.415086 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:21.415113 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:21.415265 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:21.418630 ignition[1135]: PUT result: OK
Apr 29 23:56:21.425730 ignition[1135]: kargs: kargs passed
Apr 29 23:56:21.425940 ignition[1135]: Ignition finished successfully
Apr 29 23:56:21.431000 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 29 23:56:21.440767 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 29 23:56:21.472052 ignition[1141]: Ignition 2.20.0
Apr 29 23:56:21.472860 ignition[1141]: Stage: disks
Apr 29 23:56:21.473512 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:21.473542 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:21.473713 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:21.475980 ignition[1141]: PUT result: OK
Apr 29 23:56:21.486284 ignition[1141]: disks: disks passed
Apr 29 23:56:21.486389 ignition[1141]: Ignition finished successfully
Apr 29 23:56:21.490621 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 29 23:56:21.491167 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 29 23:56:21.492058 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 29 23:56:21.493364 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 29 23:56:21.495163 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 29 23:56:21.495461 systemd[1]: Reached target basic.target - Basic System.
Apr 29 23:56:21.515078 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 29 23:56:21.558063 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 29 23:56:21.561783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 29 23:56:21.570791 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 29 23:56:21.671530 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cd2ccabc-5b27-4350-bc86-21c9a8411827 r/w with ordered data mode. Quota mode: none.
Apr 29 23:56:21.672117 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 29 23:56:21.675924 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 29 23:56:21.687719 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 29 23:56:21.693228 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 29 23:56:21.699745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 29 23:56:21.700799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 29 23:56:21.700864 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 29 23:56:21.722530 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1168)
Apr 29 23:56:21.728557 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a
Apr 29 23:56:21.728641 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 29 23:56:21.728683 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 29 23:56:21.730954 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 29 23:56:21.739517 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 29 23:56:21.741814 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 29 23:56:21.750100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 29 23:56:22.118191 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory
Apr 29 23:56:22.137745 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory
Apr 29 23:56:22.147237 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory
Apr 29 23:56:22.157548 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 29 23:56:22.314702 systemd-networkd[1116]: eth0: Gained IPv6LL
Apr 29 23:56:22.526154 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 29 23:56:22.534736 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 29 23:56:22.550920 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 29 23:56:22.567883 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 29 23:56:22.570633 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a
Apr 29 23:56:22.604075 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 29 23:56:22.616618 ignition[1281]: INFO : Ignition 2.20.0
Apr 29 23:56:22.616618 ignition[1281]: INFO : Stage: mount
Apr 29 23:56:22.619991 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:22.619991 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:22.624338 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:22.627403 ignition[1281]: INFO : PUT result: OK
Apr 29 23:56:22.632981 ignition[1281]: INFO : mount: mount passed
Apr 29 23:56:22.635664 ignition[1281]: INFO : Ignition finished successfully
Apr 29 23:56:22.638608 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 29 23:56:22.646719 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 29 23:56:22.687976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 29 23:56:22.711553 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1292)
Apr 29 23:56:22.715644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a221b5e-9ac2-4c84-b127-2e52009cde8a
Apr 29 23:56:22.715719 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 29 23:56:22.715747 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 29 23:56:22.722533 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 29 23:56:22.726552 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 29 23:56:22.762744 ignition[1309]: INFO : Ignition 2.20.0
Apr 29 23:56:22.765939 ignition[1309]: INFO : Stage: files
Apr 29 23:56:22.765939 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:22.765939 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:22.765939 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:22.774621 ignition[1309]: INFO : PUT result: OK
Apr 29 23:56:22.779251 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Apr 29 23:56:22.794317 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 29 23:56:22.794317 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 29 23:56:22.848376 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 29 23:56:22.851644 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 29 23:56:22.854583 unknown[1309]: wrote ssh authorized keys file for user: core
Apr 29 23:56:22.857042 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 29 23:56:22.863538 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 29 23:56:22.866894 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 29 23:56:22.866894 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 29 23:56:22.866894 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 29 23:56:22.960398 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 29 23:56:23.125410 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 29 23:56:23.129214 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 29 23:56:23.129214 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 29 23:56:23.561227 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 29 23:56:23.693560 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 29 23:56:23.693560 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 29 23:56:23.702901 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 29 23:56:23.984531 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 29 23:56:24.321339 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 29 23:56:24.326267 ignition[1309]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 29 23:56:24.365964 ignition[1309]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 29 23:56:24.365964 ignition[1309]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 29 23:56:24.365964 ignition[1309]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 29 23:56:24.365964 ignition[1309]: INFO : files: files passed
Apr 29 23:56:24.365964 ignition[1309]: INFO : Ignition finished successfully
Apr 29 23:56:24.334649 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 29 23:56:24.369961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 29 23:56:24.383123 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 29 23:56:24.389383 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 29 23:56:24.389604 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 29 23:56:24.420965 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 29 23:56:24.420965 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 29 23:56:24.427440 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 29 23:56:24.434207 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 29 23:56:24.441366 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 29 23:56:24.459348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 29 23:56:24.516327 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 29 23:56:24.518359 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 29 23:56:24.524986 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 29 23:56:24.527080 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 29 23:56:24.529938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 29 23:56:24.545568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 29 23:56:24.572581 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 29 23:56:24.585825 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 29 23:56:24.613965 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 29 23:56:24.618870 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 29 23:56:24.621444 systemd[1]: Stopped target timers.target - Timer Units.
Apr 29 23:56:24.623594 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 29 23:56:24.623860 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 29 23:56:24.632390 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 29 23:56:24.638519 systemd[1]: Stopped target basic.target - Basic System.
Apr 29 23:56:24.642016 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 29 23:56:24.645888 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 29 23:56:24.650556 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 29 23:56:24.656296 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 29 23:56:24.658415 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 29 23:56:24.660970 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 29 23:56:24.664304 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 29 23:56:24.667331 systemd[1]: Stopped target swap.target - Swaps.
Apr 29 23:56:24.670588 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 29 23:56:24.671294 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 29 23:56:24.675617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 29 23:56:24.678896 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 29 23:56:24.683471 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 29 23:56:24.685389 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 29 23:56:24.687998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 29 23:56:24.688285 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 29 23:56:24.695202 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 29 23:56:24.696482 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 29 23:56:24.712218 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 29 23:56:24.713017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 29 23:56:24.726787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 29 23:56:24.736698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 29 23:56:24.738800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 29 23:56:24.742910 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 29 23:56:24.749300 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 29 23:56:24.751766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 29 23:56:24.770050 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 29 23:56:24.772312 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 29 23:56:24.783058 ignition[1361]: INFO : Ignition 2.20.0
Apr 29 23:56:24.786613 ignition[1361]: INFO : Stage: umount
Apr 29 23:56:24.786613 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 29 23:56:24.786613 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 29 23:56:24.786613 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 29 23:56:24.796691 ignition[1361]: INFO : PUT result: OK
Apr 29 23:56:24.805166 ignition[1361]: INFO : umount: umount passed
Apr 29 23:56:24.806948 ignition[1361]: INFO : Ignition finished successfully
Apr 29 23:56:24.810971 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 29 23:56:24.811169 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 29 23:56:24.815334 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 29 23:56:24.815431 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 29 23:56:24.819101 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 29 23:56:24.819199 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 29 23:56:24.819740 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 29 23:56:24.819814 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 29 23:56:24.820432 systemd[1]: Stopped target network.target - Network.
Apr 29 23:56:24.850942 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 29 23:56:24.851533 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 29 23:56:24.857308 systemd[1]: Stopped target paths.target - Path Units.
Apr 29 23:56:24.859239 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 29 23:56:24.864586 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 29 23:56:24.867630 systemd[1]: Stopped target slices.target - Slice Units.
Apr 29 23:56:24.869413 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 29 23:56:24.871932 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 29 23:56:24.872018 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 29 23:56:24.877632 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 29 23:56:24.877723 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 29 23:56:24.885394 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 29 23:56:24.885522 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 29 23:56:24.894711 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 29 23:56:24.894827 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 29 23:56:24.897231 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 29 23:56:24.900280 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 29 23:56:24.911587 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 29 23:56:24.912651 systemd-networkd[1116]: eth0: DHCPv6 lease lost
Apr 29 23:56:24.912694 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 29 23:56:24.913269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 29 23:56:24.919426 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 29 23:56:24.921445 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 29 23:56:24.930068 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 29 23:56:24.932697 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 29 23:56:24.939117 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 29 23:56:24.941223 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 29 23:56:24.947346 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 29 23:56:24.949338 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 29 23:56:24.961662 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 29 23:56:24.964208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 29 23:56:24.964334 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 29 23:56:24.967271 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 29 23:56:24.967375 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 29 23:56:24.970082 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 29 23:56:24.970180 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 29 23:56:24.973128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 29 23:56:24.973225 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 29 23:56:24.978620 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 29 23:56:25.020070 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 29 23:56:25.020618 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 29 23:56:25.033866 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 29 23:56:25.034115 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 29 23:56:25.039118 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 29 23:56:25.039225 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 29 23:56:25.042116 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 29 23:56:25.042221 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 29 23:56:25.052470 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 29 23:56:25.052610 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 29 23:56:25.054910 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 29 23:56:25.055013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 29 23:56:25.070797 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 29 23:56:25.075031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 29 23:56:25.075155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 29 23:56:25.078013 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 29 23:56:25.078108 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 29 23:56:25.081220 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 29 23:56:25.081306 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 29 23:56:25.083929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 29 23:56:25.084013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 29 23:56:25.087151 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 29 23:56:25.087347 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 29 23:56:25.128539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 29 23:56:25.128992 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 29 23:56:25.135803 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 29 23:56:25.150508 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 29 23:56:25.187423 systemd[1]: Switching root.
Apr 29 23:56:25.226516 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Apr 29 23:56:25.226597 systemd-journald[251]: Journal stopped
Apr 29 23:56:28.139563 kernel: SELinux: policy capability network_peer_controls=1
Apr 29 23:56:28.139744 kernel: SELinux: policy capability open_perms=1
Apr 29 23:56:28.139787 kernel: SELinux: policy capability extended_socket_class=1
Apr 29 23:56:28.139821 kernel: SELinux: policy capability always_check_network=0
Apr 29 23:56:28.140337 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 29 23:56:28.140391 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 29 23:56:28.140436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 29 23:56:28.140469 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 29 23:56:28.140926 kernel: audit: type=1403 audit(1745970986.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 29 23:56:28.140990 systemd[1]: Successfully loaded SELinux policy in 76.696ms.
Apr 29 23:56:28.141047 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.397ms.
Apr 29 23:56:28.141084 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 29 23:56:28.141118 systemd[1]: Detected virtualization amazon.
Apr 29 23:56:28.141150 systemd[1]: Detected architecture arm64.
Apr 29 23:56:28.141182 systemd[1]: Detected first boot.
Apr 29 23:56:28.141223 systemd[1]: Initializing machine ID from VM UUID.
Apr 29 23:56:28.141258 zram_generator::config[1423]: No configuration found.
Apr 29 23:56:28.141305 systemd[1]: Populated /etc with preset unit settings.
Apr 29 23:56:28.141338 systemd[1]: Queued start job for default target multi-user.target.
Apr 29 23:56:28.141371 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 29 23:56:28.141403 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 29 23:56:28.141436 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 29 23:56:28.141469 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 29 23:56:28.141633 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 29 23:56:28.141674 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 29 23:56:28.141710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 29 23:56:28.141743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 29 23:56:28.141776 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 29 23:56:28.141809 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 29 23:56:28.141843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 29 23:56:28.141875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 29 23:56:28.141912 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 29 23:56:28.141947 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 29 23:56:28.141981 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 29 23:56:28.142012 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 29 23:56:28.142042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 29 23:56:28.142074 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 29 23:56:28.142106 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 29 23:56:28.142138 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 29 23:56:28.142172 systemd[1]: Reached target slices.target - Slice Units.
Apr 29 23:56:28.142205 systemd[1]: Reached target swap.target - Swaps.
Apr 29 23:56:28.142242 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 29 23:56:28.142275 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 29 23:56:28.142306 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 29 23:56:28.142344 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 29 23:56:28.142376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 29 23:56:28.142408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 29 23:56:28.142439 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 29 23:56:28.142471 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 29 23:56:28.142600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 29 23:56:28.142643 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 29 23:56:28.142696 systemd[1]: Mounting media.mount - External Media Directory...
Apr 29 23:56:28.142730 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 29 23:56:28.142769 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 29 23:56:28.142804 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 29 23:56:28.142836 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 29 23:56:28.142868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 29 23:56:28.142906 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 29 23:56:28.142937 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 29 23:56:28.142971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 29 23:56:28.143016 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 29 23:56:28.143050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 29 23:56:28.143081 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 29 23:56:28.143112 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 29 23:56:28.143146 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 29 23:56:28.143189 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 29 23:56:28.143232 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 29 23:56:28.143263 kernel: fuse: init (API version 7.39)
Apr 29 23:56:28.143292 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 29 23:56:28.143322 kernel: loop: module loaded
Apr 29 23:56:28.143354 kernel: ACPI: bus type drm_connector registered
Apr 29 23:56:28.143384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 29 23:56:28.143416 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 29 23:56:28.143449 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 29 23:56:28.143479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 29 23:56:28.143558 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 29 23:56:28.143662 systemd-journald[1527]: Collecting audit messages is disabled.
Apr 29 23:56:28.143736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 29 23:56:28.143771 systemd[1]: Mounted media.mount - External Media Directory.
Apr 29 23:56:28.143801 systemd-journald[1527]: Journal started
Apr 29 23:56:28.143847 systemd-journald[1527]: Runtime Journal (/run/log/journal/ec203ed51828a08f4e8393183005fea8) is 8.0M, max 75.3M, 67.3M free.
Apr 29 23:56:28.155671 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 29 23:56:28.156423 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 29 23:56:28.162009 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 29 23:56:28.164959 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 29 23:56:28.167715 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 29 23:56:28.174072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 29 23:56:28.177418 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 29 23:56:28.177969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 29 23:56:28.181389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 29 23:56:28.181873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 29 23:56:28.187206 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 29 23:56:28.187705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 29 23:56:28.193626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 29 23:56:28.194055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 29 23:56:28.199466 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 29 23:56:28.201994 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 29 23:56:28.205470 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 29 23:56:28.207335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 29 23:56:28.211010 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 29 23:56:28.214908 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 29 23:56:28.218950 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 29 23:56:28.247188 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 29 23:56:28.258751 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 29 23:56:28.265668 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 29 23:56:28.269763 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 29 23:56:28.285862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 29 23:56:28.315242 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 29 23:56:28.317751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 29 23:56:28.328966 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 29 23:56:28.331996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 29 23:56:28.346835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 29 23:56:28.367781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 29 23:56:28.376608 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 29 23:56:28.379244 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 29 23:56:28.388317 systemd-journald[1527]: Time spent on flushing to /var/log/journal/ec203ed51828a08f4e8393183005fea8 is 73.705ms for 899 entries.
Apr 29 23:56:28.388317 systemd-journald[1527]: System Journal (/var/log/journal/ec203ed51828a08f4e8393183005fea8) is 8.0M, max 195.6M, 187.6M free.
Apr 29 23:56:28.491839 systemd-journald[1527]: Received client request to flush runtime journal.
Apr 29 23:56:28.425319 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 29 23:56:28.428142 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 29 23:56:28.484116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 29 23:56:28.488750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 29 23:56:28.515928 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 29 23:56:28.519619 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 29 23:56:28.534087 systemd-tmpfiles[1576]: ACLs are not supported, ignoring.
Apr 29 23:56:28.534881 systemd-tmpfiles[1576]: ACLs are not supported, ignoring.
Apr 29 23:56:28.546958 udevadm[1591]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 29 23:56:28.555786 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 29 23:56:28.573043 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 29 23:56:28.677755 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 29 23:56:28.690894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 29 23:56:28.739821 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Apr 29 23:56:28.739864 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Apr 29 23:56:28.752368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 29 23:56:29.469639 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 29 23:56:29.479892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 29 23:56:29.543380 systemd-udevd[1604]: Using default interface naming scheme 'v255'.
Apr 29 23:56:29.615894 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 29 23:56:29.630974 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 29 23:56:29.691229 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 29 23:56:29.760262 (udev-worker)[1622]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:56:29.767977 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 29 23:56:29.876739 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 29 23:56:30.059302 systemd-networkd[1609]: lo: Link UP
Apr 29 23:56:30.060051 systemd-networkd[1609]: lo: Gained carrier
Apr 29 23:56:30.063521 systemd-networkd[1609]: Enumeration completed
Apr 29 23:56:30.064036 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 29 23:56:30.066881 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 29 23:56:30.067098 systemd-networkd[1609]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 29 23:56:30.071710 systemd-networkd[1609]: eth0: Link UP
Apr 29 23:56:30.072132 systemd-networkd[1609]: eth0: Gained carrier
Apr 29 23:56:30.072171 systemd-networkd[1609]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 29 23:56:30.079288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 29 23:56:30.087757 systemd-networkd[1609]: eth0: DHCPv4 address 172.31.28.53/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 29 23:56:30.108005 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 29 23:56:30.140566 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1621)
Apr 29 23:56:30.309180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 29 23:56:30.358787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 29 23:56:30.375709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 29 23:56:30.402786 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 29 23:56:30.443646 lvm[1733]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 29 23:56:30.483301 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 29 23:56:30.489750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 29 23:56:30.501817 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 29 23:56:30.521371 lvm[1736]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 29 23:56:30.559901 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 29 23:56:30.563257 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 29 23:56:30.566652 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 29 23:56:30.566898 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 29 23:56:30.568993 systemd[1]: Reached target machines.target - Containers.
Apr 29 23:56:30.572938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 29 23:56:30.582877 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 29 23:56:30.589163 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 29 23:56:30.592139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 29 23:56:30.601768 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 29 23:56:30.609807 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 29 23:56:30.619842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 29 23:56:30.626032 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 29 23:56:30.661580 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 29 23:56:30.675635 kernel: loop0: detected capacity change from 0 to 116808
Apr 29 23:56:30.676697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 29 23:56:30.679992 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 29 23:56:30.780061 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 29 23:56:30.805527 kernel: loop1: detected capacity change from 0 to 53784
Apr 29 23:56:30.882541 kernel: loop2: detected capacity change from 0 to 113536
Apr 29 23:56:30.986557 kernel: loop3: detected capacity change from 0 to 194096
Apr 29 23:56:31.110694 kernel: loop4: detected capacity change from 0 to 116808
Apr 29 23:56:31.135693 kernel: loop5: detected capacity change from 0 to 53784
Apr 29 23:56:31.153637 kernel: loop6: detected capacity change from 0 to 113536
Apr 29 23:56:31.185536 kernel: loop7: detected capacity change from 0 to 194096
Apr 29 23:56:31.220914 (sd-merge)[1757]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 29 23:56:31.221956 (sd-merge)[1757]: Merged extensions into '/usr'.
Apr 29 23:56:31.231268 systemd[1]: Reloading requested from client PID 1744 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 29 23:56:31.231470 systemd[1]: Reloading...
Apr 29 23:56:31.378599 zram_generator::config[1788]: No configuration found.
Apr 29 23:56:31.689932 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 29 23:56:31.849956 systemd[1]: Reloading finished in 616 ms.
Apr 29 23:56:31.882972 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 29 23:56:31.901830 systemd[1]: Starting ensure-sysext.service...
Apr 29 23:56:31.913966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 29 23:56:31.938197 systemd[1]: Reloading requested from client PID 1842 ('systemctl') (unit ensure-sysext.service)...
Apr 29 23:56:31.938243 systemd[1]: Reloading...
Apr 29 23:56:31.981596 systemd-tmpfiles[1843]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 29 23:56:31.982299 systemd-tmpfiles[1843]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 29 23:56:31.989476 systemd-tmpfiles[1843]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 29 23:56:31.991137 systemd-tmpfiles[1843]: ACLs are not supported, ignoring.
Apr 29 23:56:31.991480 systemd-tmpfiles[1843]: ACLs are not supported, ignoring.
Apr 29 23:56:31.999567 systemd-tmpfiles[1843]: Detected autofs mount point /boot during canonicalization of boot.
Apr 29 23:56:32.000733 systemd-tmpfiles[1843]: Skipping /boot
Apr 29 23:56:32.030610 systemd-tmpfiles[1843]: Detected autofs mount point /boot during canonicalization of boot.
Apr 29 23:56:32.030829 systemd-tmpfiles[1843]: Skipping /boot
Apr 29 23:56:32.056178 ldconfig[1740]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 29 23:56:32.106778 systemd-networkd[1609]: eth0: Gained IPv6LL
Apr 29 23:56:32.152558 zram_generator::config[1878]: No configuration found.
Apr 29 23:56:32.383418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 29 23:56:32.544848 systemd[1]: Reloading finished in 603 ms.
Apr 29 23:56:32.573699 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 29 23:56:32.577761 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 29 23:56:32.586545 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 29 23:56:32.605834 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 29 23:56:32.620838 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 29 23:56:32.627004 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 29 23:56:32.639815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 29 23:56:32.654807 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 29 23:56:32.678070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 29 23:56:32.690275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 29 23:56:32.699350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 29 23:56:32.719944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 29 23:56:32.722284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 29 23:56:32.736057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 29 23:56:32.736436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 29 23:56:32.753843 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 29 23:56:32.758932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 29 23:56:32.759976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 29 23:56:32.764446 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 29 23:56:32.767117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 29 23:56:32.799100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 29 23:56:32.824234 systemd[1]: Finished ensure-sysext.service.
Apr 29 23:56:32.827681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 29 23:56:32.842880 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 29 23:56:32.847864 augenrules[1979]: No rules
Apr 29 23:56:32.865752 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 29 23:56:32.875764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 29 23:56:32.899956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 29 23:56:32.902476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 29 23:56:32.902606 systemd[1]: Reached target time-set.target - System Time Set.
Apr 29 23:56:32.916349 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 29 23:56:32.921986 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 29 23:56:32.922542 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 29 23:56:32.925275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 29 23:56:32.927202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 29 23:56:32.931871 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 29 23:56:32.935003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 29 23:56:32.937899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 29 23:56:32.938522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 29 23:56:32.948207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 29 23:56:32.948722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 29 23:56:32.973180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 29 23:56:32.973352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 29 23:56:32.989903 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 29 23:56:33.003271 systemd-resolved[1941]: Positive Trust Anchors:
Apr 29 23:56:33.003335 systemd-resolved[1941]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 29 23:56:33.003399 systemd-resolved[1941]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 29 23:56:33.012228 systemd-resolved[1941]: Defaulting to hostname 'linux'.
Apr 29 23:56:33.016121 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 29 23:56:33.018660 systemd[1]: Reached target network.target - Network.
Apr 29 23:56:33.020735 systemd[1]: Reached target network-online.target - Network is Online.
Apr 29 23:56:33.023024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 29 23:56:33.075174 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 29 23:56:33.077969 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 29 23:56:33.078053 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 29 23:56:33.080899 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 29 23:56:33.084059 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 29 23:56:33.086771 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 29 23:56:33.089068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 29 23:56:33.091547 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 29 23:56:33.094009 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 29 23:56:33.094078 systemd[1]: Reached target paths.target - Path Units.
Apr 29 23:56:33.095844 systemd[1]: Reached target timers.target - Timer Units.
Apr 29 23:56:33.099175 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 29 23:56:33.104936 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 29 23:56:33.109166 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 29 23:56:33.114454 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 29 23:56:33.116637 systemd[1]: Reached target sockets.target - Socket Units.
Apr 29 23:56:33.118548 systemd[1]: Reached target basic.target - Basic System.
Apr 29 23:56:33.120658 systemd[1]: System is tainted: cgroupsv1
Apr 29 23:56:33.120734 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 29 23:56:33.120784 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 29 23:56:33.128882 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 29 23:56:33.137870 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 29 23:56:33.154857 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 29 23:56:33.165714 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 29 23:56:33.172824 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 29 23:56:33.174893 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 29 23:56:33.188840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 29 23:56:33.198960 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 29 23:56:33.217410 jq[2009]: false
Apr 29 23:56:33.228763 systemd[1]: Started ntpd.service - Network Time Service.
Apr 29 23:56:33.241667 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 29 23:56:33.262681 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 29 23:56:33.285836 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 29 23:56:33.288851 extend-filesystems[2010]: Found loop4
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found loop5
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found loop6
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found loop7
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found nvme0n1
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found nvme0n1p1
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found nvme0n1p2
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found nvme0n1p3
Apr 29 23:56:33.294671 extend-filesystems[2010]: Found usr
Apr 29 23:56:33.293802 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 29 23:56:33.316054 extend-filesystems[2010]: Found nvme0n1p4
Apr 29 23:56:33.316054 extend-filesystems[2010]: Found nvme0n1p6
Apr 29 23:56:33.316054 extend-filesystems[2010]: Found nvme0n1p7
Apr 29 23:56:33.316054 extend-filesystems[2010]: Found nvme0n1p9
Apr 29 23:56:33.316054 extend-filesystems[2010]: Checking size of /dev/nvme0n1p9
Apr 29 23:56:33.320007 dbus-daemon[2008]: [system] SELinux support is enabled
Apr 29 23:56:33.323086 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 29 23:56:33.345058 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 29 23:56:33.348874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 29 23:56:33.351017 dbus-daemon[2008]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1609 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 29 23:56:33.371781 systemd[1]: Starting update-engine.service - Update Engine...
Apr 29 23:56:33.385790 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 29 23:56:33.392038 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 29 23:56:33.411840 ntpd[2015]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:35:04 UTC 2025 (1): Starting
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: ntpd 4.2.8p17@1.4004-o Tue Apr 29 21:35:04 UTC 2025 (1): Starting
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: ----------------------------------------------------
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: ntp-4 is maintained by Network Time Foundation,
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: corporation. Support and training for ntp-4 are
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: available at https://www.nwtime.org/support
Apr 29 23:56:33.413019 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: ----------------------------------------------------
Apr 29 23:56:33.411895 ntpd[2015]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 29 23:56:33.411915 ntpd[2015]: ----------------------------------------------------
Apr 29 23:56:33.411934 ntpd[2015]: ntp-4 is maintained by Network Time Foundation,
Apr 29 23:56:33.411952 ntpd[2015]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 29 23:56:33.411970 ntpd[2015]: corporation. Support and training for ntp-4 are
Apr 29 23:56:33.411988 ntpd[2015]: available at https://www.nwtime.org/support
Apr 29 23:56:33.412006 ntpd[2015]: ----------------------------------------------------
Apr 29 23:56:33.419847 ntpd[2015]: proto: precision = 0.108 usec (-23)
Apr 29 23:56:33.423713 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: proto: precision = 0.108 usec (-23)
Apr 29 23:56:33.423713 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: basedate set to 2025-04-17
Apr 29 23:56:33.423713 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: gps base set to 2025-04-20 (week 2363)
Apr 29 23:56:33.420265 ntpd[2015]: basedate set to 2025-04-17
Apr 29 23:56:33.420289 ntpd[2015]: gps base set to 2025-04-20 (week 2363)
Apr 29 23:56:33.425279 ntpd[2015]: Listen and drop on 0 v6wildcard [::]:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen and drop on 0 v6wildcard [::]:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen normally on 2 lo 127.0.0.1:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen normally on 3 eth0 172.31.28.53:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen normally on 4 lo [::1]:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listen normally on 5 eth0 [fe80::4ba:88ff:fe52:2ff1%2]:123
Apr 29 23:56:33.428414 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: Listening on routing socket on fd #22 for interface updates
Apr 29 23:56:33.427630 ntpd[2015]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 29 23:56:33.427926 ntpd[2015]: Listen normally on 2 lo 127.0.0.1:123
Apr 29 23:56:33.427992 ntpd[2015]: Listen normally on 3 eth0 172.31.28.53:123
Apr 29 23:56:33.428062 ntpd[2015]: Listen normally on 4 lo [::1]:123
Apr 29 23:56:33.428145 ntpd[2015]: Listen normally on 5 eth0 [fe80::4ba:88ff:fe52:2ff1%2]:123
Apr 29 23:56:33.428211 ntpd[2015]: Listening on routing socket on fd #22 for interface updates
Apr 29 23:56:33.435447 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 29 23:56:33.436686 ntpd[2015]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 29 23:56:33.438658 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 29 23:56:33.438658 ntpd[2015]: 29 Apr 23:56:33 ntpd[2015]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 29 23:56:33.436737 ntpd[2015]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 29 23:56:33.440641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 29 23:56:33.455273 jq[2033]: true
Apr 29 23:56:33.481537 coreos-metadata[2006]: Apr 29 23:56:33.476 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 29 23:56:33.478265 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 29 23:56:33.485938 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 29 23:56:33.496447 coreos-metadata[2006]: Apr 29 23:56:33.496 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 29 23:56:33.510600 coreos-metadata[2006]: Apr 29 23:56:33.510 INFO Fetch successful
Apr 29 23:56:33.510600 coreos-metadata[2006]: Apr 29 23:56:33.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 29 23:56:33.516110 coreos-metadata[2006]: Apr 29 23:56:33.515 INFO Fetch successful
Apr 29 23:56:33.516110 coreos-metadata[2006]: Apr 29 23:56:33.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.523 INFO Fetch successful
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.523 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.527 INFO Fetch successful
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.534 INFO Fetch failed with 404: resource not found
Apr 29 23:56:33.540979 coreos-metadata[2006]: Apr 29 23:56:33.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 29 23:56:33.551723 extend-filesystems[2010]: Resized partition /dev/nvme0n1p9
Apr 29 23:56:33.553815 coreos-metadata[2006]: Apr 29 23:56:33.553 INFO Fetch successful
Apr 29 23:56:33.553815 coreos-metadata[2006]: Apr 29 23:56:33.553 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 29 23:56:33.561017 systemd[1]: motdgen.service: Deactivated successfully.
Apr 29 23:56:33.567972 extend-filesystems[2061]: resize2fs 1.47.1 (20-May-2024)
Apr 29 23:56:33.571747 coreos-metadata[2006]: Apr 29 23:56:33.563 INFO Fetch successful
Apr 29 23:56:33.571747 coreos-metadata[2006]: Apr 29 23:56:33.563 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 29 23:56:33.573585 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 29 23:56:33.581679 coreos-metadata[2006]: Apr 29 23:56:33.580 INFO Fetch successful
Apr 29 23:56:33.581679 coreos-metadata[2006]: Apr 29 23:56:33.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 29 23:56:33.580209 (ntainerd)[2056]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 29 23:56:33.593024 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Apr 29 23:56:33.593116 coreos-metadata[2006]: Apr 29 23:56:33.592 INFO Fetch successful
Apr 29 23:56:33.593116 coreos-metadata[2006]: Apr 29 23:56:33.592 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 29 23:56:33.601216 coreos-metadata[2006]: Apr 29 23:56:33.595 INFO Fetch successful
Apr 29 23:56:33.601457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 29 23:56:33.620564 jq[2055]: true
Apr 29 23:56:33.631958 dbus-daemon[2008]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 29 23:56:33.674010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 29 23:56:33.674083 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 29 23:56:33.695743 update_engine[2031]: I20250429 23:56:33.692438 2031 main.cc:92] Flatcar Update Engine starting
Apr 29 23:56:33.701677 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 29 23:56:33.703724 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 29 23:56:33.703773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 29 23:56:33.737557 update_engine[2031]: I20250429 23:56:33.734784 2031 update_check_scheduler.cc:74] Next update check in 7m41s
Apr 29 23:56:33.740463 systemd[1]: Started update-engine.service - Update Engine.
Apr 29 23:56:33.762535 tar[2048]: linux-arm64/helm
Apr 29 23:56:33.761672 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 29 23:56:33.779235 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 29 23:56:33.841653 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 29 23:56:33.849328 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 29 23:56:33.867987 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Apr 29 23:56:33.888446 extend-filesystems[2061]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 29 23:56:33.888446 extend-filesystems[2061]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 29 23:56:33.888446 extend-filesystems[2061]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Apr 29 23:56:33.899750 extend-filesystems[2010]: Resized filesystem in /dev/nvme0n1p9
Apr 29 23:56:33.907239 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 29 23:56:33.908567 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 29 23:56:33.944254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 29 23:56:33.951032 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 29 23:56:34.027523 amazon-ssm-agent[2098]: Initializing new seelog logger
Apr 29 23:56:34.027523 amazon-ssm-agent[2098]: New Seelog Logger Creation Complete
Apr 29 23:56:34.027523 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.027523 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.030229 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 processing appconfig overrides
Apr 29 23:56:34.031256 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.031256 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.031256 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 processing appconfig overrides
Apr 29 23:56:34.031456 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.031456 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.032235 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 processing appconfig overrides
Apr 29 23:56:34.038580 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO Proxy environment variables:
Apr 29 23:56:34.039292 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.039292 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 29 23:56:34.039462 amazon-ssm-agent[2098]: 2025/04/29 23:56:34 processing appconfig overrides
Apr 29 23:56:34.042253 bash[2126]: Updated "/home/core/.ssh/authorized_keys"
Apr 29 23:56:34.053052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 29 23:56:34.082952 systemd[1]: Starting sshkeys.service...
Apr 29 23:56:34.091515 systemd-logind[2028]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 29 23:56:34.091558 systemd-logind[2028]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 29 23:56:34.093682 systemd-logind[2028]: New seat seat0.
Apr 29 23:56:34.103477 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 29 23:56:34.152818 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO no_proxy:
Apr 29 23:56:34.187714 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 29 23:56:34.256131 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO https_proxy:
Apr 29 23:56:34.361285 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO http_proxy:
Apr 29 23:56:34.384914 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 29 23:56:34.461832 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO Checking if agent identity type OnPrem can be assumed
Apr 29 23:56:34.470023 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2133)
Apr 29 23:56:34.483740 locksmithd[2090]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 29 23:56:34.563802 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO Checking if agent identity type EC2 can be assumed
Apr 29 23:56:34.597302 containerd[2056]: time="2025-04-29T23:56:34.597108937Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Apr 29 23:56:34.663258 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO Agent will take identity from EC2
Apr 29 23:56:34.771532 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 29 23:56:34.782649 coreos-metadata[2152]: Apr 29 23:56:34.782 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 29 23:56:34.783904 coreos-metadata[2152]: Apr 29 23:56:34.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 29 23:56:34.784430 coreos-metadata[2152]: Apr 29 23:56:34.784 INFO Fetch successful
Apr 29 23:56:34.790411 coreos-metadata[2152]: Apr 29 23:56:34.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 29 23:56:34.796812 coreos-metadata[2152]: Apr 29 23:56:34.795 INFO Fetch successful
Apr 29 23:56:34.800711 unknown[2152]: wrote ssh authorized keys file for user: core
Apr 29 23:56:34.838697 dbus-daemon[2008]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 29 23:56:34.840133 dbus-daemon[2008]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2086 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 29 23:56:34.841023 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 29 23:56:34.873758 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 29 23:56:34.869747 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 29 23:56:34.919167 update-ssh-keys[2209]: Updated "/home/core/.ssh/authorized_keys"
Apr 29 23:56:34.922082 containerd[2056]: time="2025-04-29T23:56:34.911005707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.893888 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 29 23:56:34.908369 systemd[1]: Finished sshkeys.service.
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.928118079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.928191207Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.928227255Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.930776247Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.930843159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.931024119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.931057587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.931555 containerd[2056]: time="2025-04-29T23:56:34.931453263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 23:56:34.938666 containerd[2056]: time="2025-04-29T23:56:34.938586759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.939534 containerd[2056]: time="2025-04-29T23:56:34.938834307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 23:56:34.939534 containerd[2056]: time="2025-04-29T23:56:34.938880063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.939534 containerd[2056]: time="2025-04-29T23:56:34.939113763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.943540 containerd[2056]: time="2025-04-29T23:56:34.941632407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 29 23:56:34.943540 containerd[2056]: time="2025-04-29T23:56:34.941998395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 29 23:56:34.943540 containerd[2056]: time="2025-04-29T23:56:34.942040371Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 29 23:56:34.943540 containerd[2056]: time="2025-04-29T23:56:34.942281151Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 29 23:56:34.943540 containerd[2056]: time="2025-04-29T23:56:34.942403443Z" level=info msg="metadata content store policy set" policy=shared
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.961986327Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.962152623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.962217615Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.962270859Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.962309307Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.962654331Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 29 23:56:34.963534 containerd[2056]: time="2025-04-29T23:56:34.963271407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 29 23:56:34.965666 containerd[2056]: time="2025-04-29T23:56:34.965613447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 29 23:56:34.968836 containerd[2056]: time="2025-04-29T23:56:34.968759331Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969008607Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969075711Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969111207Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969142755Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969178623Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969212283Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969253695Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969344583Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969375375Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969418179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969449103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969477975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969539847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.970523 containerd[2056]: time="2025-04-29T23:56:34.969570351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971196 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969602619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969630843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969660939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969698043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969732579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969779931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969812355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969844695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969876903Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969924219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.969971127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.970001079Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.970129131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 29 23:56:34.971266 containerd[2056]: time="2025-04-29T23:56:34.970168947Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970195023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970224675Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970247775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970282671Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970307607Z" level=info msg="NRI interface is disabled by configuration."
Apr 29 23:56:34.971897 containerd[2056]: time="2025-04-29T23:56:34.970340991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 29 23:56:34.979718 polkitd[2212]: Started polkitd version 121 Apr 29 23:56:34.987079 containerd[2056]: time="2025-04-29T23:56:34.980067495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false 
EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 29 23:56:34.987079 containerd[2056]: time="2025-04-29T23:56:34.980228187Z" level=info msg="Connect containerd service" Apr 29 23:56:34.987079 containerd[2056]: time="2025-04-29T23:56:34.981259707Z" level=info msg="using legacy CRI server" Apr 29 23:56:34.987079 containerd[2056]: time="2025-04-29T23:56:34.981327783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 29 23:56:35.002517 containerd[2056]: time="2025-04-29T23:56:34.991790835Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 29 23:56:35.006312 containerd[2056]: time="2025-04-29T23:56:35.006211499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 29 23:56:35.008693 containerd[2056]: time="2025-04-29T23:56:35.008616647Z" level=info msg="Start subscribing containerd event" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015593507Z" level=info msg="Start recovering 
state" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.011068175Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015821135Z" level=info msg="Start event monitor" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015865931Z" level=info msg="Start snapshots syncer" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015894935Z" level=info msg="Start cni network conf syncer for default" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015915611Z" level=info msg="Start streaming server" Apr 29 23:56:35.017078 containerd[2056]: time="2025-04-29T23:56:35.015824591Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 29 23:56:35.016323 systemd[1]: Started containerd.service - containerd container runtime. Apr 29 23:56:35.022514 containerd[2056]: time="2025-04-29T23:56:35.019562795Z" level=info msg="containerd successfully booted in 0.433447s" Apr 29 23:56:35.035564 polkitd[2212]: Loading rules from directory /etc/polkit-1/rules.d Apr 29 23:56:35.036674 polkitd[2212]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 29 23:56:35.040962 polkitd[2212]: Finished loading, compiling and executing 2 rules Apr 29 23:56:35.047734 dbus-daemon[2008]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 29 23:56:35.048048 systemd[1]: Started polkit.service - Authorization Manager. Apr 29 23:56:35.049846 polkitd[2212]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 29 23:56:35.070131 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 29 23:56:35.098996 systemd-hostnamed[2086]: Hostname set to (transient) Apr 29 23:56:35.100639 systemd-resolved[1941]: System hostname changed to 'ip-172-31-28-53'. 
Apr 29 23:56:35.170539 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 29 23:56:35.269515 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] Starting Core Agent Apr 29 23:56:35.372407 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 29 23:56:35.474522 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [Registrar] Starting registrar module Apr 29 23:56:35.573185 amazon-ssm-agent[2098]: 2025-04-29 23:56:34 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 29 23:56:35.850388 tar[2048]: linux-arm64/LICENSE Apr 29 23:56:35.851262 tar[2048]: linux-arm64/README.md Apr 29 23:56:35.883413 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 29 23:56:36.544187 sshd_keygen[2054]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 29 23:56:36.592275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 29 23:56:36.608825 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 29 23:56:36.635795 systemd[1]: issuegen.service: Deactivated successfully. Apr 29 23:56:36.636986 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 29 23:56:36.649030 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 29 23:56:36.693440 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 29 23:56:36.707273 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 29 23:56:36.721282 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 29 23:56:36.723865 systemd[1]: Reached target getty.target - Login Prompts. Apr 29 23:56:36.900734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:56:36.905598 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 29 23:56:36.909891 systemd[1]: Startup finished in 10.396s (kernel) + 10.885s (userspace) = 21.282s. Apr 29 23:56:36.919180 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 29 23:56:36.996443 amazon-ssm-agent[2098]: 2025-04-29 23:56:36 INFO [EC2Identity] EC2 registration was successful. Apr 29 23:56:37.029174 amazon-ssm-agent[2098]: 2025-04-29 23:56:36 INFO [CredentialRefresher] credentialRefresher has started Apr 29 23:56:37.029174 amazon-ssm-agent[2098]: 2025-04-29 23:56:36 INFO [CredentialRefresher] Starting credentials refresher loop Apr 29 23:56:37.029358 amazon-ssm-agent[2098]: 2025-04-29 23:56:37 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 29 23:56:37.097283 amazon-ssm-agent[2098]: 2025-04-29 23:56:37 INFO [CredentialRefresher] Next credential rotation will be in 31.4499821437 minutes Apr 29 23:56:38.057926 amazon-ssm-agent[2098]: 2025-04-29 23:56:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 29 23:56:38.159273 amazon-ssm-agent[2098]: 2025-04-29 23:56:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2313) started Apr 29 23:56:38.182424 kubelet[2301]: E0429 23:56:38.182265 2301 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 29 23:56:38.189111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 29 23:56:38.189955 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 29 23:56:38.259697 amazon-ssm-agent[2098]: 2025-04-29 23:56:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 29 23:56:40.647879 systemd-resolved[1941]: Clock change detected. Flushing caches. Apr 29 23:56:42.677076 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 29 23:56:42.687066 systemd[1]: Started sshd@0-172.31.28.53:22-139.178.89.65:39772.service - OpenSSH per-connection server daemon (139.178.89.65:39772). Apr 29 23:56:42.985480 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 39772 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:42.989328 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:43.009289 systemd-logind[2028]: New session 1 of user core. Apr 29 23:56:43.011002 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 29 23:56:43.021037 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 29 23:56:43.047024 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 29 23:56:43.064164 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 29 23:56:43.070785 (systemd)[2331]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 29 23:56:43.304753 systemd[2331]: Queued start job for default target default.target. Apr 29 23:56:43.305942 systemd[2331]: Created slice app.slice - User Application Slice. Apr 29 23:56:43.305984 systemd[2331]: Reached target paths.target - Paths. Apr 29 23:56:43.306014 systemd[2331]: Reached target timers.target - Timers. Apr 29 23:56:43.321782 systemd[2331]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 29 23:56:43.336174 systemd[2331]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 29 23:56:43.336303 systemd[2331]: Reached target sockets.target - Sockets. 
Apr 29 23:56:43.336336 systemd[2331]: Reached target basic.target - Basic System. Apr 29 23:56:43.336426 systemd[2331]: Reached target default.target - Main User Target. Apr 29 23:56:43.336503 systemd[2331]: Startup finished in 254ms. Apr 29 23:56:43.336760 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 29 23:56:43.343308 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 29 23:56:43.564056 systemd[1]: Started sshd@1-172.31.28.53:22-139.178.89.65:39780.service - OpenSSH per-connection server daemon (139.178.89.65:39780). Apr 29 23:56:43.835218 sshd[2343]: Accepted publickey for core from 139.178.89.65 port 39780 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:43.837897 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:43.847684 systemd-logind[2028]: New session 2 of user core. Apr 29 23:56:43.858207 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 29 23:56:44.034700 sshd[2346]: Connection closed by 139.178.89.65 port 39780 Apr 29 23:56:44.035720 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Apr 29 23:56:44.042994 systemd[1]: sshd@1-172.31.28.53:22-139.178.89.65:39780.service: Deactivated successfully. Apr 29 23:56:44.047685 systemd-logind[2028]: Session 2 logged out. Waiting for processes to exit. Apr 29 23:56:44.048923 systemd[1]: session-2.scope: Deactivated successfully. Apr 29 23:56:44.051039 systemd-logind[2028]: Removed session 2. Apr 29 23:56:44.084093 systemd[1]: Started sshd@2-172.31.28.53:22-139.178.89.65:39782.service - OpenSSH per-connection server daemon (139.178.89.65:39782). 
Apr 29 23:56:44.357589 sshd[2351]: Accepted publickey for core from 139.178.89.65 port 39782 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:44.360133 sshd-session[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:44.369818 systemd-logind[2028]: New session 3 of user core. Apr 29 23:56:44.380308 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 29 23:56:44.550245 sshd[2354]: Connection closed by 139.178.89.65 port 39782 Apr 29 23:56:44.551102 sshd-session[2351]: pam_unix(sshd:session): session closed for user core Apr 29 23:56:44.556210 systemd-logind[2028]: Session 3 logged out. Waiting for processes to exit. Apr 29 23:56:44.557418 systemd[1]: sshd@2-172.31.28.53:22-139.178.89.65:39782.service: Deactivated successfully. Apr 29 23:56:44.565064 systemd[1]: session-3.scope: Deactivated successfully. Apr 29 23:56:44.566983 systemd-logind[2028]: Removed session 3. Apr 29 23:56:44.600062 systemd[1]: Started sshd@3-172.31.28.53:22-139.178.89.65:39792.service - OpenSSH per-connection server daemon (139.178.89.65:39792). Apr 29 23:56:44.871511 sshd[2359]: Accepted publickey for core from 139.178.89.65 port 39792 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:44.874073 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:44.884855 systemd-logind[2028]: New session 4 of user core. Apr 29 23:56:44.888364 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 29 23:56:45.070574 sshd[2362]: Connection closed by 139.178.89.65 port 39792 Apr 29 23:56:45.071373 sshd-session[2359]: pam_unix(sshd:session): session closed for user core Apr 29 23:56:45.078021 systemd-logind[2028]: Session 4 logged out. Waiting for processes to exit. Apr 29 23:56:45.079542 systemd[1]: sshd@3-172.31.28.53:22-139.178.89.65:39792.service: Deactivated successfully. 
Apr 29 23:56:45.086043 systemd[1]: session-4.scope: Deactivated successfully. Apr 29 23:56:45.087938 systemd-logind[2028]: Removed session 4. Apr 29 23:56:45.117107 systemd[1]: Started sshd@4-172.31.28.53:22-139.178.89.65:39798.service - OpenSSH per-connection server daemon (139.178.89.65:39798). Apr 29 23:56:45.390249 sshd[2367]: Accepted publickey for core from 139.178.89.65 port 39798 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:45.392798 sshd-session[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:45.400236 systemd-logind[2028]: New session 5 of user core. Apr 29 23:56:45.413336 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 29 23:56:45.574391 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 29 23:56:45.575081 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 29 23:56:45.592336 sudo[2371]: pam_unix(sudo:session): session closed for user root Apr 29 23:56:45.630410 sshd[2370]: Connection closed by 139.178.89.65 port 39798 Apr 29 23:56:45.631519 sshd-session[2367]: pam_unix(sshd:session): session closed for user core Apr 29 23:56:45.637249 systemd-logind[2028]: Session 5 logged out. Waiting for processes to exit. Apr 29 23:56:45.640009 systemd[1]: sshd@4-172.31.28.53:22-139.178.89.65:39798.service: Deactivated successfully. Apr 29 23:56:45.644523 systemd[1]: session-5.scope: Deactivated successfully. Apr 29 23:56:45.647608 systemd-logind[2028]: Removed session 5. Apr 29 23:56:45.681063 systemd[1]: Started sshd@5-172.31.28.53:22-139.178.89.65:39814.service - OpenSSH per-connection server daemon (139.178.89.65:39814). 
Apr 29 23:56:45.954181 sshd[2376]: Accepted publickey for core from 139.178.89.65 port 39814 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:45.956573 sshd-session[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:45.965988 systemd-logind[2028]: New session 6 of user core. Apr 29 23:56:45.972314 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 29 23:56:46.116088 sudo[2381]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 29 23:56:46.116793 sudo[2381]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 29 23:56:46.123048 sudo[2381]: pam_unix(sudo:session): session closed for user root Apr 29 23:56:46.133063 sudo[2380]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 29 23:56:46.133758 sudo[2380]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 29 23:56:46.155339 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 29 23:56:46.217883 augenrules[2403]: No rules Apr 29 23:56:46.222195 systemd[1]: audit-rules.service: Deactivated successfully. Apr 29 23:56:46.223009 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 29 23:56:46.227755 sudo[2380]: pam_unix(sudo:session): session closed for user root Apr 29 23:56:46.266712 sshd[2379]: Connection closed by 139.178.89.65 port 39814 Apr 29 23:56:46.267533 sshd-session[2376]: pam_unix(sshd:session): session closed for user core Apr 29 23:56:46.274172 systemd[1]: sshd@5-172.31.28.53:22-139.178.89.65:39814.service: Deactivated successfully. Apr 29 23:56:46.280960 systemd[1]: session-6.scope: Deactivated successfully. Apr 29 23:56:46.282529 systemd-logind[2028]: Session 6 logged out. Waiting for processes to exit. Apr 29 23:56:46.284353 systemd-logind[2028]: Removed session 6. 
Apr 29 23:56:46.320059 systemd[1]: Started sshd@6-172.31.28.53:22-139.178.89.65:39822.service - OpenSSH per-connection server daemon (139.178.89.65:39822). Apr 29 23:56:46.591859 sshd[2412]: Accepted publickey for core from 139.178.89.65 port 39822 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA Apr 29 23:56:46.593606 sshd-session[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 29 23:56:46.603030 systemd-logind[2028]: New session 7 of user core. Apr 29 23:56:46.612352 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 29 23:56:46.753394 sudo[2416]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 29 23:56:46.754231 sudo[2416]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 29 23:56:47.471113 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 29 23:56:47.484326 (dockerd)[2433]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 29 23:56:47.929639 dockerd[2433]: time="2025-04-29T23:56:47.929546524Z" level=info msg="Starting up" Apr 29 23:56:48.336328 dockerd[2433]: time="2025-04-29T23:56:48.336170162Z" level=info msg="Loading containers: start." Apr 29 23:56:48.615675 kernel: Initializing XFRM netlink socket Apr 29 23:56:48.649071 (udev-worker)[2455]: Network interface NamePolicy= disabled on kernel command line. Apr 29 23:56:48.669896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 29 23:56:48.678963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 29 23:56:48.760557 systemd-networkd[1609]: docker0: Link UP Apr 29 23:56:48.808472 dockerd[2433]: time="2025-04-29T23:56:48.808254124Z" level=info msg="Loading containers: done." 
Apr 29 23:56:48.839219 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2244856484-merged.mount: Deactivated successfully. Apr 29 23:56:48.851861 dockerd[2433]: time="2025-04-29T23:56:48.851725516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 29 23:56:48.852097 dockerd[2433]: time="2025-04-29T23:56:48.851916592Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Apr 29 23:56:48.852207 dockerd[2433]: time="2025-04-29T23:56:48.852162244Z" level=info msg="Daemon has completed initialization" Apr 29 23:56:48.943059 dockerd[2433]: time="2025-04-29T23:56:48.942866645Z" level=info msg="API listen on /run/docker.sock" Apr 29 23:56:48.943241 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 29 23:56:49.044369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:56:49.061248 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 29 23:56:49.145765 kubelet[2627]: E0429 23:56:49.145663 2627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 29 23:56:49.154815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 29 23:56:49.155216 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 29 23:56:50.549473 containerd[2056]: time="2025-04-29T23:56:50.549265985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" Apr 29 23:56:51.266379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713835623.mount: Deactivated successfully. Apr 29 23:56:52.721020 containerd[2056]: time="2025-04-29T23:56:52.720960559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:52.723594 containerd[2056]: time="2025-04-29T23:56:52.723501859Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" Apr 29 23:56:52.725359 containerd[2056]: time="2025-04-29T23:56:52.725271367Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:52.734272 containerd[2056]: time="2025-04-29T23:56:52.734180060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:52.736645 containerd[2056]: time="2025-04-29T23:56:52.736358540Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.187000479s" Apr 29 23:56:52.736645 containerd[2056]: time="2025-04-29T23:56:52.736413824Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" Apr 29 23:56:52.774956 containerd[2056]: 
time="2025-04-29T23:56:52.774911432Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" Apr 29 23:56:54.418275 containerd[2056]: time="2025-04-29T23:56:54.418199816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:54.420262 containerd[2056]: time="2025-04-29T23:56:54.420180140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" Apr 29 23:56:54.421084 containerd[2056]: time="2025-04-29T23:56:54.421003496Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:54.432691 containerd[2056]: time="2025-04-29T23:56:54.432560240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:54.434223 containerd[2056]: time="2025-04-29T23:56:54.434016044Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.658858336s" Apr 29 23:56:54.434223 containerd[2056]: time="2025-04-29T23:56:54.434073608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" Apr 29 23:56:54.478456 containerd[2056]: time="2025-04-29T23:56:54.478353788Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" Apr 29 
23:56:55.561982 containerd[2056]: time="2025-04-29T23:56:55.561905986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:55.563992 containerd[2056]: time="2025-04-29T23:56:55.563912458Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" Apr 29 23:56:55.564657 containerd[2056]: time="2025-04-29T23:56:55.564342334Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:55.569879 containerd[2056]: time="2025-04-29T23:56:55.569829478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:55.572285 containerd[2056]: time="2025-04-29T23:56:55.572078458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.093669926s" Apr 29 23:56:55.572285 containerd[2056]: time="2025-04-29T23:56:55.572137702Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" Apr 29 23:56:55.610556 containerd[2056]: time="2025-04-29T23:56:55.610487986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" Apr 29 23:56:56.825381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621591909.mount: Deactivated successfully. 
Apr 29 23:56:57.320979 containerd[2056]: time="2025-04-29T23:56:57.320925466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:57.323061 containerd[2056]: time="2025-04-29T23:56:57.322996474Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" Apr 29 23:56:57.323996 containerd[2056]: time="2025-04-29T23:56:57.323911654Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:57.332688 containerd[2056]: time="2025-04-29T23:56:57.331791670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:57.335775 containerd[2056]: time="2025-04-29T23:56:57.335721706Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.72516964s" Apr 29 23:56:57.335970 containerd[2056]: time="2025-04-29T23:56:57.335942494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" Apr 29 23:56:57.373281 containerd[2056]: time="2025-04-29T23:56:57.373225955Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 29 23:56:57.901073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581512032.mount: Deactivated successfully. 
Apr 29 23:56:59.021289 containerd[2056]: time="2025-04-29T23:56:59.021109691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.023404 containerd[2056]: time="2025-04-29T23:56:59.023323847Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Apr 29 23:56:59.025617 containerd[2056]: time="2025-04-29T23:56:59.025543235Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.031196 containerd[2056]: time="2025-04-29T23:56:59.031116335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.033675 containerd[2056]: time="2025-04-29T23:56:59.033471035Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.66018292s" Apr 29 23:56:59.033675 containerd[2056]: time="2025-04-29T23:56:59.033524159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 29 23:56:59.072301 containerd[2056]: time="2025-04-29T23:56:59.072212999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 29 23:56:59.174226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 29 23:56:59.184952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 29 23:56:59.491902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:56:59.503327 (kubelet)[2794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 29 23:56:59.591595 kubelet[2794]: E0429 23:56:59.591447 2794 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 29 23:56:59.600027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419185502.mount: Deactivated successfully. Apr 29 23:56:59.603510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 29 23:56:59.604049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 29 23:56:59.616141 containerd[2056]: time="2025-04-29T23:56:59.616065602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.618725 containerd[2056]: time="2025-04-29T23:56:59.618657170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Apr 29 23:56:59.621053 containerd[2056]: time="2025-04-29T23:56:59.620962358Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.627583 containerd[2056]: time="2025-04-29T23:56:59.627489398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:56:59.629729 containerd[2056]: time="2025-04-29T23:56:59.629192246Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 556.713375ms" Apr 29 23:56:59.629729 containerd[2056]: time="2025-04-29T23:56:59.629245982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 29 23:56:59.667843 containerd[2056]: time="2025-04-29T23:56:59.667767074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 29 23:57:00.251555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917471881.mount: Deactivated successfully. Apr 29 23:57:03.250141 containerd[2056]: time="2025-04-29T23:57:03.250062628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:57:03.252407 containerd[2056]: time="2025-04-29T23:57:03.252318316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Apr 29 23:57:03.255511 containerd[2056]: time="2025-04-29T23:57:03.255438448Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:57:03.260891 containerd[2056]: time="2025-04-29T23:57:03.260815216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 29 23:57:03.263418 containerd[2056]: time="2025-04-29T23:57:03.263215120Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag 
\"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.594974778s" Apr 29 23:57:03.263418 containerd[2056]: time="2025-04-29T23:57:03.263271124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 29 23:57:05.374073 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 29 23:57:09.674260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 29 23:57:09.683083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 29 23:57:09.985944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:09.994910 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 29 23:57:10.099899 kubelet[2932]: E0429 23:57:10.099799 2932 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 29 23:57:10.106902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 29 23:57:10.107321 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 29 23:57:14.190244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:14.206061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 29 23:57:14.240070 systemd[1]: Reloading requested from client PID 2949 ('systemctl') (unit session-7.scope)... Apr 29 23:57:14.240280 systemd[1]: Reloading... Apr 29 23:57:14.455730 zram_generator::config[2996]: No configuration found. 
Apr 29 23:57:14.709855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 29 23:57:14.890781 systemd[1]: Reloading finished in 649 ms. Apr 29 23:57:14.965700 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 29 23:57:14.965971 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 29 23:57:14.966697 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:14.983045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 29 23:57:15.305061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:15.311192 (kubelet)[3061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 29 23:57:15.401137 kubelet[3061]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 29 23:57:15.401137 kubelet[3061]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 29 23:57:15.401137 kubelet[3061]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 29 23:57:15.403924 kubelet[3061]: I0429 23:57:15.403668 3061 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 29 23:57:17.055333 kubelet[3061]: I0429 23:57:17.055271 3061 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 29 23:57:17.055333 kubelet[3061]: I0429 23:57:17.055317 3061 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 29 23:57:17.056000 kubelet[3061]: I0429 23:57:17.055676 3061 server.go:927] "Client rotation is on, will bootstrap in background" Apr 29 23:57:17.078767 kubelet[3061]: I0429 23:57:17.078733 3061 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 29 23:57:17.079095 kubelet[3061]: E0429 23:57:17.078915 3061 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.104571 kubelet[3061]: I0429 23:57:17.104534 3061 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 29 23:57:17.106350 kubelet[3061]: I0429 23:57:17.105511 3061 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 29 23:57:17.106350 kubelet[3061]: I0429 23:57:17.105563 3061 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 29 23:57:17.106350 kubelet[3061]: I0429 23:57:17.105902 3061 topology_manager.go:138] "Creating topology manager with none policy" Apr 29 
23:57:17.106350 kubelet[3061]: I0429 23:57:17.105920 3061 container_manager_linux.go:301] "Creating device plugin manager" Apr 29 23:57:17.106350 kubelet[3061]: I0429 23:57:17.106127 3061 state_mem.go:36] "Initialized new in-memory state store" Apr 29 23:57:17.108038 kubelet[3061]: I0429 23:57:17.108011 3061 kubelet.go:400] "Attempting to sync node with API server" Apr 29 23:57:17.108186 kubelet[3061]: I0429 23:57:17.108165 3061 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 29 23:57:17.108371 kubelet[3061]: I0429 23:57:17.108352 3061 kubelet.go:312] "Adding apiserver pod source" Apr 29 23:57:17.108506 kubelet[3061]: I0429 23:57:17.108487 3061 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 29 23:57:17.109831 kubelet[3061]: W0429 23:57:17.109760 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-53&limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.110055 kubelet[3061]: E0429 23:57:17.110031 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-53&limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.110726 kubelet[3061]: I0429 23:57:17.110692 3061 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 29 23:57:17.112670 kubelet[3061]: I0429 23:57:17.111197 3061 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 29 23:57:17.112670 kubelet[3061]: W0429 23:57:17.111265 3061 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 29 23:57:17.112670 kubelet[3061]: I0429 23:57:17.112287 3061 server.go:1264] "Started kubelet" Apr 29 23:57:17.112670 kubelet[3061]: W0429 23:57:17.112466 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.112670 kubelet[3061]: E0429 23:57:17.112534 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.122454 kubelet[3061]: E0429 23:57:17.122237 3061 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.53:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-53.183aef685f324699 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-53,UID:ip-172-31-28-53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-53,},FirstTimestamp:2025-04-29 23:57:17.112256153 +0000 UTC m=+1.793055646,LastTimestamp:2025-04-29 23:57:17.112256153 +0000 UTC m=+1.793055646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-53,}" Apr 29 23:57:17.124006 kubelet[3061]: I0429 23:57:17.123948 3061 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 29 23:57:17.124854 kubelet[3061]: I0429 23:57:17.124746 3061 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 29 23:57:17.125265 kubelet[3061]: I0429 23:57:17.125221 3061 server.go:227] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 29 23:57:17.125356 kubelet[3061]: I0429 23:57:17.125298 3061 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 29 23:57:17.127020 kubelet[3061]: I0429 23:57:17.126968 3061 server.go:455] "Adding debug handlers to kubelet server" Apr 29 23:57:17.130962 kubelet[3061]: I0429 23:57:17.130931 3061 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 29 23:57:17.132570 kubelet[3061]: I0429 23:57:17.132266 3061 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 29 23:57:17.133137 kubelet[3061]: I0429 23:57:17.133116 3061 reconciler.go:26] "Reconciler: start to sync state" Apr 29 23:57:17.135019 kubelet[3061]: W0429 23:57:17.134291 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.135019 kubelet[3061]: E0429 23:57:17.134376 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.135019 kubelet[3061]: E0429 23:57:17.134557 3061 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 29 23:57:17.135019 kubelet[3061]: E0429 23:57:17.134723 3061 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-53?timeout=10s\": dial tcp 172.31.28.53:6443: connect: connection refused" interval="200ms" Apr 29 23:57:17.138039 kubelet[3061]: I0429 23:57:17.138003 3061 factory.go:221] Registration of the containerd container factory successfully Apr 29 23:57:17.138224 kubelet[3061]: I0429 23:57:17.138205 3061 factory.go:221] Registration of the systemd container factory successfully Apr 29 23:57:17.138460 kubelet[3061]: I0429 23:57:17.138429 3061 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 29 23:57:17.159614 kubelet[3061]: I0429 23:57:17.159528 3061 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 29 23:57:17.161925 kubelet[3061]: I0429 23:57:17.161867 3061 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 29 23:57:17.162040 kubelet[3061]: I0429 23:57:17.161971 3061 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 29 23:57:17.162040 kubelet[3061]: I0429 23:57:17.162004 3061 kubelet.go:2337] "Starting kubelet main sync loop" Apr 29 23:57:17.162157 kubelet[3061]: E0429 23:57:17.162072 3061 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 29 23:57:17.180385 kubelet[3061]: W0429 23:57:17.180311 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.192688 kubelet[3061]: E0429 23:57:17.191419 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:17.209544 kubelet[3061]: I0429 23:57:17.209512 3061 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 29 23:57:17.209811 kubelet[3061]: I0429 23:57:17.209791 3061 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 29 23:57:17.209954 kubelet[3061]: I0429 23:57:17.209936 3061 state_mem.go:36] "Initialized new in-memory state store" Apr 29 23:57:17.212145 kubelet[3061]: I0429 23:57:17.212119 3061 policy_none.go:49] "None policy: Start" Apr 29 23:57:17.213896 kubelet[3061]: I0429 23:57:17.213861 3061 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 29 23:57:17.214034 kubelet[3061]: I0429 23:57:17.213909 3061 state_mem.go:35] "Initializing new in-memory state store" Apr 29 23:57:17.223734 kubelet[3061]: I0429 23:57:17.223677 3061 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 29 23:57:17.224042 kubelet[3061]: I0429 23:57:17.223970 3061 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 29 23:57:17.224177 kubelet[3061]: I0429 23:57:17.224143 3061 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 29 23:57:17.232902 kubelet[3061]: E0429 23:57:17.232837 3061 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-53\" not found" Apr 29 23:57:17.234987 kubelet[3061]: I0429 23:57:17.234947 3061 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:17.235679 kubelet[3061]: E0429 23:57:17.235588 3061 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.53:6443/api/v1/nodes\": dial tcp 172.31.28.53:6443: connect: connection refused" node="ip-172-31-28-53" Apr 29 23:57:17.262847 kubelet[3061]: I0429 23:57:17.262771 3061 topology_manager.go:215] "Topology Admit Handler" podUID="ccb0a85efbdddad293bb99f4a01feb48" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-53" Apr 29 23:57:17.265055 kubelet[3061]: I0429 23:57:17.265014 3061 topology_manager.go:215] "Topology Admit Handler" podUID="65861d4faff8a35ab889b224dfeaf155" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.268844 kubelet[3061]: I0429 23:57:17.268462 3061 topology_manager.go:215] "Topology Admit Handler" podUID="3854ee19e21101cfb0f4ba6ed78d3846" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-53" Apr 29 23:57:17.335674 kubelet[3061]: E0429 23:57:17.335347 3061 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-53?timeout=10s\": dial tcp 172.31.28.53:6443: connect: connection refused" interval="400ms" Apr 
29 23:57:17.435677 kubelet[3061]: I0429 23:57:17.435385 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.435677 kubelet[3061]: I0429 23:57:17.435447 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.435677 kubelet[3061]: I0429 23:57:17.435485 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.435677 kubelet[3061]: I0429 23:57:17.435519 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-ca-certs\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:17.435677 kubelet[3061]: I0429 23:57:17.435553 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " 
pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:17.436024 kubelet[3061]: I0429 23:57:17.435588 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.436024 kubelet[3061]: I0429 23:57:17.435660 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:17.436024 kubelet[3061]: I0429 23:57:17.435710 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3854ee19e21101cfb0f4ba6ed78d3846-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-53\" (UID: \"3854ee19e21101cfb0f4ba6ed78d3846\") " pod="kube-system/kube-scheduler-ip-172-31-28-53" Apr 29 23:57:17.436024 kubelet[3061]: I0429 23:57:17.435746 3061 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:17.438165 kubelet[3061]: I0429 23:57:17.438106 3061 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:17.438708 kubelet[3061]: E0429 23:57:17.438658 3061 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://172.31.28.53:6443/api/v1/nodes\": dial tcp 172.31.28.53:6443: connect: connection refused" node="ip-172-31-28-53" Apr 29 23:57:17.581675 containerd[2056]: time="2025-04-29T23:57:17.581560603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-53,Uid:ccb0a85efbdddad293bb99f4a01feb48,Namespace:kube-system,Attempt:0,}" Apr 29 23:57:17.590650 containerd[2056]: time="2025-04-29T23:57:17.590339143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-53,Uid:65861d4faff8a35ab889b224dfeaf155,Namespace:kube-system,Attempt:0,}" Apr 29 23:57:17.591555 containerd[2056]: time="2025-04-29T23:57:17.591105403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-53,Uid:3854ee19e21101cfb0f4ba6ed78d3846,Namespace:kube-system,Attempt:0,}" Apr 29 23:57:17.736120 kubelet[3061]: E0429 23:57:17.736040 3061 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-53?timeout=10s\": dial tcp 172.31.28.53:6443: connect: connection refused" interval="800ms" Apr 29 23:57:17.841160 kubelet[3061]: I0429 23:57:17.840940 3061 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:17.842292 kubelet[3061]: E0429 23:57:17.842200 3061 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.53:6443/api/v1/nodes\": dial tcp 172.31.28.53:6443: connect: connection refused" node="ip-172-31-28-53" Apr 29 23:57:18.037582 kubelet[3061]: W0429 23:57:18.037461 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.037582 kubelet[3061]: E0429 23:57:18.037549 3061 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.086002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075899568.mount: Deactivated successfully. Apr 29 23:57:18.097765 containerd[2056]: time="2025-04-29T23:57:18.096420594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 29 23:57:18.102209 containerd[2056]: time="2025-04-29T23:57:18.102119418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Apr 29 23:57:18.109530 containerd[2056]: time="2025-04-29T23:57:18.109443906Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 29 23:57:18.111957 containerd[2056]: time="2025-04-29T23:57:18.111902154Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 29 23:57:18.113792 containerd[2056]: time="2025-04-29T23:57:18.113704218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 29 23:57:18.117446 containerd[2056]: time="2025-04-29T23:57:18.117384546Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 29 23:57:18.120438 containerd[2056]: time="2025-04-29T23:57:18.120182202Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 29 23:57:18.122091 containerd[2056]: time="2025-04-29T23:57:18.121640730Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 29 23:57:18.122091 containerd[2056]: time="2025-04-29T23:57:18.121733238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.677791ms" Apr 29 23:57:18.132026 containerd[2056]: time="2025-04-29T23:57:18.131593158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.812935ms" Apr 29 23:57:18.133433 containerd[2056]: time="2025-04-29T23:57:18.133358046Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.121075ms" Apr 29 23:57:18.168674 kubelet[3061]: W0429 23:57:18.167825 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.172724 kubelet[3061]: E0429 23:57:18.171702 3061 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.281854 kubelet[3061]: W0429 23:57:18.281774 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-53&limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.282097 kubelet[3061]: E0429 23:57:18.282059 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-53&limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.303758 containerd[2056]: time="2025-04-29T23:57:18.303465943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 29 23:57:18.304102 containerd[2056]: time="2025-04-29T23:57:18.303991843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.305817955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.305930575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.305973187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.306144307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.304993963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.306334 containerd[2056]: time="2025-04-29T23:57:18.305189059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.313690 containerd[2056]: time="2025-04-29T23:57:18.313422319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 29 23:57:18.313952 containerd[2056]: time="2025-04-29T23:57:18.313688215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 29 23:57:18.313952 containerd[2056]: time="2025-04-29T23:57:18.313777483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.314178 containerd[2056]: time="2025-04-29T23:57:18.313991863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 29 23:57:18.458694 containerd[2056]: time="2025-04-29T23:57:18.458464723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-53,Uid:65861d4faff8a35ab889b224dfeaf155,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa3492cc56f41a77f7388d413bda4f8881ffcf6a6015e26163e7459def9b1cd8\"" Apr 29 23:57:18.468656 containerd[2056]: time="2025-04-29T23:57:18.467260999Z" level=info msg="CreateContainer within sandbox \"fa3492cc56f41a77f7388d413bda4f8881ffcf6a6015e26163e7459def9b1cd8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 29 23:57:18.482082 containerd[2056]: time="2025-04-29T23:57:18.482018143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-53,Uid:ccb0a85efbdddad293bb99f4a01feb48,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ae8b8bb0d28ef79a1467a337e41858e4c4c27e496833f8237c0e43ea22a66df\"" Apr 29 23:57:18.492486 containerd[2056]: time="2025-04-29T23:57:18.492253183Z" level=info msg="CreateContainer within sandbox \"1ae8b8bb0d28ef79a1467a337e41858e4c4c27e496833f8237c0e43ea22a66df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 29 23:57:18.494090 containerd[2056]: time="2025-04-29T23:57:18.494040859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-53,Uid:3854ee19e21101cfb0f4ba6ed78d3846,Namespace:kube-system,Attempt:0,} returns sandbox id \"92226e3d26139ac0163bd051d3ea660fdc8a13d3952e2d96d79199bea73fad44\"" Apr 29 23:57:18.507542 containerd[2056]: time="2025-04-29T23:57:18.507485708Z" level=info msg="CreateContainer within sandbox \"92226e3d26139ac0163bd051d3ea660fdc8a13d3952e2d96d79199bea73fad44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 29 23:57:18.521751 containerd[2056]: time="2025-04-29T23:57:18.521688596Z" level=info msg="CreateContainer within sandbox 
\"fa3492cc56f41a77f7388d413bda4f8881ffcf6a6015e26163e7459def9b1cd8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d\"" Apr 29 23:57:18.523034 containerd[2056]: time="2025-04-29T23:57:18.522737264Z" level=info msg="StartContainer for \"4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d\"" Apr 29 23:57:18.526228 kubelet[3061]: W0429 23:57:18.526182 3061 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.527839 kubelet[3061]: E0429 23:57:18.527752 3061 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.53:6443: connect: connection refused Apr 29 23:57:18.538258 kubelet[3061]: E0429 23:57:18.538080 3061 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-53?timeout=10s\": dial tcp 172.31.28.53:6443: connect: connection refused" interval="1.6s" Apr 29 23:57:18.544188 containerd[2056]: time="2025-04-29T23:57:18.544133168Z" level=info msg="CreateContainer within sandbox \"1ae8b8bb0d28ef79a1467a337e41858e4c4c27e496833f8237c0e43ea22a66df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e73aec0e642800bed8b8f402373c2a47567523da0cd73a0f3a32b7da0a61df8\"" Apr 29 23:57:18.545810 containerd[2056]: time="2025-04-29T23:57:18.545744804Z" level=info msg="StartContainer for \"3e73aec0e642800bed8b8f402373c2a47567523da0cd73a0f3a32b7da0a61df8\"" Apr 29 23:57:18.561849 containerd[2056]: time="2025-04-29T23:57:18.561786668Z" 
level=info msg="CreateContainer within sandbox \"92226e3d26139ac0163bd051d3ea660fdc8a13d3952e2d96d79199bea73fad44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb\"" Apr 29 23:57:18.564063 containerd[2056]: time="2025-04-29T23:57:18.563995100Z" level=info msg="StartContainer for \"026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb\"" Apr 29 23:57:18.646038 kubelet[3061]: I0429 23:57:18.645997 3061 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:18.647096 kubelet[3061]: E0429 23:57:18.646844 3061 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.53:6443/api/v1/nodes\": dial tcp 172.31.28.53:6443: connect: connection refused" node="ip-172-31-28-53" Apr 29 23:57:18.746326 containerd[2056]: time="2025-04-29T23:57:18.745146381Z" level=info msg="StartContainer for \"4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d\" returns successfully" Apr 29 23:57:18.797059 containerd[2056]: time="2025-04-29T23:57:18.795509445Z" level=info msg="StartContainer for \"026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb\" returns successfully" Apr 29 23:57:18.804595 containerd[2056]: time="2025-04-29T23:57:18.804147465Z" level=info msg="StartContainer for \"3e73aec0e642800bed8b8f402373c2a47567523da0cd73a0f3a32b7da0a61df8\" returns successfully" Apr 29 23:57:18.875764 update_engine[2031]: I20250429 23:57:18.875669 2031 update_attempter.cc:509] Updating boot flags... 
Apr 29 23:57:19.057658 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3346) Apr 29 23:57:19.777651 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3346) Apr 29 23:57:20.255954 kubelet[3061]: I0429 23:57:20.255910 3061 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:22.375108 kubelet[3061]: E0429 23:57:22.375039 3061 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-53\" not found" node="ip-172-31-28-53" Apr 29 23:57:22.470645 kubelet[3061]: I0429 23:57:22.469343 3061 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-53" Apr 29 23:57:23.117059 kubelet[3061]: I0429 23:57:23.116756 3061 apiserver.go:52] "Watching apiserver" Apr 29 23:57:23.133024 kubelet[3061]: I0429 23:57:23.132985 3061 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 29 23:57:24.785857 systemd[1]: Reloading requested from client PID 3519 ('systemctl') (unit session-7.scope)... Apr 29 23:57:24.785888 systemd[1]: Reloading... Apr 29 23:57:24.963798 zram_generator::config[3571]: No configuration found. Apr 29 23:57:25.193860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 29 23:57:25.402012 systemd[1]: Reloading finished in 615 ms. Apr 29 23:57:25.510189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 29 23:57:25.526406 systemd[1]: kubelet.service: Deactivated successfully. Apr 29 23:57:25.527129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:25.538294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 29 23:57:25.850148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 29 23:57:25.869461 (kubelet)[3629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 29 23:57:25.964973 kubelet[3629]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 29 23:57:25.964973 kubelet[3629]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 29 23:57:25.964973 kubelet[3629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 29 23:57:25.965550 kubelet[3629]: I0429 23:57:25.965091 3629 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 29 23:57:25.978031 kubelet[3629]: I0429 23:57:25.977683 3629 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 29 23:57:25.978031 kubelet[3629]: I0429 23:57:25.977845 3629 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 29 23:57:25.979508 kubelet[3629]: I0429 23:57:25.978818 3629 server.go:927] "Client rotation is on, will bootstrap in background" Apr 29 23:57:25.986039 kubelet[3629]: I0429 23:57:25.985987 3629 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 29 23:57:25.989462 kubelet[3629]: I0429 23:57:25.988703 3629 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 29 23:57:25.998169 sudo[3643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 29 23:57:25.998842 sudo[3643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 29 23:57:26.005689 kubelet[3629]: I0429 23:57:26.004998 3629 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 29 23:57:26.006298 kubelet[3629]: I0429 23:57:26.006240 3629 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 29 23:57:26.006787 kubelet[3629]: I0429 23:57:26.006438 3629 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"Gr
acePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 29 23:57:26.007001 kubelet[3629]: I0429 23:57:26.006978 3629 topology_manager.go:138] "Creating topology manager with none policy" Apr 29 23:57:26.007182 kubelet[3629]: I0429 23:57:26.007161 3629 container_manager_linux.go:301] "Creating device plugin manager" Apr 29 23:57:26.007677 kubelet[3629]: I0429 23:57:26.007326 3629 state_mem.go:36] "Initialized new in-memory state store" Apr 29 23:57:26.007677 kubelet[3629]: I0429 23:57:26.007502 3629 kubelet.go:400] "Attempting to sync node with API server" Apr 29 23:57:26.007677 kubelet[3629]: I0429 23:57:26.007524 3629 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 29 23:57:26.007677 kubelet[3629]: I0429 23:57:26.007577 3629 kubelet.go:312] "Adding apiserver pod source" Apr 29 23:57:26.007677 kubelet[3629]: I0429 23:57:26.007614 3629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 29 23:57:26.011896 kubelet[3629]: I0429 23:57:26.011856 3629 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 29 23:57:26.012662 kubelet[3629]: I0429 23:57:26.012326 3629 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 29 23:57:26.013174 kubelet[3629]: I0429 23:57:26.013150 3629 server.go:1264] "Started kubelet" Apr 29 23:57:26.022810 kubelet[3629]: I0429 23:57:26.022775 3629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 29 23:57:26.028908 kubelet[3629]: I0429 23:57:26.028853 3629 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 Apr 29 23:57:26.034830 kubelet[3629]: I0429 23:57:26.033863 3629 server.go:455] "Adding debug handlers to kubelet server" Apr 29 23:57:26.039853 kubelet[3629]: I0429 23:57:26.039267 3629 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 29 23:57:26.045907 kubelet[3629]: I0429 23:57:26.045855 3629 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 29 23:57:26.046028 kubelet[3629]: I0429 23:57:26.045991 3629 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 29 23:57:26.046617 kubelet[3629]: I0429 23:57:26.046575 3629 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 29 23:57:26.047554 kubelet[3629]: I0429 23:57:26.046856 3629 reconciler.go:26] "Reconciler: start to sync state" Apr 29 23:57:26.063568 kubelet[3629]: E0429 23:57:26.063519 3629 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 29 23:57:26.067233 kubelet[3629]: I0429 23:57:26.066464 3629 factory.go:221] Registration of the systemd container factory successfully Apr 29 23:57:26.079797 kubelet[3629]: I0429 23:57:26.079724 3629 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 29 23:57:26.131815 kubelet[3629]: I0429 23:57:26.131265 3629 factory.go:221] Registration of the containerd container factory successfully Apr 29 23:57:26.135769 kubelet[3629]: I0429 23:57:26.134504 3629 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 29 23:57:26.142813 kubelet[3629]: I0429 23:57:26.142772 3629 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 29 23:57:26.143000 kubelet[3629]: I0429 23:57:26.142981 3629 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 29 23:57:26.143134 kubelet[3629]: I0429 23:57:26.143116 3629 kubelet.go:2337] "Starting kubelet main sync loop" Apr 29 23:57:26.145781 kubelet[3629]: E0429 23:57:26.145732 3629 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 29 23:57:26.167682 kubelet[3629]: E0429 23:57:26.165581 3629 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Apr 29 23:57:26.184477 kubelet[3629]: I0429 23:57:26.184429 3629 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-53" Apr 29 23:57:26.212047 kubelet[3629]: I0429 23:57:26.211527 3629 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-53" Apr 29 23:57:26.212047 kubelet[3629]: I0429 23:57:26.211719 3629 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-53" Apr 29 23:57:26.254426 kubelet[3629]: E0429 23:57:26.253868 3629 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 29 23:57:26.326423 kubelet[3629]: I0429 23:57:26.326240 3629 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 29 23:57:26.326423 kubelet[3629]: I0429 23:57:26.326271 3629 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 29 23:57:26.326423 kubelet[3629]: I0429 23:57:26.326305 3629 state_mem.go:36] "Initialized new in-memory state store" Apr 29 23:57:26.327447 kubelet[3629]: I0429 23:57:26.326826 3629 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 29 23:57:26.327447 kubelet[3629]: I0429 23:57:26.326853 3629 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 29 23:57:26.327447 kubelet[3629]: I0429 23:57:26.326888 3629 policy_none.go:49] 
"None policy: Start" Apr 29 23:57:26.330770 kubelet[3629]: I0429 23:57:26.328606 3629 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 29 23:57:26.330770 kubelet[3629]: I0429 23:57:26.328669 3629 state_mem.go:35] "Initializing new in-memory state store" Apr 29 23:57:26.330770 kubelet[3629]: I0429 23:57:26.328988 3629 state_mem.go:75] "Updated machine memory state" Apr 29 23:57:26.331876 kubelet[3629]: I0429 23:57:26.331838 3629 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 29 23:57:26.332609 kubelet[3629]: I0429 23:57:26.332543 3629 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 29 23:57:26.337446 kubelet[3629]: I0429 23:57:26.337411 3629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 29 23:57:26.454128 kubelet[3629]: I0429 23:57:26.453980 3629 topology_manager.go:215] "Topology Admit Handler" podUID="ccb0a85efbdddad293bb99f4a01feb48" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-53" Apr 29 23:57:26.455037 kubelet[3629]: I0429 23:57:26.454406 3629 topology_manager.go:215] "Topology Admit Handler" podUID="65861d4faff8a35ab889b224dfeaf155" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.455037 kubelet[3629]: I0429 23:57:26.454513 3629 topology_manager.go:215] "Topology Admit Handler" podUID="3854ee19e21101cfb0f4ba6ed78d3846" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-53" Apr 29 23:57:26.550942 kubelet[3629]: I0429 23:57:26.550885 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:26.551073 
kubelet[3629]: I0429 23:57:26.550953 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.551073 kubelet[3629]: I0429 23:57:26.551001 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.551073 kubelet[3629]: I0429 23:57:26.551048 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.551248 kubelet[3629]: I0429 23:57:26.551101 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.552652 kubelet[3629]: I0429 23:57:26.551423 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3854ee19e21101cfb0f4ba6ed78d3846-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-53\" (UID: 
\"3854ee19e21101cfb0f4ba6ed78d3846\") " pod="kube-system/kube-scheduler-ip-172-31-28-53" Apr 29 23:57:26.552652 kubelet[3629]: I0429 23:57:26.551810 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-ca-certs\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:26.552652 kubelet[3629]: I0429 23:57:26.551885 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccb0a85efbdddad293bb99f4a01feb48-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-53\" (UID: \"ccb0a85efbdddad293bb99f4a01feb48\") " pod="kube-system/kube-apiserver-ip-172-31-28-53" Apr 29 23:57:26.552652 kubelet[3629]: I0429 23:57:26.551944 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65861d4faff8a35ab889b224dfeaf155-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-53\" (UID: \"65861d4faff8a35ab889b224dfeaf155\") " pod="kube-system/kube-controller-manager-ip-172-31-28-53" Apr 29 23:57:26.900846 sudo[3643]: pam_unix(sudo:session): session closed for user root Apr 29 23:57:27.010956 kubelet[3629]: I0429 23:57:27.010885 3629 apiserver.go:52] "Watching apiserver" Apr 29 23:57:27.047846 kubelet[3629]: I0429 23:57:27.047780 3629 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 29 23:57:27.272352 kubelet[3629]: I0429 23:57:27.272253 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-53" podStartSLOduration=1.272233563 podStartE2EDuration="1.272233563s" podCreationTimestamp="2025-04-29 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:57:27.267139011 +0000 UTC m=+1.389965288" watchObservedRunningTime="2025-04-29 23:57:27.272233563 +0000 UTC m=+1.395059852" Apr 29 23:57:27.300073 kubelet[3629]: I0429 23:57:27.299986 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-53" podStartSLOduration=1.299963271 podStartE2EDuration="1.299963271s" podCreationTimestamp="2025-04-29 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:57:27.282376167 +0000 UTC m=+1.405202468" watchObservedRunningTime="2025-04-29 23:57:27.299963271 +0000 UTC m=+1.422789548" Apr 29 23:57:29.634762 sudo[2416]: pam_unix(sudo:session): session closed for user root Apr 29 23:57:29.673346 sshd[2415]: Connection closed by 139.178.89.65 port 39822 Apr 29 23:57:29.673171 sshd-session[2412]: pam_unix(sshd:session): session closed for user core Apr 29 23:57:29.679918 systemd[1]: sshd@6-172.31.28.53:22-139.178.89.65:39822.service: Deactivated successfully. Apr 29 23:57:29.687116 systemd[1]: session-7.scope: Deactivated successfully. Apr 29 23:57:29.688893 systemd-logind[2028]: Session 7 logged out. Waiting for processes to exit. Apr 29 23:57:29.694296 systemd-logind[2028]: Removed session 7. 
Apr 29 23:57:30.959213 kubelet[3629]: I0429 23:57:30.958793 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-53" podStartSLOduration=4.958750113 podStartE2EDuration="4.958750113s" podCreationTimestamp="2025-04-29 23:57:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:57:27.300334023 +0000 UTC m=+1.423160300" watchObservedRunningTime="2025-04-29 23:57:30.958750113 +0000 UTC m=+5.081576390"
Apr 29 23:57:39.674212 kubelet[3629]: I0429 23:57:39.674127 3629 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 29 23:57:39.675512 containerd[2056]: time="2025-04-29T23:57:39.675354893Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 29 23:57:39.676610 kubelet[3629]: I0429 23:57:39.675788 3629 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 29 23:57:40.278669 kubelet[3629]: I0429 23:57:40.278590 3629 topology_manager.go:215] "Topology Admit Handler" podUID="1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b" podNamespace="kube-system" podName="kube-proxy-kbrjh"
Apr 29 23:57:40.297100 kubelet[3629]: W0429 23:57:40.297033 3629 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-53" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-53' and this object
Apr 29 23:57:40.297495 kubelet[3629]: E0429 23:57:40.297412 3629 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-53" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-53' and this object
Apr 29 23:57:40.298958 kubelet[3629]: W0429 23:57:40.298806 3629 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-53" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-53' and this object
Apr 29 23:57:40.299396 kubelet[3629]: E0429 23:57:40.299166 3629 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-53" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-53' and this object
Apr 29 23:57:40.329193 kubelet[3629]: I0429 23:57:40.327831 3629 topology_manager.go:215] "Topology Admit Handler" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" podNamespace="kube-system" podName="cilium-xbzkf"
Apr 29 23:57:40.338735 kubelet[3629]: I0429 23:57:40.337986 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vts74\" (UniqueName: \"kubernetes.io/projected/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-api-access-vts74\") pod \"kube-proxy-kbrjh\" (UID: \"1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b\") " pod="kube-system/kube-proxy-kbrjh"
Apr 29 23:57:40.338735 kubelet[3629]: I0429 23:57:40.338082 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-proxy\") pod \"kube-proxy-kbrjh\" (UID: \"1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b\") " pod="kube-system/kube-proxy-kbrjh"
Apr 29 23:57:40.338735 kubelet[3629]: I0429 23:57:40.338122 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-xtables-lock\") pod \"kube-proxy-kbrjh\" (UID: \"1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b\") " pod="kube-system/kube-proxy-kbrjh"
Apr 29 23:57:40.338735 kubelet[3629]: I0429 23:57:40.338160 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-lib-modules\") pod \"kube-proxy-kbrjh\" (UID: \"1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b\") " pod="kube-system/kube-proxy-kbrjh"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439270 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cni-path\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439337 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63c7a280-fb80-4eb8-90d3-abc163980c40-clustermesh-secrets\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439375 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-hostproc\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439409 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-xtables-lock\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439447 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-run\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.439546 kubelet[3629]: I0429 23:57:40.439482 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-bpf-maps\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439517 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-etc-cni-netd\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439568 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-cgroup\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439693 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-lib-modules\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439753 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-net\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439792 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-kernel\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440516 kubelet[3629]: I0429 23:57:40.439828 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-hubble-tls\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440962 kubelet[3629]: I0429 23:57:40.439866 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-config-path\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.440962 kubelet[3629]: I0429 23:57:40.439901 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nknkq\" (UniqueName: \"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq\") pod \"cilium-xbzkf\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " pod="kube-system/cilium-xbzkf"
Apr 29 23:57:40.670795 kubelet[3629]: I0429 23:57:40.670598 3629 topology_manager.go:215] "Topology Admit Handler" podUID="b7be1b30-ccfe-43af-97b2-41874ca3c92e" podNamespace="kube-system" podName="cilium-operator-599987898-gprd8"
Apr 29 23:57:40.744864 kubelet[3629]: I0429 23:57:40.744689 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be1b30-ccfe-43af-97b2-41874ca3c92e-cilium-config-path\") pod \"cilium-operator-599987898-gprd8\" (UID: \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\") " pod="kube-system/cilium-operator-599987898-gprd8"
Apr 29 23:57:40.744864 kubelet[3629]: I0429 23:57:40.744764 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snfv5\" (UniqueName: \"kubernetes.io/projected/b7be1b30-ccfe-43af-97b2-41874ca3c92e-kube-api-access-snfv5\") pod \"cilium-operator-599987898-gprd8\" (UID: \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\") " pod="kube-system/cilium-operator-599987898-gprd8"
Apr 29 23:57:41.442977 kubelet[3629]: E0429 23:57:41.442917 3629 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.443144 kubelet[3629]: E0429 23:57:41.443047 3629 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-proxy podName:1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b nodeName:}" failed. No retries permitted until 2025-04-29 23:57:41.943014817 +0000 UTC m=+16.065841082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-proxy") pod "kube-proxy-kbrjh" (UID: "1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b") : failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.452961 kubelet[3629]: E0429 23:57:41.452906 3629 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.452961 kubelet[3629]: E0429 23:57:41.452955 3629 projected.go:200] Error preparing data for projected volume kube-api-access-vts74 for pod kube-system/kube-proxy-kbrjh: failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.453200 kubelet[3629]: E0429 23:57:41.453049 3629 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-api-access-vts74 podName:1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b nodeName:}" failed. No retries permitted until 2025-04-29 23:57:41.953022638 +0000 UTC m=+16.075848915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vts74" (UniqueName: "kubernetes.io/projected/1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b-kube-api-access-vts74") pod "kube-proxy-kbrjh" (UID: "1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b") : failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.563262 kubelet[3629]: E0429 23:57:41.562701 3629 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.563262 kubelet[3629]: E0429 23:57:41.562751 3629 projected.go:200] Error preparing data for projected volume kube-api-access-nknkq for pod kube-system/cilium-xbzkf: failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.563262 kubelet[3629]: E0429 23:57:41.562823 3629 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq podName:63c7a280-fb80-4eb8-90d3-abc163980c40 nodeName:}" failed. No retries permitted until 2025-04-29 23:57:42.062799202 +0000 UTC m=+16.185625479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nknkq" (UniqueName: "kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq") pod "cilium-xbzkf" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40") : failed to sync configmap cache: timed out waiting for the condition
Apr 29 23:57:41.587663 containerd[2056]: time="2025-04-29T23:57:41.586127274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gprd8,Uid:b7be1b30-ccfe-43af-97b2-41874ca3c92e,Namespace:kube-system,Attempt:0,}"
Apr 29 23:57:41.625134 containerd[2056]: time="2025-04-29T23:57:41.624777282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:57:41.625134 containerd[2056]: time="2025-04-29T23:57:41.624920478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:57:41.625134 containerd[2056]: time="2025-04-29T23:57:41.624946626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:41.625866 containerd[2056]: time="2025-04-29T23:57:41.625545378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:41.720587 containerd[2056]: time="2025-04-29T23:57:41.720400999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gprd8,Uid:b7be1b30-ccfe-43af-97b2-41874ca3c92e,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\""
Apr 29 23:57:41.726058 containerd[2056]: time="2025-04-29T23:57:41.725614495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 29 23:57:42.106590 containerd[2056]: time="2025-04-29T23:57:42.106120325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbrjh,Uid:1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b,Namespace:kube-system,Attempt:0,}"
Apr 29 23:57:42.139063 containerd[2056]: time="2025-04-29T23:57:42.138734465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:57:42.139063 containerd[2056]: time="2025-04-29T23:57:42.138837905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:57:42.139063 containerd[2056]: time="2025-04-29T23:57:42.138865853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:42.140383 containerd[2056]: time="2025-04-29T23:57:42.140256185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:42.167670 containerd[2056]: time="2025-04-29T23:57:42.167599445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbzkf,Uid:63c7a280-fb80-4eb8-90d3-abc163980c40,Namespace:kube-system,Attempt:0,}"
Apr 29 23:57:42.225163 containerd[2056]: time="2025-04-29T23:57:42.224905697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:57:42.225424 containerd[2056]: time="2025-04-29T23:57:42.225182369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:57:42.225424 containerd[2056]: time="2025-04-29T23:57:42.225363317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:42.226665 containerd[2056]: time="2025-04-29T23:57:42.226502993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:57:42.230676 containerd[2056]: time="2025-04-29T23:57:42.230363561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbrjh,Uid:1d58fd07-9c9e-4b04-8ddc-7d5e56ab5f9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35d1faeef8057bd8c5df5d78c133c6e4deca1f6407c716833d78641ee09f1c6e\""
Apr 29 23:57:42.240784 containerd[2056]: time="2025-04-29T23:57:42.240720101Z" level=info msg="CreateContainer within sandbox \"35d1faeef8057bd8c5df5d78c133c6e4deca1f6407c716833d78641ee09f1c6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 29 23:57:42.268822 containerd[2056]: time="2025-04-29T23:57:42.268653246Z" level=info msg="CreateContainer within sandbox \"35d1faeef8057bd8c5df5d78c133c6e4deca1f6407c716833d78641ee09f1c6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18e0e6dec197ec711f39fe357ff6831eaaf6955432cf4a69bd282bf116bfc590\""
Apr 29 23:57:42.273344 containerd[2056]: time="2025-04-29T23:57:42.271527402Z" level=info msg="StartContainer for \"18e0e6dec197ec711f39fe357ff6831eaaf6955432cf4a69bd282bf116bfc590\""
Apr 29 23:57:42.315450 containerd[2056]: time="2025-04-29T23:57:42.315386922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbzkf,Uid:63c7a280-fb80-4eb8-90d3-abc163980c40,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\""
Apr 29 23:57:42.397451 containerd[2056]: time="2025-04-29T23:57:42.396446646Z" level=info msg="StartContainer for \"18e0e6dec197ec711f39fe357ff6831eaaf6955432cf4a69bd282bf116bfc590\" returns successfully"
Apr 29 23:57:44.121295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106103385.mount: Deactivated successfully.
Apr 29 23:57:44.815399 containerd[2056]: time="2025-04-29T23:57:44.815062738Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 29 23:57:44.817340 containerd[2056]: time="2025-04-29T23:57:44.817248238Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Apr 29 23:57:44.819073 containerd[2056]: time="2025-04-29T23:57:44.818994070Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 29 23:57:44.823592 containerd[2056]: time="2025-04-29T23:57:44.823522606Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.097819587s"
Apr 29 23:57:44.823849 containerd[2056]: time="2025-04-29T23:57:44.823585918Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Apr 29 23:57:44.826202 containerd[2056]: time="2025-04-29T23:57:44.825862918Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 29 23:57:44.831220 containerd[2056]: time="2025-04-29T23:57:44.831159178Z" level=info msg="CreateContainer within sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 29 23:57:44.864155 containerd[2056]: time="2025-04-29T23:57:44.863977714Z" level=info msg="CreateContainer within sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\""
Apr 29 23:57:44.864893 containerd[2056]: time="2025-04-29T23:57:44.864845746Z" level=info msg="StartContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\""
Apr 29 23:57:44.961685 containerd[2056]: time="2025-04-29T23:57:44.961605551Z" level=info msg="StartContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" returns successfully"
Apr 29 23:57:45.314195 kubelet[3629]: I0429 23:57:45.313417 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbrjh" podStartSLOduration=5.313378353 podStartE2EDuration="5.313378353s" podCreationTimestamp="2025-04-29 23:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:57:43.286147483 +0000 UTC m=+17.408973784" watchObservedRunningTime="2025-04-29 23:57:45.313378353 +0000 UTC m=+19.436204630"
Apr 29 23:57:51.965105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430332378.mount: Deactivated successfully.
Apr 29 23:57:55.167704 containerd[2056]: time="2025-04-29T23:57:55.167202714Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 29 23:57:55.169566 containerd[2056]: time="2025-04-29T23:57:55.169451058Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Apr 29 23:57:55.172200 containerd[2056]: time="2025-04-29T23:57:55.172124022Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 29 23:57:55.176136 containerd[2056]: time="2025-04-29T23:57:55.175927206Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.350002404s"
Apr 29 23:57:55.176136 containerd[2056]: time="2025-04-29T23:57:55.175988406Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Apr 29 23:57:55.181250 containerd[2056]: time="2025-04-29T23:57:55.181191858Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 29 23:57:55.209659 containerd[2056]: time="2025-04-29T23:57:55.209510910Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\""
Apr 29 23:57:55.211994 containerd[2056]: time="2025-04-29T23:57:55.210496506Z" level=info msg="StartContainer for \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\""
Apr 29 23:57:55.316458 containerd[2056]: time="2025-04-29T23:57:55.316392006Z" level=info msg="StartContainer for \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\" returns successfully"
Apr 29 23:57:55.386062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c-rootfs.mount: Deactivated successfully.
Apr 29 23:57:55.395317 kubelet[3629]: I0429 23:57:55.394855 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gprd8" podStartSLOduration=12.293918632 podStartE2EDuration="15.394833691s" podCreationTimestamp="2025-04-29 23:57:40 +0000 UTC" firstStartedPulling="2025-04-29 23:57:41.724187731 +0000 UTC m=+15.847014008" lastFinishedPulling="2025-04-29 23:57:44.825102778 +0000 UTC m=+18.947929067" observedRunningTime="2025-04-29 23:57:45.320880585 +0000 UTC m=+19.443706886" watchObservedRunningTime="2025-04-29 23:57:55.394833691 +0000 UTC m=+29.517659980"
Apr 29 23:57:55.528497 containerd[2056]: time="2025-04-29T23:57:55.528422323Z" level=info msg="shim disconnected" id=f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c namespace=k8s.io
Apr 29 23:57:55.529112 containerd[2056]: time="2025-04-29T23:57:55.528803803Z" level=warning msg="cleaning up after shim disconnected" id=f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c namespace=k8s.io
Apr 29 23:57:55.529112 containerd[2056]: time="2025-04-29T23:57:55.528833179Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:57:56.380351 containerd[2056]: time="2025-04-29T23:57:56.380293784Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 29 23:57:56.421173 containerd[2056]: time="2025-04-29T23:57:56.421093592Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\""
Apr 29 23:57:56.422270 containerd[2056]: time="2025-04-29T23:57:56.422126744Z" level=info msg="StartContainer for \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\""
Apr 29 23:57:56.483488 systemd[1]: run-containerd-runc-k8s.io-4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451-runc.u9iPR4.mount: Deactivated successfully.
Apr 29 23:57:56.542767 containerd[2056]: time="2025-04-29T23:57:56.541793636Z" level=info msg="StartContainer for \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\" returns successfully"
Apr 29 23:57:56.564395 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 29 23:57:56.565124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 29 23:57:56.565256 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 29 23:57:56.580473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 29 23:57:56.620402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 29 23:57:56.629787 containerd[2056]: time="2025-04-29T23:57:56.629421693Z" level=info msg="shim disconnected" id=4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451 namespace=k8s.io
Apr 29 23:57:56.629787 containerd[2056]: time="2025-04-29T23:57:56.629500809Z" level=warning msg="cleaning up after shim disconnected" id=4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451 namespace=k8s.io
Apr 29 23:57:56.629787 containerd[2056]: time="2025-04-29T23:57:56.629522889Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:57:57.377175 containerd[2056]: time="2025-04-29T23:57:57.375990981Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 29 23:57:57.412466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451-rootfs.mount: Deactivated successfully.
Apr 29 23:57:57.421096 containerd[2056]: time="2025-04-29T23:57:57.421031337Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\""
Apr 29 23:57:57.423196 containerd[2056]: time="2025-04-29T23:57:57.423144153Z" level=info msg="StartContainer for \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\""
Apr 29 23:57:57.492451 systemd[1]: run-containerd-runc-k8s.io-7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9-runc.MyPAZI.mount: Deactivated successfully.
Apr 29 23:57:57.555362 containerd[2056]: time="2025-04-29T23:57:57.554810794Z" level=info msg="StartContainer for \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\" returns successfully"
Apr 29 23:57:57.601415 containerd[2056]: time="2025-04-29T23:57:57.601281982Z" level=info msg="shim disconnected" id=7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9 namespace=k8s.io
Apr 29 23:57:57.601415 containerd[2056]: time="2025-04-29T23:57:57.601357930Z" level=warning msg="cleaning up after shim disconnected" id=7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9 namespace=k8s.io
Apr 29 23:57:57.601415 containerd[2056]: time="2025-04-29T23:57:57.601379314Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:57:58.388239 containerd[2056]: time="2025-04-29T23:57:58.387782794Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 29 23:57:58.411691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9-rootfs.mount: Deactivated successfully.
Apr 29 23:57:58.427025 containerd[2056]: time="2025-04-29T23:57:58.426952138Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\""
Apr 29 23:57:58.429035 containerd[2056]: time="2025-04-29T23:57:58.428976910Z" level=info msg="StartContainer for \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\""
Apr 29 23:57:58.525420 containerd[2056]: time="2025-04-29T23:57:58.525323890Z" level=info msg="StartContainer for \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\" returns successfully"
Apr 29 23:57:58.559822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9-rootfs.mount: Deactivated successfully.
Apr 29 23:57:58.566821 containerd[2056]: time="2025-04-29T23:57:58.566708963Z" level=info msg="shim disconnected" id=506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9 namespace=k8s.io
Apr 29 23:57:58.567285 containerd[2056]: time="2025-04-29T23:57:58.567032963Z" level=warning msg="cleaning up after shim disconnected" id=506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9 namespace=k8s.io
Apr 29 23:57:58.567285 containerd[2056]: time="2025-04-29T23:57:58.567065051Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:57:59.387345 containerd[2056]: time="2025-04-29T23:57:59.387271019Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 29 23:57:59.418725 containerd[2056]: time="2025-04-29T23:57:59.418616639Z" level=info msg="CreateContainer within sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\""
Apr 29 23:57:59.426251 containerd[2056]: time="2025-04-29T23:57:59.425383763Z" level=info msg="StartContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\""
Apr 29 23:57:59.535035 containerd[2056]: time="2025-04-29T23:57:59.534980615Z" level=info msg="StartContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" returns successfully"
Apr 29 23:57:59.747733 kubelet[3629]: I0429 23:57:59.747013 3629 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Apr 29 23:57:59.798511 kubelet[3629]: I0429 23:57:59.793269 3629 topology_manager.go:215] "Topology Admit Handler" podUID="aabf4a83-aa23-4095-ab1e-a1e99d724cbc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lhl99"
Apr 29 23:57:59.801742 kubelet[3629]: I0429 23:57:59.800059 3629 topology_manager.go:215] "Topology Admit Handler" podUID="51b85ac7-189b-45c5-aac8-d4c295d07f64" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xvlf2"
Apr 29 23:57:59.892222 kubelet[3629]: I0429 23:57:59.891827 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aabf4a83-aa23-4095-ab1e-a1e99d724cbc-config-volume\") pod \"coredns-7db6d8ff4d-lhl99\" (UID: \"aabf4a83-aa23-4095-ab1e-a1e99d724cbc\") " pod="kube-system/coredns-7db6d8ff4d-lhl99"
Apr 29 23:57:59.894805 kubelet[3629]: I0429 23:57:59.894748 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51b85ac7-189b-45c5-aac8-d4c295d07f64-config-volume\") pod \"coredns-7db6d8ff4d-xvlf2\" (UID: \"51b85ac7-189b-45c5-aac8-d4c295d07f64\") " pod="kube-system/coredns-7db6d8ff4d-xvlf2"
Apr 29 23:57:59.895203 kubelet[3629]: I0429 23:57:59.895059 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqk6x\" (UniqueName: \"kubernetes.io/projected/51b85ac7-189b-45c5-aac8-d4c295d07f64-kube-api-access-jqk6x\") pod \"coredns-7db6d8ff4d-xvlf2\" (UID: \"51b85ac7-189b-45c5-aac8-d4c295d07f64\") " pod="kube-system/coredns-7db6d8ff4d-xvlf2"
Apr 29 23:57:59.895203 kubelet[3629]: I0429 23:57:59.895128 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpd8t\" (UniqueName: \"kubernetes.io/projected/aabf4a83-aa23-4095-ab1e-a1e99d724cbc-kube-api-access-xpd8t\") pod \"coredns-7db6d8ff4d-lhl99\" (UID: \"aabf4a83-aa23-4095-ab1e-a1e99d724cbc\") " pod="kube-system/coredns-7db6d8ff4d-lhl99"
Apr 29 23:58:00.129090 containerd[2056]: time="2025-04-29T23:58:00.126929530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvlf2,Uid:51b85ac7-189b-45c5-aac8-d4c295d07f64,Namespace:kube-system,Attempt:0,}"
Apr 29 23:58:00.129090 containerd[2056]: time="2025-04-29T23:58:00.127333450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lhl99,Uid:aabf4a83-aa23-4095-ab1e-a1e99d724cbc,Namespace:kube-system,Attempt:0,}"
Apr 29 23:58:02.393354 systemd-networkd[1609]: cilium_host: Link UP
Apr 29 23:58:02.394653 systemd-networkd[1609]: cilium_net: Link UP
Apr 29 23:58:02.396574 systemd-networkd[1609]: cilium_net: Gained carrier
Apr 29 23:58:02.396950 systemd-networkd[1609]: cilium_host: Gained carrier
Apr 29 23:58:02.400257 (udev-worker)[4420]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:58:02.403941 (udev-worker)[4458]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:58:02.565590 (udev-worker)[4422]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:58:02.575328 systemd-networkd[1609]: cilium_net: Gained IPv6LL
Apr 29 23:58:02.577775 systemd-networkd[1609]: cilium_vxlan: Link UP
Apr 29 23:58:02.578153 systemd-networkd[1609]: cilium_vxlan: Gained carrier
Apr 29 23:58:02.805940 systemd-networkd[1609]: cilium_host: Gained IPv6LL
Apr 29 23:58:03.053671 kernel: NET: Registered PF_ALG protocol family
Apr 29 23:58:04.117875 systemd-networkd[1609]: cilium_vxlan: Gained IPv6LL
Apr 29 23:58:04.365602 systemd-networkd[1609]: lxc_health: Link UP
Apr 29 23:58:04.372715 systemd-networkd[1609]: lxc_health: Gained carrier
Apr 29 23:58:04.754413 systemd-networkd[1609]: lxce77aa0d88a80: Link UP
Apr 29 23:58:04.763730 kernel: eth0: renamed from tmpe8251
Apr 29 23:58:04.772604 systemd-networkd[1609]: lxce77aa0d88a80: Gained carrier
Apr 29 23:58:04.830534 (udev-worker)[4471]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:58:04.846809 systemd-networkd[1609]: lxc67e4259a9da6: Link UP
Apr 29 23:58:04.870731 kernel: eth0: renamed from tmpea5eb
Apr 29 23:58:04.878913 systemd-networkd[1609]: lxc67e4259a9da6: Gained carrier
Apr 29 23:58:05.782007 systemd-networkd[1609]: lxc_health: Gained IPv6LL
Apr 29 23:58:06.091124 systemd[1]: Started sshd@7-172.31.28.53:22-139.178.89.65:54028.service - OpenSSH per-connection server daemon (139.178.89.65:54028).
Apr 29 23:58:06.105804 systemd-networkd[1609]: lxc67e4259a9da6: Gained IPv6LL
Apr 29 23:58:06.227097 kubelet[3629]: I0429 23:58:06.227000 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xbzkf" podStartSLOduration=13.367138853 podStartE2EDuration="26.226982657s" podCreationTimestamp="2025-04-29 23:57:40 +0000 UTC" firstStartedPulling="2025-04-29 23:57:42.318152586 +0000 UTC m=+16.440978863" lastFinishedPulling="2025-04-29 23:57:55.17799639 +0000 UTC m=+29.300822667" observedRunningTime="2025-04-29 23:58:00.447945024 +0000 UTC m=+34.570771313" watchObservedRunningTime="2025-04-29 23:58:06.226982657 +0000 UTC m=+40.349808946"
Apr 29 23:58:06.453052 sshd[4814]: Accepted publickey for core from 139.178.89.65 port 54028 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:06.457940 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:06.477305 systemd-logind[2028]: New session 8 of user core.
Apr 29 23:58:06.487041 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 29 23:58:06.549890 systemd-networkd[1609]: lxce77aa0d88a80: Gained IPv6LL
Apr 29 23:58:06.992674 sshd[4819]: Connection closed by 139.178.89.65 port 54028
Apr 29 23:58:06.995685 sshd-session[4814]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:07.007086 systemd[1]: sshd@7-172.31.28.53:22-139.178.89.65:54028.service: Deactivated successfully.
Apr 29 23:58:07.019962 systemd-logind[2028]: Session 8 logged out. Waiting for processes to exit.
Apr 29 23:58:07.022142 systemd[1]: session-8.scope: Deactivated successfully.
Apr 29 23:58:07.029796 systemd-logind[2028]: Removed session 8.
Apr 29 23:58:08.647804 ntpd[2015]: Listen normally on 6 cilium_host 192.168.0.174:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 6 cilium_host 192.168.0.174:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 7 cilium_net [fe80::a83a:42ff:fe7e:9173%4]:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 8 cilium_host [fe80::4dc:f9ff:feb7:6885%5]:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 9 cilium_vxlan [fe80::e45c:d9ff:fe83:bb64%6]:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 10 lxc_health [fe80::8cd6:e7ff:fe4f:66e2%8]:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 11 lxce77aa0d88a80 [fe80::a47a:66ff:fe6c:2bb5%10]:123
Apr 29 23:58:08.648991 ntpd[2015]: 29 Apr 23:58:08 ntpd[2015]: Listen normally on 12 lxc67e4259a9da6 [fe80::8089:2fff:fe6c:2b31%12]:123
Apr 29 23:58:08.647972 ntpd[2015]: Listen normally on 7 cilium_net [fe80::a83a:42ff:fe7e:9173%4]:123
Apr 29 23:58:08.648056 ntpd[2015]: Listen normally on 8 cilium_host [fe80::4dc:f9ff:feb7:6885%5]:123
Apr 29 23:58:08.648126 ntpd[2015]: Listen normally on 9 cilium_vxlan [fe80::e45c:d9ff:fe83:bb64%6]:123
Apr 29 23:58:08.648193 ntpd[2015]: Listen normally on 10 lxc_health [fe80::8cd6:e7ff:fe4f:66e2%8]:123
Apr 29 23:58:08.648259 ntpd[2015]: Listen normally on 11 lxce77aa0d88a80 [fe80::a47a:66ff:fe6c:2bb5%10]:123
Apr 29 23:58:08.648326 ntpd[2015]: Listen normally on 12 lxc67e4259a9da6 [fe80::8089:2fff:fe6c:2b31%12]:123
Apr 29 23:58:12.036314 systemd[1]: Started sshd@8-172.31.28.53:22-139.178.89.65:51474.service - OpenSSH per-connection server daemon (139.178.89.65:51474).
Apr 29 23:58:12.325530 sshd[4844]: Accepted publickey for core from 139.178.89.65 port 51474 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:12.329583 sshd-session[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:12.342821 systemd-logind[2028]: New session 9 of user core.
Apr 29 23:58:12.350780 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 29 23:58:12.694774 sshd[4847]: Connection closed by 139.178.89.65 port 51474
Apr 29 23:58:12.695974 sshd-session[4844]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:12.706885 systemd[1]: sshd@8-172.31.28.53:22-139.178.89.65:51474.service: Deactivated successfully.
Apr 29 23:58:12.721041 systemd[1]: session-9.scope: Deactivated successfully.
Apr 29 23:58:12.726605 systemd-logind[2028]: Session 9 logged out. Waiting for processes to exit.
Apr 29 23:58:12.729119 systemd-logind[2028]: Removed session 9.
Apr 29 23:58:13.868695 containerd[2056]: time="2025-04-29T23:58:13.866390307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:58:13.868695 containerd[2056]: time="2025-04-29T23:58:13.866573127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:58:13.868695 containerd[2056]: time="2025-04-29T23:58:13.866848803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:58:13.868695 containerd[2056]: time="2025-04-29T23:58:13.867677439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:58:13.924430 containerd[2056]: time="2025-04-29T23:58:13.923769795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:58:13.924430 containerd[2056]: time="2025-04-29T23:58:13.923876595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:58:13.924430 containerd[2056]: time="2025-04-29T23:58:13.923913111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:58:13.924430 containerd[2056]: time="2025-04-29T23:58:13.924085299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:58:14.119219 containerd[2056]: time="2025-04-29T23:58:14.116706420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lhl99,Uid:aabf4a83-aa23-4095-ab1e-a1e99d724cbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea5eb4b5b0a63f54cbbce3ae0bdd7de14009032e0478bd92bdd4e09e8160ae3f\""
Apr 29 23:58:14.131711 containerd[2056]: time="2025-04-29T23:58:14.131655252Z" level=info msg="CreateContainer within sandbox \"ea5eb4b5b0a63f54cbbce3ae0bdd7de14009032e0478bd92bdd4e09e8160ae3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 29 23:58:14.159979 containerd[2056]: time="2025-04-29T23:58:14.159926940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xvlf2,Uid:51b85ac7-189b-45c5-aac8-d4c295d07f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"e82517f72ee367cb9424cf5c740ca9a9e68b067b75ad392725f78bcc491e0b2b\""
Apr 29 23:58:14.165867 containerd[2056]: time="2025-04-29T23:58:14.165683916Z" level=info msg="CreateContainer within sandbox \"e82517f72ee367cb9424cf5c740ca9a9e68b067b75ad392725f78bcc491e0b2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 29 23:58:14.188871 containerd[2056]: time="2025-04-29T23:58:14.187946376Z" level=info msg="CreateContainer within sandbox \"ea5eb4b5b0a63f54cbbce3ae0bdd7de14009032e0478bd92bdd4e09e8160ae3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f7fa965c8274a7c62993e40f05f4dd442ab730462b24399cae5e04130cd0212\""
Apr 29 23:58:14.189845 containerd[2056]: time="2025-04-29T23:58:14.189714072Z" level=info msg="StartContainer for \"5f7fa965c8274a7c62993e40f05f4dd442ab730462b24399cae5e04130cd0212\""
Apr 29 23:58:14.229072 containerd[2056]: time="2025-04-29T23:58:14.228970836Z" level=info msg="CreateContainer within sandbox \"e82517f72ee367cb9424cf5c740ca9a9e68b067b75ad392725f78bcc491e0b2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50a78f636be46c74aab4c1db33ab090623ea0654895804ccb03ce0f5ceb4cbc5\""
Apr 29 23:58:14.236661 containerd[2056]: time="2025-04-29T23:58:14.235238964Z" level=info msg="StartContainer for \"50a78f636be46c74aab4c1db33ab090623ea0654895804ccb03ce0f5ceb4cbc5\""
Apr 29 23:58:14.364081 containerd[2056]: time="2025-04-29T23:58:14.363887125Z" level=info msg="StartContainer for \"5f7fa965c8274a7c62993e40f05f4dd442ab730462b24399cae5e04130cd0212\" returns successfully"
Apr 29 23:58:14.391679 containerd[2056]: time="2025-04-29T23:58:14.391120393Z" level=info msg="StartContainer for \"50a78f636be46c74aab4c1db33ab090623ea0654895804ccb03ce0f5ceb4cbc5\" returns successfully"
Apr 29 23:58:14.549549 kubelet[3629]: I0429 23:58:14.549452 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lhl99" podStartSLOduration=34.54942869 podStartE2EDuration="34.54942869s" podCreationTimestamp="2025-04-29 23:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:58:14.515205062 +0000 UTC m=+48.638031339" watchObservedRunningTime="2025-04-29 23:58:14.54942869 +0000 UTC m=+48.672254967"
Apr 29 23:58:14.551920 kubelet[3629]: I0429 23:58:14.551806 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xvlf2" podStartSLOduration=34.551782922 podStartE2EDuration="34.551782922s" podCreationTimestamp="2025-04-29 23:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:58:14.54555173 +0000 UTC m=+48.668378115" watchObservedRunningTime="2025-04-29 23:58:14.551782922 +0000 UTC m=+48.674609259"
Apr 29 23:58:17.746853 systemd[1]: Started sshd@9-172.31.28.53:22-139.178.89.65:59176.service - OpenSSH per-connection server daemon (139.178.89.65:59176).
Apr 29 23:58:18.022088 sshd[5039]: Accepted publickey for core from 139.178.89.65 port 59176 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:18.024367 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:18.032114 systemd-logind[2028]: New session 10 of user core.
Apr 29 23:58:18.038201 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 29 23:58:18.335578 sshd[5042]: Connection closed by 139.178.89.65 port 59176
Apr 29 23:58:18.336452 sshd-session[5039]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:18.343562 systemd-logind[2028]: Session 10 logged out. Waiting for processes to exit.
Apr 29 23:58:18.344050 systemd[1]: sshd@9-172.31.28.53:22-139.178.89.65:59176.service: Deactivated successfully.
Apr 29 23:58:18.349137 systemd[1]: session-10.scope: Deactivated successfully.
Apr 29 23:58:18.352745 systemd-logind[2028]: Removed session 10.
Apr 29 23:58:23.387355 systemd[1]: Started sshd@10-172.31.28.53:22-139.178.89.65:59190.service - OpenSSH per-connection server daemon (139.178.89.65:59190).
Apr 29 23:58:23.668562 sshd[5054]: Accepted publickey for core from 139.178.89.65 port 59190 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:23.670739 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:23.678955 systemd-logind[2028]: New session 11 of user core.
Apr 29 23:58:23.695184 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 29 23:58:23.989239 sshd[5057]: Connection closed by 139.178.89.65 port 59190
Apr 29 23:58:23.989515 sshd-session[5054]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:23.998466 systemd[1]: sshd@10-172.31.28.53:22-139.178.89.65:59190.service: Deactivated successfully.
Apr 29 23:58:24.004352 systemd[1]: session-11.scope: Deactivated successfully.
Apr 29 23:58:24.006482 systemd-logind[2028]: Session 11 logged out. Waiting for processes to exit.
Apr 29 23:58:24.008876 systemd-logind[2028]: Removed session 11.
Apr 29 23:58:24.035480 systemd[1]: Started sshd@11-172.31.28.53:22-139.178.89.65:59192.service - OpenSSH per-connection server daemon (139.178.89.65:59192).
Apr 29 23:58:24.320318 sshd[5069]: Accepted publickey for core from 139.178.89.65 port 59192 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:24.323156 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:24.331227 systemd-logind[2028]: New session 12 of user core.
Apr 29 23:58:24.337115 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 29 23:58:24.720383 sshd[5072]: Connection closed by 139.178.89.65 port 59192
Apr 29 23:58:24.721988 sshd-session[5069]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:24.732125 systemd-logind[2028]: Session 12 logged out. Waiting for processes to exit.
Apr 29 23:58:24.734848 systemd[1]: sshd@11-172.31.28.53:22-139.178.89.65:59192.service: Deactivated successfully.
Apr 29 23:58:24.744165 systemd[1]: session-12.scope: Deactivated successfully.
Apr 29 23:58:24.748323 systemd-logind[2028]: Removed session 12.
Apr 29 23:58:24.763092 systemd[1]: Started sshd@12-172.31.28.53:22-139.178.89.65:59206.service - OpenSSH per-connection server daemon (139.178.89.65:59206).
Apr 29 23:58:25.044672 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 59206 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:25.047389 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:25.056990 systemd-logind[2028]: New session 13 of user core.
Apr 29 23:58:25.063144 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 29 23:58:25.370048 sshd[5084]: Connection closed by 139.178.89.65 port 59206
Apr 29 23:58:25.371220 sshd-session[5081]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:25.382469 systemd[1]: sshd@12-172.31.28.53:22-139.178.89.65:59206.service: Deactivated successfully.
Apr 29 23:58:25.395937 systemd[1]: session-13.scope: Deactivated successfully.
Apr 29 23:58:25.397597 systemd-logind[2028]: Session 13 logged out. Waiting for processes to exit.
Apr 29 23:58:25.400194 systemd-logind[2028]: Removed session 13.
Apr 29 23:58:30.417118 systemd[1]: Started sshd@13-172.31.28.53:22-139.178.89.65:37314.service - OpenSSH per-connection server daemon (139.178.89.65:37314).
Apr 29 23:58:30.710600 sshd[5097]: Accepted publickey for core from 139.178.89.65 port 37314 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:30.713166 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:30.721168 systemd-logind[2028]: New session 14 of user core.
Apr 29 23:58:30.730231 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 29 23:58:31.027961 sshd[5100]: Connection closed by 139.178.89.65 port 37314
Apr 29 23:58:31.028374 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:31.037127 systemd[1]: sshd@13-172.31.28.53:22-139.178.89.65:37314.service: Deactivated successfully.
Apr 29 23:58:31.047132 systemd[1]: session-14.scope: Deactivated successfully.
Apr 29 23:58:31.050228 systemd-logind[2028]: Session 14 logged out. Waiting for processes to exit.
Apr 29 23:58:31.053615 systemd-logind[2028]: Removed session 14.
Apr 29 23:58:36.074263 systemd[1]: Started sshd@14-172.31.28.53:22-139.178.89.65:37320.service - OpenSSH per-connection server daemon (139.178.89.65:37320).
Apr 29 23:58:36.358351 sshd[5110]: Accepted publickey for core from 139.178.89.65 port 37320 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:36.362145 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:36.371300 systemd-logind[2028]: New session 15 of user core.
Apr 29 23:58:36.377227 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 29 23:58:36.671352 sshd[5113]: Connection closed by 139.178.89.65 port 37320
Apr 29 23:58:36.672558 sshd-session[5110]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:36.679896 systemd[1]: sshd@14-172.31.28.53:22-139.178.89.65:37320.service: Deactivated successfully.
Apr 29 23:58:36.685485 systemd-logind[2028]: Session 15 logged out. Waiting for processes to exit.
Apr 29 23:58:36.686548 systemd[1]: session-15.scope: Deactivated successfully.
Apr 29 23:58:36.690482 systemd-logind[2028]: Removed session 15.
Apr 29 23:58:41.719264 systemd[1]: Started sshd@15-172.31.28.53:22-139.178.89.65:37432.service - OpenSSH per-connection server daemon (139.178.89.65:37432).
Apr 29 23:58:42.009709 sshd[5125]: Accepted publickey for core from 139.178.89.65 port 37432 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:42.012145 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:42.021195 systemd-logind[2028]: New session 16 of user core.
Apr 29 23:58:42.026201 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 29 23:58:42.328913 sshd[5128]: Connection closed by 139.178.89.65 port 37432
Apr 29 23:58:42.330148 sshd-session[5125]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:42.337392 systemd[1]: sshd@15-172.31.28.53:22-139.178.89.65:37432.service: Deactivated successfully.
Apr 29 23:58:42.338228 systemd-logind[2028]: Session 16 logged out. Waiting for processes to exit.
Apr 29 23:58:42.346606 systemd[1]: session-16.scope: Deactivated successfully.
Apr 29 23:58:42.348373 systemd-logind[2028]: Removed session 16.
Apr 29 23:58:42.376098 systemd[1]: Started sshd@16-172.31.28.53:22-139.178.89.65:37434.service - OpenSSH per-connection server daemon (139.178.89.65:37434).
Apr 29 23:58:42.659425 sshd[5138]: Accepted publickey for core from 139.178.89.65 port 37434 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:42.662387 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:42.670713 systemd-logind[2028]: New session 17 of user core.
Apr 29 23:58:42.682823 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 29 23:58:43.033747 sshd[5143]: Connection closed by 139.178.89.65 port 37434
Apr 29 23:58:43.034716 sshd-session[5138]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:43.040824 systemd[1]: sshd@16-172.31.28.53:22-139.178.89.65:37434.service: Deactivated successfully.
Apr 29 23:58:43.042750 systemd-logind[2028]: Session 17 logged out. Waiting for processes to exit.
Apr 29 23:58:43.050614 systemd[1]: session-17.scope: Deactivated successfully.
Apr 29 23:58:43.054060 systemd-logind[2028]: Removed session 17.
Apr 29 23:58:43.080134 systemd[1]: Started sshd@17-172.31.28.53:22-139.178.89.65:37446.service - OpenSSH per-connection server daemon (139.178.89.65:37446).
Apr 29 23:58:43.366940 sshd[5151]: Accepted publickey for core from 139.178.89.65 port 37446 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:43.369895 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:43.377781 systemd-logind[2028]: New session 18 of user core.
Apr 29 23:58:43.390896 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 29 23:58:46.060118 sshd[5154]: Connection closed by 139.178.89.65 port 37446
Apr 29 23:58:46.061478 sshd-session[5151]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:46.069419 systemd[1]: sshd@17-172.31.28.53:22-139.178.89.65:37446.service: Deactivated successfully.
Apr 29 23:58:46.083986 systemd[1]: session-18.scope: Deactivated successfully.
Apr 29 23:58:46.086069 systemd-logind[2028]: Session 18 logged out. Waiting for processes to exit.
Apr 29 23:58:46.088494 systemd-logind[2028]: Removed session 18.
Apr 29 23:58:46.105122 systemd[1]: Started sshd@18-172.31.28.53:22-139.178.89.65:37450.service - OpenSSH per-connection server daemon (139.178.89.65:37450).
Apr 29 23:58:46.394566 sshd[5171]: Accepted publickey for core from 139.178.89.65 port 37450 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:46.396713 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:46.406006 systemd-logind[2028]: New session 19 of user core.
Apr 29 23:58:46.416235 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 29 23:58:46.961305 sshd[5174]: Connection closed by 139.178.89.65 port 37450
Apr 29 23:58:46.961893 sshd-session[5171]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:46.969824 systemd-logind[2028]: Session 19 logged out. Waiting for processes to exit.
Apr 29 23:58:46.971503 systemd[1]: sshd@18-172.31.28.53:22-139.178.89.65:37450.service: Deactivated successfully.
Apr 29 23:58:46.976615 systemd[1]: session-19.scope: Deactivated successfully.
Apr 29 23:58:46.978845 systemd-logind[2028]: Removed session 19.
Apr 29 23:58:47.008110 systemd[1]: Started sshd@19-172.31.28.53:22-139.178.89.65:46294.service - OpenSSH per-connection server daemon (139.178.89.65:46294).
Apr 29 23:58:47.294058 sshd[5183]: Accepted publickey for core from 139.178.89.65 port 46294 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:47.296949 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:47.304735 systemd-logind[2028]: New session 20 of user core.
Apr 29 23:58:47.314160 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 29 23:58:47.615702 sshd[5186]: Connection closed by 139.178.89.65 port 46294
Apr 29 23:58:47.616026 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:47.623993 systemd[1]: sshd@19-172.31.28.53:22-139.178.89.65:46294.service: Deactivated successfully.
Apr 29 23:58:47.630476 systemd[1]: session-20.scope: Deactivated successfully.
Apr 29 23:58:47.633573 systemd-logind[2028]: Session 20 logged out. Waiting for processes to exit.
Apr 29 23:58:47.635420 systemd-logind[2028]: Removed session 20.
Apr 29 23:58:52.663104 systemd[1]: Started sshd@20-172.31.28.53:22-139.178.89.65:46304.service - OpenSSH per-connection server daemon (139.178.89.65:46304).
Apr 29 23:58:52.946672 sshd[5196]: Accepted publickey for core from 139.178.89.65 port 46304 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:52.949787 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:52.957016 systemd-logind[2028]: New session 21 of user core.
Apr 29 23:58:52.964234 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 29 23:58:53.256887 sshd[5202]: Connection closed by 139.178.89.65 port 46304
Apr 29 23:58:53.257741 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:53.266036 systemd[1]: sshd@20-172.31.28.53:22-139.178.89.65:46304.service: Deactivated successfully.
Apr 29 23:58:53.271220 systemd[1]: session-21.scope: Deactivated successfully.
Apr 29 23:58:53.274005 systemd-logind[2028]: Session 21 logged out. Waiting for processes to exit.
Apr 29 23:58:53.276577 systemd-logind[2028]: Removed session 21.
Apr 29 23:58:58.305096 systemd[1]: Started sshd@21-172.31.28.53:22-139.178.89.65:57648.service - OpenSSH per-connection server daemon (139.178.89.65:57648).
Apr 29 23:58:58.582392 sshd[5214]: Accepted publickey for core from 139.178.89.65 port 57648 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:58:58.585458 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:58:58.594319 systemd-logind[2028]: New session 22 of user core.
Apr 29 23:58:58.604107 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 29 23:58:58.897171 sshd[5217]: Connection closed by 139.178.89.65 port 57648
Apr 29 23:58:58.898488 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Apr 29 23:58:58.905847 systemd-logind[2028]: Session 22 logged out. Waiting for processes to exit.
Apr 29 23:58:58.906210 systemd[1]: sshd@21-172.31.28.53:22-139.178.89.65:57648.service: Deactivated successfully.
Apr 29 23:58:58.913904 systemd[1]: session-22.scope: Deactivated successfully.
Apr 29 23:58:58.916572 systemd-logind[2028]: Removed session 22.
Apr 29 23:59:03.945141 systemd[1]: Started sshd@22-172.31.28.53:22-139.178.89.65:57662.service - OpenSSH per-connection server daemon (139.178.89.65:57662).
Apr 29 23:59:04.231473 sshd[5228]: Accepted publickey for core from 139.178.89.65 port 57662 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:04.234526 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:04.242939 systemd-logind[2028]: New session 23 of user core.
Apr 29 23:59:04.253258 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 29 23:59:04.543735 sshd[5231]: Connection closed by 139.178.89.65 port 57662
Apr 29 23:59:04.543937 sshd-session[5228]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:04.551801 systemd[1]: sshd@22-172.31.28.53:22-139.178.89.65:57662.service: Deactivated successfully.
Apr 29 23:59:04.557241 systemd-logind[2028]: Session 23 logged out. Waiting for processes to exit.
Apr 29 23:59:04.557770 systemd[1]: session-23.scope: Deactivated successfully.
Apr 29 23:59:04.561807 systemd-logind[2028]: Removed session 23.
Apr 29 23:59:09.593121 systemd[1]: Started sshd@23-172.31.28.53:22-139.178.89.65:48704.service - OpenSSH per-connection server daemon (139.178.89.65:48704).
Apr 29 23:59:09.875952 sshd[5241]: Accepted publickey for core from 139.178.89.65 port 48704 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:09.880925 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:09.891742 systemd-logind[2028]: New session 24 of user core.
Apr 29 23:59:09.898208 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 29 23:59:10.191279 sshd[5244]: Connection closed by 139.178.89.65 port 48704
Apr 29 23:59:10.192193 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:10.200278 systemd[1]: sshd@23-172.31.28.53:22-139.178.89.65:48704.service: Deactivated successfully.
Apr 29 23:59:10.205829 systemd-logind[2028]: Session 24 logged out. Waiting for processes to exit.
Apr 29 23:59:10.207493 systemd[1]: session-24.scope: Deactivated successfully.
Apr 29 23:59:10.209504 systemd-logind[2028]: Removed session 24.
Apr 29 23:59:10.238141 systemd[1]: Started sshd@24-172.31.28.53:22-139.178.89.65:48718.service - OpenSSH per-connection server daemon (139.178.89.65:48718).
Apr 29 23:59:10.521136 sshd[5255]: Accepted publickey for core from 139.178.89.65 port 48718 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:10.523489 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:10.530910 systemd-logind[2028]: New session 25 of user core.
Apr 29 23:59:10.539253 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 29 23:59:13.615001 containerd[2056]: time="2025-04-29T23:59:13.614856623Z" level=info msg="StopContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" with timeout 30 (s)"
Apr 29 23:59:13.618695 containerd[2056]: time="2025-04-29T23:59:13.617919947Z" level=info msg="Stop container \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" with signal terminated"
Apr 29 23:59:13.670904 containerd[2056]: time="2025-04-29T23:59:13.670666668Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 29 23:59:13.684217 containerd[2056]: time="2025-04-29T23:59:13.683723160Z" level=info msg="StopContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" with timeout 2 (s)"
Apr 29 23:59:13.685842 containerd[2056]: time="2025-04-29T23:59:13.685763664Z" level=info msg="Stop container \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" with signal terminated"
Apr 29 23:59:13.705183 systemd-networkd[1609]: lxc_health: Link DOWN
Apr 29 23:59:13.707486 systemd-networkd[1609]: lxc_health: Lost carrier
Apr 29 23:59:13.730195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd-rootfs.mount: Deactivated successfully.
Apr 29 23:59:13.752012 containerd[2056]: time="2025-04-29T23:59:13.751387860Z" level=info msg="shim disconnected" id=bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd namespace=k8s.io
Apr 29 23:59:13.752012 containerd[2056]: time="2025-04-29T23:59:13.751689144Z" level=warning msg="cleaning up after shim disconnected" id=bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd namespace=k8s.io
Apr 29 23:59:13.752012 containerd[2056]: time="2025-04-29T23:59:13.751739484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:13.782680 containerd[2056]: time="2025-04-29T23:59:13.782441364Z" level=warning msg="cleanup warnings time=\"2025-04-29T23:59:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 29 23:59:13.791213 containerd[2056]: time="2025-04-29T23:59:13.791145216Z" level=info msg="StopContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" returns successfully"
Apr 29 23:59:13.791599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e-rootfs.mount: Deactivated successfully.
Apr 29 23:59:13.793580 containerd[2056]: time="2025-04-29T23:59:13.792350484Z" level=info msg="StopPodSandbox for \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\""
Apr 29 23:59:13.793580 containerd[2056]: time="2025-04-29T23:59:13.792499728Z" level=info msg="Container to stop \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 29 23:59:13.798136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9-shm.mount: Deactivated successfully.
Apr 29 23:59:13.801107 containerd[2056]: time="2025-04-29T23:59:13.800778408Z" level=info msg="shim disconnected" id=5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e namespace=k8s.io Apr 29 23:59:13.801107 containerd[2056]: time="2025-04-29T23:59:13.800856384Z" level=warning msg="cleaning up after shim disconnected" id=5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e namespace=k8s.io Apr 29 23:59:13.801107 containerd[2056]: time="2025-04-29T23:59:13.800875536Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 29 23:59:13.844839 containerd[2056]: time="2025-04-29T23:59:13.844766784Z" level=info msg="StopContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" returns successfully" Apr 29 23:59:13.845921 containerd[2056]: time="2025-04-29T23:59:13.845617764Z" level=info msg="StopPodSandbox for \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\"" Apr 29 23:59:13.845921 containerd[2056]: time="2025-04-29T23:59:13.845726196Z" level=info msg="Container to stop \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 29 23:59:13.845921 containerd[2056]: time="2025-04-29T23:59:13.845753424Z" level=info msg="Container to stop \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 29 23:59:13.845921 containerd[2056]: time="2025-04-29T23:59:13.845775984Z" level=info msg="Container to stop \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 29 23:59:13.845921 containerd[2056]: time="2025-04-29T23:59:13.845798280Z" level=info msg="Container to stop \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 29 23:59:13.845921 
containerd[2056]: time="2025-04-29T23:59:13.845823540Z" level=info msg="Container to stop \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 29 23:59:13.852953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878-shm.mount: Deactivated successfully. Apr 29 23:59:13.906880 containerd[2056]: time="2025-04-29T23:59:13.905392885Z" level=info msg="shim disconnected" id=24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9 namespace=k8s.io Apr 29 23:59:13.908031 containerd[2056]: time="2025-04-29T23:59:13.905616181Z" level=warning msg="cleaning up after shim disconnected" id=24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9 namespace=k8s.io Apr 29 23:59:13.909027 containerd[2056]: time="2025-04-29T23:59:13.908821261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 29 23:59:13.943055 containerd[2056]: time="2025-04-29T23:59:13.942743773Z" level=info msg="shim disconnected" id=2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878 namespace=k8s.io Apr 29 23:59:13.943055 containerd[2056]: time="2025-04-29T23:59:13.942833389Z" level=warning msg="cleaning up after shim disconnected" id=2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878 namespace=k8s.io Apr 29 23:59:13.943055 containerd[2056]: time="2025-04-29T23:59:13.942855637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 29 23:59:13.949466 containerd[2056]: time="2025-04-29T23:59:13.949230109Z" level=info msg="TearDown network for sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" successfully" Apr 29 23:59:13.949466 containerd[2056]: time="2025-04-29T23:59:13.949301653Z" level=info msg="StopPodSandbox for \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" returns successfully" Apr 29 23:59:13.980181 containerd[2056]: 
time="2025-04-29T23:59:13.980116441Z" level=info msg="TearDown network for sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" successfully" Apr 29 23:59:13.980567 containerd[2056]: time="2025-04-29T23:59:13.980409145Z" level=info msg="StopPodSandbox for \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" returns successfully" Apr 29 23:59:13.987941 kubelet[3629]: I0429 23:59:13.987889 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snfv5\" (UniqueName: \"kubernetes.io/projected/b7be1b30-ccfe-43af-97b2-41874ca3c92e-kube-api-access-snfv5\") pod \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\" (UID: \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\") " Apr 29 23:59:13.987941 kubelet[3629]: I0429 23:59:13.987957 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be1b30-ccfe-43af-97b2-41874ca3c92e-cilium-config-path\") pod \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\" (UID: \"b7be1b30-ccfe-43af-97b2-41874ca3c92e\") " Apr 29 23:59:13.998412 kubelet[3629]: I0429 23:59:13.998347 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7be1b30-ccfe-43af-97b2-41874ca3c92e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7be1b30-ccfe-43af-97b2-41874ca3c92e" (UID: "b7be1b30-ccfe-43af-97b2-41874ca3c92e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 29 23:59:13.999158 kubelet[3629]: I0429 23:59:13.999014 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7be1b30-ccfe-43af-97b2-41874ca3c92e-kube-api-access-snfv5" (OuterVolumeSpecName: "kube-api-access-snfv5") pod "b7be1b30-ccfe-43af-97b2-41874ca3c92e" (UID: "b7be1b30-ccfe-43af-97b2-41874ca3c92e"). InnerVolumeSpecName "kube-api-access-snfv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.088961 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-hostproc\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.089029 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-config-path\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.089054 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-hostproc" (OuterVolumeSpecName: "hostproc") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.089089 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-kernel\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.089123 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cni-path\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.089954 kubelet[3629]: I0429 23:59:14.089156 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-lib-modules\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089192 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-etc-cni-netd\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089227 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-net\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089264 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nknkq\" (UniqueName: 
\"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089302 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-xtables-lock\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089335 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-run\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090362 kubelet[3629]: I0429 23:59:14.089369 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-cgroup\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089408 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-hubble-tls\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089445 3629 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63c7a280-fb80-4eb8-90d3-abc163980c40-clustermesh-secrets\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089478 3629 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-bpf-maps\") pod \"63c7a280-fb80-4eb8-90d3-abc163980c40\" (UID: \"63c7a280-fb80-4eb8-90d3-abc163980c40\") " Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089533 3629 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-snfv5\" (UniqueName: \"kubernetes.io/projected/b7be1b30-ccfe-43af-97b2-41874ca3c92e-kube-api-access-snfv5\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089558 3629 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7be1b30-ccfe-43af-97b2-41874ca3c92e-cilium-config-path\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.090694 kubelet[3629]: I0429 23:59:14.089585 3629 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-hostproc\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.091027 kubelet[3629]: I0429 23:59:14.089670 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091027 kubelet[3629]: I0429 23:59:14.089715 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091027 kubelet[3629]: I0429 23:59:14.089755 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cni-path" (OuterVolumeSpecName: "cni-path") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091027 kubelet[3629]: I0429 23:59:14.089790 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091027 kubelet[3629]: I0429 23:59:14.089823 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091279 kubelet[3629]: I0429 23:59:14.089883 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091279 kubelet[3629]: I0429 23:59:14.090816 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091279 kubelet[3629]: I0429 23:59:14.090913 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.091279 kubelet[3629]: I0429 23:59:14.090982 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 29 23:59:14.100672 kubelet[3629]: I0429 23:59:14.099662 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq" (OuterVolumeSpecName: "kube-api-access-nknkq") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "kube-api-access-nknkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 29 23:59:14.100883 kubelet[3629]: I0429 23:59:14.100609 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 29 23:59:14.103500 kubelet[3629]: I0429 23:59:14.103434 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63c7a280-fb80-4eb8-90d3-abc163980c40-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 29 23:59:14.104680 kubelet[3629]: I0429 23:59:14.104594 3629 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "63c7a280-fb80-4eb8-90d3-abc163980c40" (UID: "63c7a280-fb80-4eb8-90d3-abc163980c40"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190328 3629 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-net\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190381 3629 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nknkq\" (UniqueName: \"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-kube-api-access-nknkq\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190405 3629 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-xtables-lock\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190429 3629 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-run\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190450 3629 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-cgroup\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190469 3629 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/63c7a280-fb80-4eb8-90d3-abc163980c40-hubble-tls\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.190671 kubelet[3629]: I0429 23:59:14.190487 3629 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-bpf-maps\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 
23:59:14.190671 kubelet[3629]: I0429 23:59:14.190508 3629 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/63c7a280-fb80-4eb8-90d3-abc163980c40-clustermesh-secrets\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.191180 kubelet[3629]: I0429 23:59:14.190530 3629 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/63c7a280-fb80-4eb8-90d3-abc163980c40-cilium-config-path\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.191180 kubelet[3629]: I0429 23:59:14.190549 3629 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-host-proc-sys-kernel\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.191180 kubelet[3629]: I0429 23:59:14.190568 3629 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-cni-path\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.191180 kubelet[3629]: I0429 23:59:14.190588 3629 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-lib-modules\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.191180 kubelet[3629]: I0429 23:59:14.190606 3629 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/63c7a280-fb80-4eb8-90d3-abc163980c40-etc-cni-netd\") on node \"ip-172-31-28-53\" DevicePath \"\"" Apr 29 23:59:14.637055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878-rootfs.mount: Deactivated successfully. 
Apr 29 23:59:14.639591 kubelet[3629]: I0429 23:59:14.638134 3629 scope.go:117] "RemoveContainer" containerID="bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd" Apr 29 23:59:14.642610 systemd[1]: var-lib-kubelet-pods-63c7a280\x2dfb80\x2d4eb8\x2d90d3\x2dabc163980c40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnknkq.mount: Deactivated successfully. Apr 29 23:59:14.648252 containerd[2056]: time="2025-04-29T23:59:14.645592008Z" level=info msg="RemoveContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\"" Apr 29 23:59:14.642908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9-rootfs.mount: Deactivated successfully. Apr 29 23:59:14.643150 systemd[1]: var-lib-kubelet-pods-b7be1b30\x2dccfe\x2d43af\x2d97b2\x2d41874ca3c92e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsnfv5.mount: Deactivated successfully. Apr 29 23:59:14.643372 systemd[1]: var-lib-kubelet-pods-63c7a280\x2dfb80\x2d4eb8\x2d90d3\x2dabc163980c40-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 29 23:59:14.643594 systemd[1]: var-lib-kubelet-pods-63c7a280\x2dfb80\x2d4eb8\x2d90d3\x2dabc163980c40-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Apr 29 23:59:14.664463 containerd[2056]: time="2025-04-29T23:59:14.664394677Z" level=info msg="RemoveContainer for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" returns successfully" Apr 29 23:59:14.665064 kubelet[3629]: I0429 23:59:14.664987 3629 scope.go:117] "RemoveContainer" containerID="bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd" Apr 29 23:59:14.666204 kubelet[3629]: E0429 23:59:14.665881 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\": not found" containerID="bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd" Apr 29 23:59:14.666204 kubelet[3629]: I0429 23:59:14.665932 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd"} err="failed to get container status \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\": not found" Apr 29 23:59:14.666204 kubelet[3629]: I0429 23:59:14.666060 3629 scope.go:117] "RemoveContainer" containerID="5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e" Apr 29 23:59:14.666436 containerd[2056]: time="2025-04-29T23:59:14.665554297Z" level=error msg="ContainerStatus for \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc7489e337026cc1ec9879dffebc5ae6bd565a0e1da74d260ce9f2bf77fe86cd\": not found" Apr 29 23:59:14.672315 containerd[2056]: time="2025-04-29T23:59:14.671941297Z" level=info msg="RemoveContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\"" Apr 29 
23:59:14.679463 containerd[2056]: time="2025-04-29T23:59:14.679406893Z" level=info msg="RemoveContainer for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" returns successfully" Apr 29 23:59:14.680012 kubelet[3629]: I0429 23:59:14.679765 3629 scope.go:117] "RemoveContainer" containerID="506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9" Apr 29 23:59:14.690689 containerd[2056]: time="2025-04-29T23:59:14.689364013Z" level=info msg="RemoveContainer for \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\"" Apr 29 23:59:14.701840 containerd[2056]: time="2025-04-29T23:59:14.701771953Z" level=info msg="RemoveContainer for \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\" returns successfully" Apr 29 23:59:14.702286 kubelet[3629]: I0429 23:59:14.702083 3629 scope.go:117] "RemoveContainer" containerID="7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9" Apr 29 23:59:14.705777 containerd[2056]: time="2025-04-29T23:59:14.705343381Z" level=info msg="RemoveContainer for \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\"" Apr 29 23:59:14.712013 containerd[2056]: time="2025-04-29T23:59:14.711963805Z" level=info msg="RemoveContainer for \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\" returns successfully" Apr 29 23:59:14.712547 kubelet[3629]: I0429 23:59:14.712515 3629 scope.go:117] "RemoveContainer" containerID="4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451" Apr 29 23:59:14.715881 containerd[2056]: time="2025-04-29T23:59:14.715324357Z" level=info msg="RemoveContainer for \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\"" Apr 29 23:59:14.722981 containerd[2056]: time="2025-04-29T23:59:14.722907505Z" level=info msg="RemoveContainer for \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\" returns successfully" Apr 29 23:59:14.723895 kubelet[3629]: I0429 23:59:14.723867 3629 scope.go:117] "RemoveContainer" 
containerID="f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c" Apr 29 23:59:14.727387 containerd[2056]: time="2025-04-29T23:59:14.726920905Z" level=info msg="RemoveContainer for \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\"" Apr 29 23:59:14.733371 containerd[2056]: time="2025-04-29T23:59:14.733307017Z" level=info msg="RemoveContainer for \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\" returns successfully" Apr 29 23:59:14.733911 kubelet[3629]: I0429 23:59:14.733862 3629 scope.go:117] "RemoveContainer" containerID="5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e" Apr 29 23:59:14.734681 containerd[2056]: time="2025-04-29T23:59:14.734536489Z" level=error msg="ContainerStatus for \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\": not found" Apr 29 23:59:14.734997 kubelet[3629]: E0429 23:59:14.734928 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\": not found" containerID="5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e" Apr 29 23:59:14.735159 kubelet[3629]: I0429 23:59:14.734984 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e"} err="failed to get container status \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5574f4f7d1d310d1d8f5c79eed2b8090f2aef3620265e3a5cc6fdd256cf09b4e\": not found" Apr 29 23:59:14.735159 kubelet[3629]: I0429 23:59:14.735023 3629 scope.go:117] "RemoveContainer" 
containerID="506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9"
Apr 29 23:59:14.735659 containerd[2056]: time="2025-04-29T23:59:14.735501721Z" level=error msg="ContainerStatus for \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\": not found"
Apr 29 23:59:14.735924 kubelet[3629]: E0429 23:59:14.735885 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\": not found" containerID="506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9"
Apr 29 23:59:14.736014 kubelet[3629]: I0429 23:59:14.735938 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9"} err="failed to get container status \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"506f3baf692c3bb0ed2124b3cdd5e9038ebb13d8a28c94a00e0d5f004221c6e9\": not found"
Apr 29 23:59:14.736014 kubelet[3629]: I0429 23:59:14.735972 3629 scope.go:117] "RemoveContainer" containerID="7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9"
Apr 29 23:59:14.736565 containerd[2056]: time="2025-04-29T23:59:14.736446289Z" level=error msg="ContainerStatus for \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\": not found"
Apr 29 23:59:14.736845 kubelet[3629]: E0429 23:59:14.736811 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\": not found" containerID="7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9"
Apr 29 23:59:14.736939 kubelet[3629]: I0429 23:59:14.736858 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9"} err="failed to get container status \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e0f4df848a5a5a1b581ae460e895f3c409684f39315b28b846fe46f6b4f88e9\": not found"
Apr 29 23:59:14.736939 kubelet[3629]: I0429 23:59:14.736894 3629 scope.go:117] "RemoveContainer" containerID="4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451"
Apr 29 23:59:14.737574 containerd[2056]: time="2025-04-29T23:59:14.737499745Z" level=error msg="ContainerStatus for \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\": not found"
Apr 29 23:59:14.737944 kubelet[3629]: E0429 23:59:14.737891 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\": not found" containerID="4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451"
Apr 29 23:59:14.738074 kubelet[3629]: I0429 23:59:14.737939 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451"} err="failed to get container status \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f08842bfcd89d5b8a2e769470bb2067d889834c24c528a62683930202b57451\": not found"
Apr 29 23:59:14.738074 kubelet[3629]: I0429 23:59:14.737975 3629 scope.go:117] "RemoveContainer" containerID="f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c"
Apr 29 23:59:14.738827 containerd[2056]: time="2025-04-29T23:59:14.738757045Z" level=error msg="ContainerStatus for \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\": not found"
Apr 29 23:59:14.739344 kubelet[3629]: E0429 23:59:14.739239 3629 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\": not found" containerID="f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c"
Apr 29 23:59:14.739344 kubelet[3629]: I0429 23:59:14.739298 3629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c"} err="failed to get container status \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1e502ea591cb47fdc04f6d6926c54fba805b4ac7d778240839a875ca5b34f4c\": not found"
Apr 29 23:59:15.541696 sshd[5258]: Connection closed by 139.178.89.65 port 48718
Apr 29 23:59:15.542884 sshd-session[5255]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:15.549224 systemd[1]: sshd@24-172.31.28.53:22-139.178.89.65:48718.service: Deactivated successfully.
Apr 29 23:59:15.556900 systemd-logind[2028]: Session 25 logged out. Waiting for processes to exit.
Apr 29 23:59:15.557421 systemd[1]: session-25.scope: Deactivated successfully.
Apr 29 23:59:15.561314 systemd-logind[2028]: Removed session 25.
Apr 29 23:59:15.590104 systemd[1]: Started sshd@25-172.31.28.53:22-139.178.89.65:48726.service - OpenSSH per-connection server daemon (139.178.89.65:48726).
Apr 29 23:59:15.879225 sshd[5426]: Accepted publickey for core from 139.178.89.65 port 48726 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:15.881876 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:15.890160 systemd-logind[2028]: New session 26 of user core.
Apr 29 23:59:15.895415 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 29 23:59:16.152731 kubelet[3629]: I0429 23:59:16.152552 3629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" path="/var/lib/kubelet/pods/63c7a280-fb80-4eb8-90d3-abc163980c40/volumes"
Apr 29 23:59:16.156524 kubelet[3629]: I0429 23:59:16.155946 3629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7be1b30-ccfe-43af-97b2-41874ca3c92e" path="/var/lib/kubelet/pods/b7be1b30-ccfe-43af-97b2-41874ca3c92e/volumes"
Apr 29 23:59:16.374397 kubelet[3629]: E0429 23:59:16.374334 3629 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 29 23:59:16.647812 ntpd[2015]: Deleting interface #10 lxc_health, fe80::8cd6:e7ff:fe4f:66e2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs
Apr 29 23:59:16.648329 ntpd[2015]: 29 Apr 23:59:16 ntpd[2015]: Deleting interface #10 lxc_health, fe80::8cd6:e7ff:fe4f:66e2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs
Apr 29 23:59:17.143523 sshd[5429]: Connection closed by 139.178.89.65 port 48726
Apr 29 23:59:17.140957 sshd-session[5426]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:17.155955 kubelet[3629]: I0429 23:59:17.155874 3629 topology_manager.go:215] "Topology Admit Handler" podUID="6a157f61-81dd-4e34-ac7d-806d3c338ab7" podNamespace="kube-system" podName="cilium-qp5mg"
Apr 29 23:59:17.160857 kubelet[3629]: E0429 23:59:17.159049 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="mount-cgroup"
Apr 29 23:59:17.160857 kubelet[3629]: E0429 23:59:17.159459 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="cilium-agent"
Apr 29 23:59:17.164388 kubelet[3629]: E0429 23:59:17.159495 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b7be1b30-ccfe-43af-97b2-41874ca3c92e" containerName="cilium-operator"
Apr 29 23:59:17.164388 kubelet[3629]: E0429 23:59:17.162613 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="apply-sysctl-overwrites"
Apr 29 23:59:17.164388 kubelet[3629]: E0429 23:59:17.162848 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="mount-bpf-fs"
Apr 29 23:59:17.164388 kubelet[3629]: E0429 23:59:17.162866 3629 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="clean-cilium-state"
Apr 29 23:59:17.164388 kubelet[3629]: I0429 23:59:17.163178 3629 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7be1b30-ccfe-43af-97b2-41874ca3c92e" containerName="cilium-operator"
Apr 29 23:59:17.164388 kubelet[3629]: I0429 23:59:17.163207 3629 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c7a280-fb80-4eb8-90d3-abc163980c40" containerName="cilium-agent"
Apr 29 23:59:17.166292 systemd[1]: sshd@25-172.31.28.53:22-139.178.89.65:48726.service: Deactivated successfully.
Apr 29 23:59:17.179717 systemd[1]: session-26.scope: Deactivated successfully.
Apr 29 23:59:17.183417 systemd-logind[2028]: Session 26 logged out. Waiting for processes to exit.
Apr 29 23:59:17.208735 systemd[1]: Started sshd@26-172.31.28.53:22-139.178.89.65:47736.service - OpenSSH per-connection server daemon (139.178.89.65:47736).
Apr 29 23:59:17.220136 systemd-logind[2028]: Removed session 26.
Apr 29 23:59:17.229672 kubelet[3629]: I0429 23:59:17.225329 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6a157f61-81dd-4e34-ac7d-806d3c338ab7-clustermesh-secrets\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.229672 kubelet[3629]: I0429 23:59:17.225492 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6a157f61-81dd-4e34-ac7d-806d3c338ab7-hubble-tls\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.230048 kubelet[3629]: I0429 23:59:17.225548 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-bpf-maps\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.230248 kubelet[3629]: I0429 23:59:17.230210 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-cilium-cgroup\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.230718 kubelet[3629]: I0429 23:59:17.230680 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-lib-modules\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.250939 kubelet[3629]: I0429 23:59:17.250850 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6a157f61-81dd-4e34-ac7d-806d3c338ab7-cilium-config-path\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.251260 kubelet[3629]: I0429 23:59:17.251198 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-host-proc-sys-kernel\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.254761 kubelet[3629]: I0429 23:59:17.251297 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-xtables-lock\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.254992 kubelet[3629]: I0429 23:59:17.254949 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-etc-cni-netd\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.255527 kubelet[3629]: I0429 23:59:17.255138 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6a157f61-81dd-4e34-ac7d-806d3c338ab7-cilium-ipsec-secrets\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.264424 kubelet[3629]: I0429 23:59:17.264307 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-cilium-run\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.264557 kubelet[3629]: I0429 23:59:17.264433 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-hostproc\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.271171 kubelet[3629]: I0429 23:59:17.264559 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-cni-path\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.271171 kubelet[3629]: I0429 23:59:17.270935 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6a157f61-81dd-4e34-ac7d-806d3c338ab7-host-proc-sys-net\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.271883 kubelet[3629]: I0429 23:59:17.271731 3629 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncb8s\" (UniqueName: \"kubernetes.io/projected/6a157f61-81dd-4e34-ac7d-806d3c338ab7-kube-api-access-ncb8s\") pod \"cilium-qp5mg\" (UID: \"6a157f61-81dd-4e34-ac7d-806d3c338ab7\") " pod="kube-system/cilium-qp5mg"
Apr 29 23:59:17.596680 sshd[5439]: Accepted publickey for core from 139.178.89.65 port 47736 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:17.599154 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:17.603388 containerd[2056]: time="2025-04-29T23:59:17.603318615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp5mg,Uid:6a157f61-81dd-4e34-ac7d-806d3c338ab7,Namespace:kube-system,Attempt:0,}"
Apr 29 23:59:17.612511 systemd-logind[2028]: New session 27 of user core.
Apr 29 23:59:17.625431 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 29 23:59:17.670257 containerd[2056]: time="2025-04-29T23:59:17.670059759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 29 23:59:17.670257 containerd[2056]: time="2025-04-29T23:59:17.670177587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 29 23:59:17.670257 containerd[2056]: time="2025-04-29T23:59:17.670204119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:59:17.670675 containerd[2056]: time="2025-04-29T23:59:17.670385607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 29 23:59:17.734243 containerd[2056]: time="2025-04-29T23:59:17.733901488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qp5mg,Uid:6a157f61-81dd-4e34-ac7d-806d3c338ab7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\""
Apr 29 23:59:17.741400 containerd[2056]: time="2025-04-29T23:59:17.741348412Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 29 23:59:17.765306 containerd[2056]: time="2025-04-29T23:59:17.765109288Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3027b4985a63b88bb4ad6e97b49e0888caecb0b2e9ed87458f6e19e17f33664f\""
Apr 29 23:59:17.766054 containerd[2056]: time="2025-04-29T23:59:17.765852256Z" level=info msg="StartContainer for \"3027b4985a63b88bb4ad6e97b49e0888caecb0b2e9ed87458f6e19e17f33664f\""
Apr 29 23:59:17.819198 sshd[5460]: Connection closed by 139.178.89.65 port 47736
Apr 29 23:59:17.820026 sshd-session[5439]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:17.832742 systemd[1]: sshd@26-172.31.28.53:22-139.178.89.65:47736.service: Deactivated successfully.
Apr 29 23:59:17.841209 systemd[1]: session-27.scope: Deactivated successfully.
Apr 29 23:59:17.843678 systemd-logind[2028]: Session 27 logged out. Waiting for processes to exit.
Apr 29 23:59:17.847200 systemd-logind[2028]: Removed session 27.
Apr 29 23:59:17.869597 systemd[1]: Started sshd@27-172.31.28.53:22-139.178.89.65:47748.service - OpenSSH per-connection server daemon (139.178.89.65:47748).
Apr 29 23:59:17.886377 containerd[2056]: time="2025-04-29T23:59:17.882696388Z" level=info msg="StartContainer for \"3027b4985a63b88bb4ad6e97b49e0888caecb0b2e9ed87458f6e19e17f33664f\" returns successfully"
Apr 29 23:59:17.966487 containerd[2056]: time="2025-04-29T23:59:17.966376409Z" level=info msg="shim disconnected" id=3027b4985a63b88bb4ad6e97b49e0888caecb0b2e9ed87458f6e19e17f33664f namespace=k8s.io
Apr 29 23:59:17.967373 containerd[2056]: time="2025-04-29T23:59:17.967080485Z" level=warning msg="cleaning up after shim disconnected" id=3027b4985a63b88bb4ad6e97b49e0888caecb0b2e9ed87458f6e19e17f33664f namespace=k8s.io
Apr 29 23:59:17.967373 containerd[2056]: time="2025-04-29T23:59:17.967121429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:18.169394 sshd[5524]: Accepted publickey for core from 139.178.89.65 port 47748 ssh2: RSA SHA256:rMShF5lv1krIneOW1i/lrlpFaOnnFxuzLqGDXTZQrzA
Apr 29 23:59:18.172158 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 29 23:59:18.184248 systemd-logind[2028]: New session 28 of user core.
Apr 29 23:59:18.191340 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 29 23:59:18.573340 kubelet[3629]: I0429 23:59:18.573159 3629 setters.go:580] "Node became not ready" node="ip-172-31-28-53" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-29T23:59:18Z","lastTransitionTime":"2025-04-29T23:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 29 23:59:18.682295 containerd[2056]: time="2025-04-29T23:59:18.682086400Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 29 23:59:18.712509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545242469.mount: Deactivated successfully.
Apr 29 23:59:18.717178 containerd[2056]: time="2025-04-29T23:59:18.717031205Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765\""
Apr 29 23:59:18.718462 containerd[2056]: time="2025-04-29T23:59:18.718039373Z" level=info msg="StartContainer for \"905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765\""
Apr 29 23:59:18.823212 containerd[2056]: time="2025-04-29T23:59:18.823123121Z" level=info msg="StartContainer for \"905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765\" returns successfully"
Apr 29 23:59:18.876751 containerd[2056]: time="2025-04-29T23:59:18.876392561Z" level=info msg="shim disconnected" id=905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765 namespace=k8s.io
Apr 29 23:59:18.876751 containerd[2056]: time="2025-04-29T23:59:18.876535541Z" level=warning msg="cleaning up after shim disconnected" id=905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765 namespace=k8s.io
Apr 29 23:59:18.876751 containerd[2056]: time="2025-04-29T23:59:18.876556337Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:19.398101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-905acc7e336a3c8609291ccefdf44465f5daaf91310f2bb87455ee5aa5013765-rootfs.mount: Deactivated successfully.
Apr 29 23:59:19.686005 containerd[2056]: time="2025-04-29T23:59:19.685949237Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 29 23:59:19.727366 containerd[2056]: time="2025-04-29T23:59:19.727285626Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785\""
Apr 29 23:59:19.728181 containerd[2056]: time="2025-04-29T23:59:19.728136522Z" level=info msg="StartContainer for \"a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785\""
Apr 29 23:59:19.838080 containerd[2056]: time="2025-04-29T23:59:19.838013286Z" level=info msg="StartContainer for \"a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785\" returns successfully"
Apr 29 23:59:19.884759 containerd[2056]: time="2025-04-29T23:59:19.884470578Z" level=info msg="shim disconnected" id=a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785 namespace=k8s.io
Apr 29 23:59:19.884759 containerd[2056]: time="2025-04-29T23:59:19.884544258Z" level=warning msg="cleaning up after shim disconnected" id=a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785 namespace=k8s.io
Apr 29 23:59:19.884759 containerd[2056]: time="2025-04-29T23:59:19.884562702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:20.400691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a37810fb75ec024d89882fb915d2b2048307b54dc0f508d81405f1f9deb67785-rootfs.mount: Deactivated successfully.
Apr 29 23:59:20.694189 containerd[2056]: time="2025-04-29T23:59:20.694111602Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 29 23:59:20.737355 containerd[2056]: time="2025-04-29T23:59:20.737294359Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed\""
Apr 29 23:59:20.740101 containerd[2056]: time="2025-04-29T23:59:20.738880759Z" level=info msg="StartContainer for \"ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed\""
Apr 29 23:59:20.844036 containerd[2056]: time="2025-04-29T23:59:20.843967555Z" level=info msg="StartContainer for \"ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed\" returns successfully"
Apr 29 23:59:20.884301 containerd[2056]: time="2025-04-29T23:59:20.884118811Z" level=info msg="shim disconnected" id=ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed namespace=k8s.io
Apr 29 23:59:20.884659 containerd[2056]: time="2025-04-29T23:59:20.884602255Z" level=warning msg="cleaning up after shim disconnected" id=ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed namespace=k8s.io
Apr 29 23:59:20.884787 containerd[2056]: time="2025-04-29T23:59:20.884762719Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:21.376509 kubelet[3629]: E0429 23:59:21.376361 3629 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 29 23:59:21.398863 systemd[1]: run-containerd-runc-k8s.io-ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed-runc.BzMMhK.mount: Deactivated successfully.
Apr 29 23:59:21.399225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec4edc06d8c03e44b6c44c7465f182c8feb0492f8cc7bb6cdaf1fc8be8e85fed-rootfs.mount: Deactivated successfully.
Apr 29 23:59:21.712239 containerd[2056]: time="2025-04-29T23:59:21.712144712Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 29 23:59:21.745176 containerd[2056]: time="2025-04-29T23:59:21.744367004Z" level=info msg="CreateContainer within sandbox \"f353c975a48a923557fcc5f6578ccca870f90ee021aba4d3bf9bf2b87ca8a776\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"366a095551830d78ff17a0c6dce4245bfe80ab2b2c5d30167aeca6d40167c46e\""
Apr 29 23:59:21.747355 containerd[2056]: time="2025-04-29T23:59:21.746354936Z" level=info msg="StartContainer for \"366a095551830d78ff17a0c6dce4245bfe80ab2b2c5d30167aeca6d40167c46e\""
Apr 29 23:59:21.862657 containerd[2056]: time="2025-04-29T23:59:21.861691712Z" level=info msg="StartContainer for \"366a095551830d78ff17a0c6dce4245bfe80ab2b2c5d30167aeca6d40167c46e\" returns successfully"
Apr 29 23:59:22.646653 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 29 23:59:26.094140 containerd[2056]: time="2025-04-29T23:59:26.093803217Z" level=info msg="StopPodSandbox for \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\""
Apr 29 23:59:26.094140 containerd[2056]: time="2025-04-29T23:59:26.093959685Z" level=info msg="TearDown network for sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" successfully"
Apr 29 23:59:26.094140 containerd[2056]: time="2025-04-29T23:59:26.093981225Z" level=info msg="StopPodSandbox for \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" returns successfully"
Apr 29 23:59:26.095763 containerd[2056]: time="2025-04-29T23:59:26.094589361Z" level=info msg="RemovePodSandbox for \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\""
Apr 29 23:59:26.095763 containerd[2056]: time="2025-04-29T23:59:26.094955277Z" level=info msg="Forcibly stopping sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\""
Apr 29 23:59:26.095763 containerd[2056]: time="2025-04-29T23:59:26.095065773Z" level=info msg="TearDown network for sandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" successfully"
Apr 29 23:59:26.101945 containerd[2056]: time="2025-04-29T23:59:26.101795565Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 29 23:59:26.101945 containerd[2056]: time="2025-04-29T23:59:26.101897385Z" level=info msg="RemovePodSandbox \"24c5d4f094a670340a47a5d6af714d0dedbdb2a6e66bb634e550a43a60782cf9\" returns successfully"
Apr 29 23:59:26.103457 containerd[2056]: time="2025-04-29T23:59:26.102838845Z" level=info msg="StopPodSandbox for \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\""
Apr 29 23:59:26.103457 containerd[2056]: time="2025-04-29T23:59:26.102980721Z" level=info msg="TearDown network for sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" successfully"
Apr 29 23:59:26.103457 containerd[2056]: time="2025-04-29T23:59:26.103003329Z" level=info msg="StopPodSandbox for \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" returns successfully"
Apr 29 23:59:26.103961 containerd[2056]: time="2025-04-29T23:59:26.103884945Z" level=info msg="RemovePodSandbox for \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\""
Apr 29 23:59:26.103961 containerd[2056]: time="2025-04-29T23:59:26.103955709Z" level=info msg="Forcibly stopping sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\""
Apr 29 23:59:26.104102 containerd[2056]: time="2025-04-29T23:59:26.104067501Z" level=info msg="TearDown network for sandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" successfully"
Apr 29 23:59:26.110696 containerd[2056]: time="2025-04-29T23:59:26.110560101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 29 23:59:26.110873 containerd[2056]: time="2025-04-29T23:59:26.110730369Z" level=info msg="RemovePodSandbox \"2d11dfb99d60fa08c7d99e20f39d06bcf6c8c9bb7375c190ae9630b883de9878\" returns successfully"
Apr 29 23:59:26.919028 systemd-networkd[1609]: lxc_health: Link UP
Apr 29 23:59:26.932592 systemd-networkd[1609]: lxc_health: Gained carrier
Apr 29 23:59:26.936921 (udev-worker)[6275]: Network interface NamePolicy= disabled on kernel command line.
Apr 29 23:59:27.324852 systemd[1]: run-containerd-runc-k8s.io-366a095551830d78ff17a0c6dce4245bfe80ab2b2c5d30167aeca6d40167c46e-runc.VKFRUJ.mount: Deactivated successfully.
Apr 29 23:59:27.661871 kubelet[3629]: I0429 23:59:27.660010 3629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qp5mg" podStartSLOduration=10.659989896999999 podStartE2EDuration="10.659989897s" podCreationTimestamp="2025-04-29 23:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-29 23:59:22.747339933 +0000 UTC m=+116.870166210" watchObservedRunningTime="2025-04-29 23:59:27.659989897 +0000 UTC m=+121.782816174"
Apr 29 23:59:28.151293 systemd-networkd[1609]: lxc_health: Gained IPv6LL
Apr 29 23:59:30.647894 ntpd[2015]: Listen normally on 13 lxc_health [fe80::a4d5:c6ff:fef4:42a7%14]:123
Apr 29 23:59:30.648940 ntpd[2015]: 29 Apr 23:59:30 ntpd[2015]: Listen normally on 13 lxc_health [fe80::a4d5:c6ff:fef4:42a7%14]:123
Apr 29 23:59:32.213446 sshd[5560]: Connection closed by 139.178.89.65 port 47748
Apr 29 23:59:32.213202 sshd-session[5524]: pam_unix(sshd:session): session closed for user core
Apr 29 23:59:32.224337 systemd[1]: sshd@27-172.31.28.53:22-139.178.89.65:47748.service: Deactivated successfully.
Apr 29 23:59:32.234329 systemd[1]: session-28.scope: Deactivated successfully.
Apr 29 23:59:32.234406 systemd-logind[2028]: Session 28 logged out. Waiting for processes to exit.
Apr 29 23:59:32.238930 systemd-logind[2028]: Removed session 28.
Apr 29 23:59:45.748419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d-rootfs.mount: Deactivated successfully.
Apr 29 23:59:45.792061 containerd[2056]: time="2025-04-29T23:59:45.791929507Z" level=info msg="shim disconnected" id=4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d namespace=k8s.io
Apr 29 23:59:45.792061 containerd[2056]: time="2025-04-29T23:59:45.792007399Z" level=warning msg="cleaning up after shim disconnected" id=4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d namespace=k8s.io
Apr 29 23:59:45.792061 containerd[2056]: time="2025-04-29T23:59:45.792026395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:46.789781 kubelet[3629]: I0429 23:59:46.789372 3629 scope.go:117] "RemoveContainer" containerID="4fed504350684b6576200d71ed7dfb02b151e6f5a53c2a28f6084ef6eec98c7d"
Apr 29 23:59:46.794667 containerd[2056]: time="2025-04-29T23:59:46.794196380Z" level=info msg="CreateContainer within sandbox \"fa3492cc56f41a77f7388d413bda4f8881ffcf6a6015e26163e7459def9b1cd8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 29 23:59:46.826830 containerd[2056]: time="2025-04-29T23:59:46.826754684Z" level=info msg="CreateContainer within sandbox \"fa3492cc56f41a77f7388d413bda4f8881ffcf6a6015e26163e7459def9b1cd8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2fb1909dcf20c8ee63cb11a94e0cea49bf8e28e44a35da066ac2a255d53a40d\""
Apr 29 23:59:46.827562 containerd[2056]: time="2025-04-29T23:59:46.827517764Z" level=info msg="StartContainer for \"a2fb1909dcf20c8ee63cb11a94e0cea49bf8e28e44a35da066ac2a255d53a40d\""
Apr 29 23:59:46.954165 containerd[2056]: time="2025-04-29T23:59:46.954091161Z" level=info msg="StartContainer for \"a2fb1909dcf20c8ee63cb11a94e0cea49bf8e28e44a35da066ac2a255d53a40d\" returns successfully"
Apr 29 23:59:48.964327 kubelet[3629]: E0429 23:59:48.964131 3629 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-53?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 29 23:59:50.629582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb-rootfs.mount: Deactivated successfully.
Apr 29 23:59:50.642914 containerd[2056]: time="2025-04-29T23:59:50.642772043Z" level=info msg="shim disconnected" id=026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb namespace=k8s.io
Apr 29 23:59:50.643741 containerd[2056]: time="2025-04-29T23:59:50.642882371Z" level=warning msg="cleaning up after shim disconnected" id=026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb namespace=k8s.io
Apr 29 23:59:50.643741 containerd[2056]: time="2025-04-29T23:59:50.643035383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 29 23:59:50.805187 kubelet[3629]: I0429 23:59:50.805137 3629 scope.go:117] "RemoveContainer" containerID="026533eaa1616ce5d8f1d0bc9d9a274598b4b893a652b3dc9221e77780e3e3bb"
Apr 29 23:59:50.809416 containerd[2056]: time="2025-04-29T23:59:50.809100132Z" level=info msg="CreateContainer within sandbox \"92226e3d26139ac0163bd051d3ea660fdc8a13d3952e2d96d79199bea73fad44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 29 23:59:50.837033 containerd[2056]: time="2025-04-29T23:59:50.836957688Z" level=info msg="CreateContainer within sandbox \"92226e3d26139ac0163bd051d3ea660fdc8a13d3952e2d96d79199bea73fad44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"64f4db37fa89c6a55d28804bdb8528c1202710232edc77185fa00a1b4660310d\""
Apr 29 23:59:50.837756 containerd[2056]: time="2025-04-29T23:59:50.837698940Z" level=info msg="StartContainer for \"64f4db37fa89c6a55d28804bdb8528c1202710232edc77185fa00a1b4660310d\""
Apr 29 23:59:50.955129 containerd[2056]: time="2025-04-29T23:59:50.954974005Z" level=info msg="StartContainer for \"64f4db37fa89c6a55d28804bdb8528c1202710232edc77185fa00a1b4660310d\" returns successfully"
Apr 29 23:59:58.965415 kubelet[3629]: E0429 23:59:58.965330 3629 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-28-53)"