Mar 17 17:24:48.196000 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Mar 17 17:24:48.196045 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025 Mar 17 17:24:48.196069 kernel: KASLR disabled due to lack of seed Mar 17 17:24:48.196086 kernel: efi: EFI v2.7 by EDK II Mar 17 17:24:48.196101 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Mar 17 17:24:48.196134 kernel: secureboot: Secure boot disabled Mar 17 17:24:48.196157 kernel: ACPI: Early table checksum verification disabled Mar 17 17:24:48.196173 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Mar 17 17:24:48.196189 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Mar 17 17:24:48.196204 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 17 17:24:48.196225 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Mar 17 17:24:48.196240 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 17 17:24:48.196255 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Mar 17 17:24:48.196271 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Mar 17 17:24:48.196289 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Mar 17 17:24:48.196309 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 17 17:24:48.196325 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Mar 17 17:24:48.196341 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Mar 17 17:24:48.196357 kernel: 
ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Mar 17 17:24:48.196373 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Mar 17 17:24:48.196389 kernel: printk: bootconsole [uart0] enabled Mar 17 17:24:48.196404 kernel: NUMA: Failed to initialise from firmware Mar 17 17:24:48.196421 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Mar 17 17:24:48.196437 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Mar 17 17:24:48.196452 kernel: Zone ranges: Mar 17 17:24:48.196468 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 17 17:24:48.196488 kernel: DMA32 empty Mar 17 17:24:48.196505 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Mar 17 17:24:48.196521 kernel: Movable zone start for each node Mar 17 17:24:48.196536 kernel: Early memory node ranges Mar 17 17:24:48.196552 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Mar 17 17:24:48.196568 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Mar 17 17:24:48.196584 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Mar 17 17:24:48.196600 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Mar 17 17:24:48.196615 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Mar 17 17:24:48.196631 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Mar 17 17:24:48.196647 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Mar 17 17:24:48.196662 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Mar 17 17:24:48.196682 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Mar 17 17:24:48.196699 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Mar 17 17:24:48.196721 kernel: psci: probing for conduit method from ACPI. Mar 17 17:24:48.196738 kernel: psci: PSCIv1.0 detected in firmware. 
Mar 17 17:24:48.196755 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 17:24:48.196775 kernel: psci: Trusted OS migration not required Mar 17 17:24:48.196792 kernel: psci: SMC Calling Convention v1.1 Mar 17 17:24:48.196809 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Mar 17 17:24:48.196825 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Mar 17 17:24:48.196842 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 17 17:24:48.196859 kernel: Detected PIPT I-cache on CPU0 Mar 17 17:24:48.196876 kernel: CPU features: detected: GIC system register CPU interface Mar 17 17:24:48.196892 kernel: CPU features: detected: Spectre-v2 Mar 17 17:24:48.196909 kernel: CPU features: detected: Spectre-v3a Mar 17 17:24:48.196925 kernel: CPU features: detected: Spectre-BHB Mar 17 17:24:48.196942 kernel: CPU features: detected: ARM erratum 1742098 Mar 17 17:24:48.196958 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Mar 17 17:24:48.196979 kernel: alternatives: applying boot alternatives Mar 17 17:24:48.196998 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405 Mar 17 17:24:48.197016 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:24:48.197033 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:24:48.197050 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:24:48.197066 kernel: Fallback order for Node 0: 0 Mar 17 17:24:48.197083 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 991872 Mar 17 17:24:48.197830 kernel: Policy zone: Normal Mar 17 17:24:48.197857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:24:48.197874 kernel: software IO TLB: area num 2. Mar 17 17:24:48.197898 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Mar 17 17:24:48.197916 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved) Mar 17 17:24:48.197934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 17 17:24:48.197951 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:24:48.197968 kernel: rcu: RCU event tracing is enabled. Mar 17 17:24:48.197985 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 17 17:24:48.198003 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:24:48.198020 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:24:48.198037 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:24:48.198053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 17 17:24:48.198070 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 17:24:48.198091 kernel: GICv3: 96 SPIs implemented Mar 17 17:24:48.198108 kernel: GICv3: 0 Extended SPIs implemented Mar 17 17:24:48.198149 kernel: Root IRQ handler: gic_handle_irq Mar 17 17:24:48.198169 kernel: GICv3: GICv3 features: 16 PPIs Mar 17 17:24:48.198185 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Mar 17 17:24:48.198202 kernel: ITS [mem 0x10080000-0x1009ffff] Mar 17 17:24:48.198219 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 17:24:48.198236 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Mar 17 17:24:48.198253 kernel: GICv3: using LPI property table @0x00000004000d0000 Mar 17 17:24:48.198269 kernel: ITS: Using hypervisor restricted LPI range [128] Mar 17 17:24:48.198286 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Mar 17 17:24:48.198303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:24:48.198326 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Mar 17 17:24:48.198344 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Mar 17 17:24:48.198360 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Mar 17 17:24:48.198377 kernel: Console: colour dummy device 80x25 Mar 17 17:24:48.198395 kernel: printk: console [tty1] enabled Mar 17 17:24:48.198412 kernel: ACPI: Core revision 20230628 Mar 17 17:24:48.198430 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Mar 17 17:24:48.198447 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:24:48.198464 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:24:48.198481 kernel: landlock: Up and running. Mar 17 17:24:48.198503 kernel: SELinux: Initializing. Mar 17 17:24:48.198520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:24:48.198538 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:24:48.198555 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:24:48.198572 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 17 17:24:48.198589 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:24:48.198607 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:24:48.198624 kernel: Platform MSI: ITS@0x10080000 domain created Mar 17 17:24:48.198645 kernel: PCI/MSI: ITS@0x10080000 domain created Mar 17 17:24:48.198663 kernel: Remapping and enabling EFI services. Mar 17 17:24:48.198681 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:24:48.198698 kernel: Detected PIPT I-cache on CPU1 Mar 17 17:24:48.198715 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Mar 17 17:24:48.198732 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Mar 17 17:24:48.198749 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Mar 17 17:24:48.198766 kernel: smp: Brought up 1 node, 2 CPUs Mar 17 17:24:48.198783 kernel: SMP: Total of 2 processors activated. 
Mar 17 17:24:48.198800 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 17:24:48.198821 kernel: CPU features: detected: 32-bit EL1 Support Mar 17 17:24:48.198839 kernel: CPU features: detected: CRC32 instructions Mar 17 17:24:48.198866 kernel: CPU: All CPU(s) started at EL1 Mar 17 17:24:48.198888 kernel: alternatives: applying system-wide alternatives Mar 17 17:24:48.198906 kernel: devtmpfs: initialized Mar 17 17:24:48.198924 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:24:48.198942 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 17 17:24:48.198960 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:24:48.198977 kernel: SMBIOS 3.0.0 present. Mar 17 17:24:48.199000 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Mar 17 17:24:48.199017 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:24:48.199036 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 17:24:48.199054 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 17:24:48.199072 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 17:24:48.199090 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:24:48.199108 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1 Mar 17 17:24:48.199148 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:24:48.199170 kernel: cpuidle: using governor menu Mar 17 17:24:48.199201 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 17 17:24:48.199222 kernel: ASID allocator initialised with 65536 entries Mar 17 17:24:48.199240 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:24:48.199258 kernel: Serial: AMBA PL011 UART driver Mar 17 17:24:48.199276 kernel: Modules: 17424 pages in range for non-PLT usage Mar 17 17:24:48.199295 kernel: Modules: 508944 pages in range for PLT usage Mar 17 17:24:48.199313 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:24:48.199335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:24:48.199353 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 17:24:48.199371 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 17 17:24:48.199389 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:24:48.199407 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:24:48.199425 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 17:24:48.199442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 17 17:24:48.199460 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:24:48.199478 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:24:48.199500 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:24:48.199518 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:24:48.199536 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:24:48.199554 kernel: ACPI: Interpreter enabled Mar 17 17:24:48.199571 kernel: ACPI: Using GIC for interrupt routing Mar 17 17:24:48.199589 kernel: ACPI: MCFG table detected, 1 entries Mar 17 17:24:48.199607 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Mar 17 17:24:48.199904 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:24:48.200114 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 17:24:48.200428 kernel: acpi PNP0A08:00: 
_OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 17:24:48.200629 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Mar 17 17:24:48.200827 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Mar 17 17:24:48.200852 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Mar 17 17:24:48.200871 kernel: acpiphp: Slot [1] registered Mar 17 17:24:48.200889 kernel: acpiphp: Slot [2] registered Mar 17 17:24:48.200907 kernel: acpiphp: Slot [3] registered Mar 17 17:24:48.200931 kernel: acpiphp: Slot [4] registered Mar 17 17:24:48.200949 kernel: acpiphp: Slot [5] registered Mar 17 17:24:48.200967 kernel: acpiphp: Slot [6] registered Mar 17 17:24:48.200984 kernel: acpiphp: Slot [7] registered Mar 17 17:24:48.201002 kernel: acpiphp: Slot [8] registered Mar 17 17:24:48.201020 kernel: acpiphp: Slot [9] registered Mar 17 17:24:48.201037 kernel: acpiphp: Slot [10] registered Mar 17 17:24:48.201055 kernel: acpiphp: Slot [11] registered Mar 17 17:24:48.201073 kernel: acpiphp: Slot [12] registered Mar 17 17:24:48.201091 kernel: acpiphp: Slot [13] registered Mar 17 17:24:48.201160 kernel: acpiphp: Slot [14] registered Mar 17 17:24:48.201179 kernel: acpiphp: Slot [15] registered Mar 17 17:24:48.201196 kernel: acpiphp: Slot [16] registered Mar 17 17:24:48.201214 kernel: acpiphp: Slot [17] registered Mar 17 17:24:48.201232 kernel: acpiphp: Slot [18] registered Mar 17 17:24:48.201249 kernel: acpiphp: Slot [19] registered Mar 17 17:24:48.201267 kernel: acpiphp: Slot [20] registered Mar 17 17:24:48.201285 kernel: acpiphp: Slot [21] registered Mar 17 17:24:48.201303 kernel: acpiphp: Slot [22] registered Mar 17 17:24:48.201326 kernel: acpiphp: Slot [23] registered Mar 17 17:24:48.201344 kernel: acpiphp: Slot [24] registered Mar 17 17:24:48.201362 kernel: acpiphp: Slot [25] registered Mar 17 17:24:48.201380 kernel: acpiphp: Slot [26] registered Mar 17 17:24:48.201398 kernel: acpiphp: Slot [27] 
registered Mar 17 17:24:48.201415 kernel: acpiphp: Slot [28] registered Mar 17 17:24:48.201433 kernel: acpiphp: Slot [29] registered Mar 17 17:24:48.201451 kernel: acpiphp: Slot [30] registered Mar 17 17:24:48.201469 kernel: acpiphp: Slot [31] registered Mar 17 17:24:48.201486 kernel: PCI host bridge to bus 0000:00 Mar 17 17:24:48.201702 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Mar 17 17:24:48.201890 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 17:24:48.202148 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Mar 17 17:24:48.202395 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Mar 17 17:24:48.202705 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Mar 17 17:24:48.202932 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Mar 17 17:24:48.203180 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Mar 17 17:24:48.203403 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 17 17:24:48.203620 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Mar 17 17:24:48.203829 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 17 17:24:48.204051 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 17 17:24:48.206812 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Mar 17 17:24:48.207066 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Mar 17 17:24:48.211767 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Mar 17 17:24:48.211997 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Mar 17 17:24:48.212243 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Mar 17 17:24:48.212449 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Mar 17 17:24:48.212658 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Mar 17 17:24:48.212862 kernel: pci 0000:00:05.0: BAR 0: 
assigned [mem 0x80114000-0x80117fff] Mar 17 17:24:48.213071 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Mar 17 17:24:48.216432 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Mar 17 17:24:48.216633 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 17:24:48.216839 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Mar 17 17:24:48.216868 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 17:24:48.216887 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 17:24:48.216905 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 17:24:48.216924 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 17:24:48.216943 kernel: iommu: Default domain type: Translated Mar 17 17:24:48.216973 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 17:24:48.216992 kernel: efivars: Registered efivars operations Mar 17 17:24:48.217010 kernel: vgaarb: loaded Mar 17 17:24:48.217028 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 17:24:48.217047 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:24:48.217065 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:24:48.217083 kernel: pnp: PnP ACPI init Mar 17 17:24:48.217365 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Mar 17 17:24:48.217402 kernel: pnp: PnP ACPI: found 1 devices Mar 17 17:24:48.217421 kernel: NET: Registered PF_INET protocol family Mar 17 17:24:48.217440 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:24:48.217460 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:24:48.217478 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:24:48.217497 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 17:24:48.217515 kernel: TCP bind 
hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:24:48.217533 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:24:48.217551 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:24:48.217574 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:24:48.217593 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:24:48.217611 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:24:48.217629 kernel: kvm [1]: HYP mode not available Mar 17 17:24:48.217648 kernel: Initialise system trusted keyrings Mar 17 17:24:48.217666 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:24:48.217684 kernel: Key type asymmetric registered Mar 17 17:24:48.217702 kernel: Asymmetric key parser 'x509' registered Mar 17 17:24:48.217720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 17 17:24:48.217743 kernel: io scheduler mq-deadline registered Mar 17 17:24:48.217762 kernel: io scheduler kyber registered Mar 17 17:24:48.217780 kernel: io scheduler bfq registered Mar 17 17:24:48.218007 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Mar 17 17:24:48.218037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 17:24:48.218056 kernel: ACPI: button: Power Button [PWRB] Mar 17 17:24:48.218074 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Mar 17 17:24:48.218092 kernel: ACPI: button: Sleep Button [SLPB] Mar 17 17:24:48.218155 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:24:48.218181 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 17 17:24:48.218397 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Mar 17 17:24:48.218423 kernel: printk: console [ttyS0] disabled Mar 17 17:24:48.218441 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Mar 17 17:24:48.218460 
kernel: printk: console [ttyS0] enabled Mar 17 17:24:48.218478 kernel: printk: bootconsole [uart0] disabled Mar 17 17:24:48.218496 kernel: thunder_xcv, ver 1.0 Mar 17 17:24:48.218514 kernel: thunder_bgx, ver 1.0 Mar 17 17:24:48.218533 kernel: nicpf, ver 1.0 Mar 17 17:24:48.218558 kernel: nicvf, ver 1.0 Mar 17 17:24:48.218774 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 17:24:48.218965 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:24:47 UTC (1742232287) Mar 17 17:24:48.218990 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 17:24:48.219009 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Mar 17 17:24:48.219027 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 17 17:24:48.219046 kernel: watchdog: Hard watchdog permanently disabled Mar 17 17:24:48.219069 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:24:48.219087 kernel: Segment Routing with IPv6 Mar 17 17:24:48.219105 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:24:48.219207 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:24:48.219229 kernel: Key type dns_resolver registered Mar 17 17:24:48.219247 kernel: registered taskstats version 1 Mar 17 17:24:48.219265 kernel: Loading compiled-in X.509 certificates Mar 17 17:24:48.219284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c' Mar 17 17:24:48.219301 kernel: Key type .fscrypt registered Mar 17 17:24:48.219319 kernel: Key type fscrypt-provisioning registered Mar 17 17:24:48.219343 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 17 17:24:48.219362 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:24:48.219379 kernel: ima: No architecture policies found Mar 17 17:24:48.219397 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 17:24:48.219415 kernel: clk: Disabling unused clocks Mar 17 17:24:48.219433 kernel: Freeing unused kernel memory: 39744K Mar 17 17:24:48.219451 kernel: Run /init as init process Mar 17 17:24:48.219469 kernel: with arguments: Mar 17 17:24:48.219486 kernel: /init Mar 17 17:24:48.219509 kernel: with environment: Mar 17 17:24:48.219526 kernel: HOME=/ Mar 17 17:24:48.219544 kernel: TERM=linux Mar 17 17:24:48.219562 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:24:48.219584 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:24:48.219607 systemd[1]: Detected virtualization amazon. Mar 17 17:24:48.219628 systemd[1]: Detected architecture arm64. Mar 17 17:24:48.219652 systemd[1]: Running in initrd. Mar 17 17:24:48.219671 systemd[1]: No hostname configured, using default hostname. Mar 17 17:24:48.219690 systemd[1]: Hostname set to . Mar 17 17:24:48.219710 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:24:48.219730 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:24:48.219751 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:24:48.219771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:24:48.219792 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 17 17:24:48.219817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:24:48.219837 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:24:48.219858 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:24:48.219880 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:24:48.219901 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:24:48.219921 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:24:48.219941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:24:48.219966 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:24:48.219986 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:24:48.220006 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:24:48.220027 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:24:48.220047 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:24:48.220067 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:24:48.220087 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:24:48.220108 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:24:48.220216 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:24:48.220248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:24:48.220270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:24:48.220290 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 17 17:24:48.220311 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:24:48.220333 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:24:48.220353 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:24:48.220375 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:24:48.220395 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:24:48.220421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:24:48.220441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:24:48.220461 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:24:48.220483 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:24:48.220504 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:24:48.220588 systemd-journald[252]: Collecting audit messages is disabled. Mar 17 17:24:48.220644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:24:48.220665 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:24:48.220684 kernel: Bridge firewalling registered Mar 17 17:24:48.220709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:24:48.220729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:24:48.220749 systemd-journald[252]: Journal started Mar 17 17:24:48.220787 systemd-journald[252]: Runtime Journal (/run/log/journal/ec29b43f269b521ecd8718aa07439265) is 8.0M, max 75.3M, 67.3M free. Mar 17 17:24:48.164928 systemd-modules-load[253]: Inserted module 'overlay' Mar 17 17:24:48.226865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 17 17:24:48.204356 systemd-modules-load[253]: Inserted module 'br_netfilter' Mar 17 17:24:48.240211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:24:48.250151 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:24:48.250183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:24:48.265777 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:24:48.282472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:24:48.294533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:24:48.300551 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:24:48.312536 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:24:48.323652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:24:48.334455 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:24:48.349466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:24:48.365355 dracut-cmdline[284]: dracut-dracut-053 Mar 17 17:24:48.372447 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405 Mar 17 17:24:48.443378 systemd-resolved[288]: Positive Trust Anchors: Mar 17 17:24:48.443435 systemd-resolved[288]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:24:48.443498 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:24:48.535142 kernel: SCSI subsystem initialized Mar 17 17:24:48.541148 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:24:48.554162 kernel: iscsi: registered transport (tcp) Mar 17 17:24:48.576156 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:24:48.576232 kernel: QLogic iSCSI HBA Driver Mar 17 17:24:48.678163 kernel: random: crng init done Mar 17 17:24:48.678574 systemd-resolved[288]: Defaulting to hostname 'linux'. Mar 17 17:24:48.680321 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:24:48.691437 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:24:48.713170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:24:48.736490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:24:48.767745 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 17:24:48.767821 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:24:48.767849 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:24:48.835183 kernel: raid6: neonx8 gen() 6769 MB/s
Mar 17 17:24:48.852153 kernel: raid6: neonx4 gen() 6591 MB/s
Mar 17 17:24:48.869151 kernel: raid6: neonx2 gen() 5478 MB/s
Mar 17 17:24:48.886154 kernel: raid6: neonx1 gen() 3960 MB/s
Mar 17 17:24:48.903151 kernel: raid6: int64x8 gen() 3827 MB/s
Mar 17 17:24:48.920153 kernel: raid6: int64x4 gen() 3735 MB/s
Mar 17 17:24:48.937153 kernel: raid6: int64x2 gen() 3624 MB/s
Mar 17 17:24:48.954985 kernel: raid6: int64x1 gen() 2765 MB/s
Mar 17 17:24:48.955019 kernel: raid6: using algorithm neonx8 gen() 6769 MB/s
Mar 17 17:24:48.972969 kernel: raid6: .... xor() 4822 MB/s, rmw enabled
Mar 17 17:24:48.973022 kernel: raid6: using neon recovery algorithm
Mar 17 17:24:48.981484 kernel: xor: measuring software checksum speed
Mar 17 17:24:48.981542 kernel: 8regs : 10971 MB/sec
Mar 17 17:24:48.982612 kernel: 32regs : 11927 MB/sec
Mar 17 17:24:48.983805 kernel: arm64_neon : 9558 MB/sec
Mar 17 17:24:48.983836 kernel: xor: using function: 32regs (11927 MB/sec)
Mar 17 17:24:49.067169 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:24:49.085865 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:49.095516 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:49.142378 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Mar 17 17:24:49.151585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:49.169353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:24:49.206657 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Mar 17 17:24:49.262376 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:49.272460 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:49.396419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:49.408500 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:24:49.443652 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:49.448766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:49.454026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:49.456940 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:49.474614 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:24:49.499905 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:49.589539 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:24:49.589608 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:24:49.592064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:49.592331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:49.597246 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:49.599400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:49.599677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:49.604592 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:49.622437 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:24:49.622687 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:24:49.622922 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c6:0f:21:13:55
Mar 17 17:24:49.625010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:49.627849 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:24:49.671351 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:24:49.671414 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:24:49.681145 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:24:49.685835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:49.693646 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:24:49.693684 kernel: GPT:9289727 != 16777215
Mar 17 17:24:49.693709 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:24:49.694594 kernel: GPT:9289727 != 16777215
Mar 17 17:24:49.696264 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:24:49.697784 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:49.710566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:49.767143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:49.783171 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Mar 17 17:24:49.823459 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (515)
Mar 17 17:24:49.858561 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:24:49.922990 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:24:49.940436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:24:49.954004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:49.960468 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:49.971523 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:24:49.987700 disk-uuid[663]: Primary Header is updated.
Mar 17 17:24:49.987700 disk-uuid[663]: Secondary Entries is updated.
Mar 17 17:24:49.987700 disk-uuid[663]: Secondary Header is updated.
Mar 17 17:24:50.001168 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:50.007186 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:51.017166 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:51.019393 disk-uuid[664]: The operation has completed successfully.
Mar 17 17:24:51.199413 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:24:51.201452 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:24:51.256440 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:24:51.273100 sh[922]: Success
Mar 17 17:24:51.298176 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:24:51.408338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:24:51.418399 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:24:51.419519 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:24:51.452159 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:24:51.452221 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:51.455391 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:24:51.456710 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:24:51.456743 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:24:51.563155 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:24:51.602200 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:24:51.605599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:24:51.620465 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:24:51.630385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:24:51.662555 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:51.662635 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:51.664158 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:51.671326 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:51.686021 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:24:51.691053 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:51.700934 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:24:51.713490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:24:51.803819 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:51.820478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:51.872793 systemd-networkd[1115]: lo: Link UP
Mar 17 17:24:51.872816 systemd-networkd[1115]: lo: Gained carrier
Mar 17 17:24:51.876652 systemd-networkd[1115]: Enumeration completed
Mar 17 17:24:51.878239 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:51.880683 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:51.880690 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:51.887634 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:51.893411 systemd-networkd[1115]: eth0: Link UP
Mar 17 17:24:51.893419 systemd-networkd[1115]: eth0: Gained carrier
Mar 17 17:24:51.893437 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:51.934190 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.21.92/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:24:52.086899 ignition[1035]: Ignition 2.20.0
Mar 17 17:24:52.086922 ignition[1035]: Stage: fetch-offline
Mar 17 17:24:52.087381 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:52.087405 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:52.090200 ignition[1035]: Ignition finished successfully
Mar 17 17:24:52.098186 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:24:52.108479 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:24:52.134731 ignition[1124]: Ignition 2.20.0
Mar 17 17:24:52.134754 ignition[1124]: Stage: fetch
Mar 17 17:24:52.136098 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:52.136146 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:52.136331 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:52.171291 ignition[1124]: PUT result: OK
Mar 17 17:24:52.174754 ignition[1124]: parsed url from cmdline: ""
Mar 17 17:24:52.174770 ignition[1124]: no config URL provided
Mar 17 17:24:52.174807 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:24:52.174846 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:24:52.176275 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:52.178829 ignition[1124]: PUT result: OK
Mar 17 17:24:52.178908 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:24:52.180899 ignition[1124]: GET result: OK
Mar 17 17:24:52.181030 ignition[1124]: parsing config with SHA512: f9fcc254f5a0448cc09f75b8d9575600557b1cb5bcf5f79710d7a25a74b3083bc21dc65dd9bf26c8cfc45c634da91b1b9b3b5f26a77fc1c68cf6ebbcedb06448
Mar 17 17:24:52.190510 unknown[1124]: fetched base config from "system"
Mar 17 17:24:52.190526 unknown[1124]: fetched base config from "system"
Mar 17 17:24:52.190540 unknown[1124]: fetched user config from "aws"
Mar 17 17:24:52.191188 ignition[1124]: fetch: fetch complete
Mar 17 17:24:52.191200 ignition[1124]: fetch: fetch passed
Mar 17 17:24:52.191290 ignition[1124]: Ignition finished successfully
Mar 17 17:24:52.205794 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:24:52.222534 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:24:52.246577 ignition[1130]: Ignition 2.20.0
Mar 17 17:24:52.247070 ignition[1130]: Stage: kargs
Mar 17 17:24:52.247743 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:52.247768 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:52.247945 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:52.251158 ignition[1130]: PUT result: OK
Mar 17 17:24:52.261273 ignition[1130]: kargs: kargs passed
Mar 17 17:24:52.261600 ignition[1130]: Ignition finished successfully
Mar 17 17:24:52.266544 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:24:52.277421 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:24:52.302234 ignition[1136]: Ignition 2.20.0
Mar 17 17:24:52.302263 ignition[1136]: Stage: disks
Mar 17 17:24:52.303832 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:52.303858 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:52.304409 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:52.307314 ignition[1136]: PUT result: OK
Mar 17 17:24:52.315615 ignition[1136]: disks: disks passed
Mar 17 17:24:52.315721 ignition[1136]: Ignition finished successfully
Mar 17 17:24:52.320470 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:24:52.323356 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:24:52.325647 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:24:52.334261 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:24:52.336257 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:24:52.339899 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:24:52.356500 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:24:52.397922 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:24:52.406479 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:24:52.417372 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:24:52.511151 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:24:52.513051 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:24:52.516300 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:24:52.531351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:52.538402 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:24:52.540996 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:24:52.541093 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:24:52.541168 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:24:52.565485 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163)
Mar 17 17:24:52.568994 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:52.569029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:52.570234 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:52.576077 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:24:52.583166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:52.587448 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:24:52.595536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:53.100736 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:24:53.120955 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:24:53.129169 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:24:53.137144 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:24:53.289263 systemd-networkd[1115]: eth0: Gained IPv6LL
Mar 17 17:24:53.453109 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:24:53.464314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:24:53.471442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:24:53.485804 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:24:53.488004 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:53.533834 ignition[1275]: INFO : Ignition 2.20.0
Mar 17 17:24:53.535739 ignition[1275]: INFO : Stage: mount
Mar 17 17:24:53.537428 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:24:53.541435 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:53.543363 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:53.545587 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:53.548836 ignition[1275]: INFO : PUT result: OK
Mar 17 17:24:53.555898 ignition[1275]: INFO : mount: mount passed
Mar 17 17:24:53.557503 ignition[1275]: INFO : Ignition finished successfully
Mar 17 17:24:53.562177 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:24:53.571308 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:24:53.602434 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:53.625163 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1287)
Mar 17 17:24:53.628886 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:53.628928 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:53.628954 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:53.635152 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:53.638472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:53.672189 ignition[1304]: INFO : Ignition 2.20.0
Mar 17 17:24:53.672189 ignition[1304]: INFO : Stage: files
Mar 17 17:24:53.675463 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:53.675463 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:53.675463 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:53.682207 ignition[1304]: INFO : PUT result: OK
Mar 17 17:24:53.686746 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:24:53.690829 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:24:53.690829 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:24:53.724660 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:24:53.727493 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:24:53.730372 unknown[1304]: wrote ssh authorized keys file for user: core
Mar 17 17:24:53.732611 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:24:53.736846 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:24:53.740460 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Mar 17 17:24:53.844800 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:24:54.055710 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Mar 17 17:24:54.055710 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:24:54.062935 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:24:54.389071 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:24:54.524176 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:24:54.524176 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:24:54.532693 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Mar 17 17:24:54.841756 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:24:55.220474 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Mar 17 17:24:55.220474 ignition[1304]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:24:55.228259 ignition[1304]: INFO : files: files passed
Mar 17 17:24:55.228259 ignition[1304]: INFO : Ignition finished successfully
Mar 17 17:24:55.236605 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:24:55.275179 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:24:55.283028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:24:55.291390 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:24:55.291587 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:24:55.324963 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:55.324963 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:55.334598 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:24:55.342222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:24:55.347285 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:24:55.371444 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:24:55.424766 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:24:55.424966 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:24:55.429042 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:24:55.431233 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:24:55.435928 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:24:55.449566 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:24:55.479186 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:24:55.491486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:24:55.520485 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:24:55.522339 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:24:55.527069 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:55.531569 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:55.533819 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:24:55.536054 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:24:55.536176 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:24:55.545025 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:24:55.547081 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:24:55.549637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:24:55.549940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:24:55.550551 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:24:55.550864 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:24:55.568274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:55.570582 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:24:55.574661 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:24:55.576583 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:24:55.578175 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:24:55.578276 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:55.580574 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:55.586209 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:55.589769 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:24:55.589836 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:55.604256 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:24:55.604365 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:55.610444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:24:55.610534 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:24:55.612962 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:24:55.613041 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:24:55.628291 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:24:55.631370 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:24:55.631488 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:55.645292 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:24:55.649337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:24:55.649461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:55.662860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:24:55.662979 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:55.680853 ignition[1358]: INFO : Ignition 2.20.0
Mar 17 17:24:55.680853 ignition[1358]: INFO : Stage: umount
Mar 17 17:24:55.685490 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:55.685490 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:55.685490 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:55.693172 ignition[1358]: INFO : PUT result: OK
Mar 17 17:24:55.698328 ignition[1358]: INFO : umount: umount passed
Mar 17 17:24:55.700039 ignition[1358]: INFO : Ignition finished successfully
Mar 17 17:24:55.703675 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:24:55.705525 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:24:55.709258 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:24:55.709438 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:24:55.714322 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:24:55.714779 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:24:55.721018 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:24:55.721512 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:24:55.729513 systemd[1]: Stopped target network.target - Network.
Mar 17 17:24:55.733279 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:24:55.733484 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:24:55.739786 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:24:55.744902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:24:55.748253 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:55.750630 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:24:55.752944 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:24:55.760535 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:24:55.760617 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:55.762545 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:24:55.762617 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:55.764629 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:24:55.764717 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:24:55.766675 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:24:55.766753 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:55.769079 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:24:55.772301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:55.778276 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Mar 17 17:24:55.781894 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:24:55.786720 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:24:55.789849 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:24:55.807109 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:24:55.809023 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:55.831357 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:24:55.835220 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:24:55.835348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:55.840451 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:55.846040 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:24:55.846308 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:55.873578 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:24:55.875725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:24:55.880894 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:24:55.881490 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:55.904181 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:24:55.904566 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:24:55.912640 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:24:55.912761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:55.918591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:24:55.918665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:55.920787 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:24:55.920878 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:55.923084 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:24:55.923184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:55.925371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:55.925455 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:55.928916 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:24:55.929002 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:24:55.949597 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:24:55.962918 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:24:55.964463 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:55.966723 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:24:55.966809 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:55.968923 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:24:55.969000 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:55.971386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:24:55.971464 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:55.975322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:55.975421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:55.983436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:24:55.984836 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:24:55.991804 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:24:56.011340 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:24:56.029306 systemd[1]: Switching root.
Mar 17 17:24:56.090998 systemd-journald[252]: Journal stopped
Mar 17 17:24:58.565984 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:24:58.566114 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:24:58.566188 kernel: SELinux: policy capability open_perms=1
Mar 17 17:24:58.566220 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:24:58.566247 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:24:58.566277 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:24:58.566319 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:24:58.566349 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:24:58.566378 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:24:58.566408 kernel: audit: type=1403 audit(1742232296.733:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:24:58.566448 systemd[1]: Successfully loaded SELinux policy in 74.751ms.
Mar 17 17:24:58.566496 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.334ms.
Mar 17 17:24:58.566531 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:58.566561 systemd[1]: Detected virtualization amazon.
Mar 17 17:24:58.566591 systemd[1]: Detected architecture arm64.
Mar 17 17:24:58.566626 systemd[1]: Detected first boot.
Mar 17 17:24:58.566657 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:24:58.566689 zram_generator::config[1398]: No configuration found.
Mar 17 17:24:58.566724 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:24:58.566757 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:24:58.566789 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:24:58.566820 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:24:58.566850 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:24:58.566884 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:24:58.566918 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:24:58.566949 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:24:58.566981 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:24:58.567012 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:24:58.567044 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:24:58.567076 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:24:58.567105 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:58.569290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:58.569341 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:24:58.569374 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:24:58.569406 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:24:58.569442 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:58.569474 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:24:58.569505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:58.569537 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:24:58.569568 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:24:58.569604 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:24:58.569634 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:24:58.569663 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:58.569696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:58.569725 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:58.569757 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:58.569791 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:24:58.569824 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:24:58.569858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:58.569891 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:58.569920 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:58.569953 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:24:58.569985 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:24:58.570017 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:24:58.570060 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:24:58.570093 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:24:58.570160 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:24:58.570201 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:24:58.570232 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:24:58.570262 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:24:58.570291 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:24:58.570321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:24:58.572358 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:58.572394 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:24:58.572427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:24:58.572462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:24:58.572493 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:24:58.572524 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:24:58.572553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:24:58.572586 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:24:58.572615 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:24:58.572644 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:24:58.572673 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:24:58.572704 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:24:58.572739 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:58.572768 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:58.572797 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:24:58.572826 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:24:58.572854 kernel: fuse: init (API version 7.39)
Mar 17 17:24:58.572882 kernel: loop: module loaded
Mar 17 17:24:58.572910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:58.572941 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:24:58.572982 systemd[1]: Stopped verity-setup.service.
Mar 17 17:24:58.573015 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:24:58.573066 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:24:58.573101 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:24:58.573651 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:24:58.573691 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:24:58.573729 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:24:58.573760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:58.573789 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:24:58.573818 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:24:58.573847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:24:58.573879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:24:58.573908 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:24:58.573938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:24:58.573971 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:24:58.574001 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:24:58.574029 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:24:58.574058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:24:58.574157 systemd-journald[1479]: Collecting audit messages is disabled.
Mar 17 17:24:58.574219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:58.574255 kernel: ACPI: bus type drm_connector registered
Mar 17 17:24:58.574288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:24:58.574318 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:24:58.574347 systemd-journald[1479]: Journal started
Mar 17 17:24:58.574395 systemd-journald[1479]: Runtime Journal (/run/log/journal/ec29b43f269b521ecd8718aa07439265) is 8.0M, max 75.3M, 67.3M free.
Mar 17 17:24:57.976512 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:24:58.030631 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:24:58.031480 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:24:58.588227 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:24:58.606422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:24:58.606529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:24:58.620160 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:58.628177 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:58.630257 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:24:58.631206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:24:58.635362 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:24:58.638259 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:24:58.641538 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:24:58.655346 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:24:58.687587 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:24:58.687708 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:24:58.692085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:24:58.708818 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:24:58.714813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:24:58.719316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:24:58.729559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:24:58.735594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:24:58.737902 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:24:58.748486 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:24:58.756435 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:24:58.763407 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:24:58.768199 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:58.771310 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:24:58.804746 systemd-journald[1479]: Time spent on flushing to /var/log/journal/ec29b43f269b521ecd8718aa07439265 is 86.702ms for 911 entries.
Mar 17 17:24:58.804746 systemd-journald[1479]: System Journal (/var/log/journal/ec29b43f269b521ecd8718aa07439265) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:24:58.917480 systemd-journald[1479]: Received client request to flush runtime journal.
Mar 17 17:24:58.919185 kernel: loop0: detected capacity change from 0 to 201592
Mar 17 17:24:58.919303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:24:58.818793 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:24:58.823476 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:24:58.836391 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:24:58.930228 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:24:58.946358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:24:58.951009 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:24:58.964413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:58.968267 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:24:58.984204 kernel: loop1: detected capacity change from 0 to 116808
Mar 17 17:24:58.985280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:59.001491 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:24:59.038577 udevadm[1547]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:24:59.069799 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 17 17:24:59.069842 systemd-tmpfiles[1543]: ACLs are not supported, ignoring.
Mar 17 17:24:59.085632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:59.103195 kernel: loop2: detected capacity change from 0 to 113536
Mar 17 17:24:59.222187 kernel: loop3: detected capacity change from 0 to 53784
Mar 17 17:24:59.266005 kernel: loop4: detected capacity change from 0 to 201592
Mar 17 17:24:59.311216 kernel: loop5: detected capacity change from 0 to 116808
Mar 17 17:24:59.327587 kernel: loop6: detected capacity change from 0 to 113536
Mar 17 17:24:59.358641 kernel: loop7: detected capacity change from 0 to 53784
Mar 17 17:24:59.376428 (sd-merge)[1553]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:24:59.377489 (sd-merge)[1553]: Merged extensions into '/usr'.
Mar 17 17:24:59.389834 systemd[1]: Reloading requested from client PID 1531 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:24:59.389867 systemd[1]: Reloading...
Mar 17 17:24:59.568178 zram_generator::config[1579]: No configuration found.
Mar 17 17:24:59.837936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:24:59.963632 systemd[1]: Reloading finished in 572 ms.
Mar 17 17:25:00.008303 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:25:00.012538 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:25:00.023556 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:25:00.031514 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:25:00.039040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:25:00.064274 systemd[1]: Reloading requested from client PID 1631 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:25:00.064310 systemd[1]: Reloading...
Mar 17 17:25:00.118697 systemd-udevd[1633]: Using default interface naming scheme 'v255'.
Mar 17 17:25:00.123935 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:25:00.124637 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:25:00.131199 systemd-tmpfiles[1632]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:25:00.131735 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Mar 17 17:25:00.131874 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Mar 17 17:25:00.138580 systemd-tmpfiles[1632]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:25:00.138602 systemd-tmpfiles[1632]: Skipping /boot
Mar 17 17:25:00.183951 systemd-tmpfiles[1632]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:25:00.184178 systemd-tmpfiles[1632]: Skipping /boot
Mar 17 17:25:00.245767 zram_generator::config[1658]: No configuration found.
Mar 17 17:25:00.500428 (udev-worker)[1674]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:25:00.573213 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:25:00.709176 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1695)
Mar 17 17:25:00.719806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:25:00.873893 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:25:00.874970 systemd[1]: Reloading finished in 810 ms.
Mar 17 17:25:00.912608 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:25:00.917793 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:25:00.930157 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:25:00.976972 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:25:01.021854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:25:01.035596 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:25:01.042467 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:25:01.047603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:25:01.050074 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:25:01.066465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:25:01.072484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:25:01.079474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:25:01.087949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:25:01.092602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:25:01.096495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:25:01.106504 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:25:01.114236 lvm[1830]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:25:01.115090 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:25:01.129467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:25:01.132309 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:25:01.150589 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:25:01.156689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:25:01.163327 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:25:01.165982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:25:01.166356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:25:01.195606 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:25:01.198233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:25:01.202049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:25:01.202776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:25:01.211524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:25:01.217234 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:25:01.237330 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:25:01.240379 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:25:01.250493 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:25:01.253303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:25:01.253694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:25:01.257616 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:25:01.286406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:25:01.291229 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:25:01.311249 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:25:01.321199 lvm[1860]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:25:01.322570 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:25:01.369420 augenrules[1875]: No rules
Mar 17 17:25:01.372062 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:25:01.372500 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:25:01.376712 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:25:01.401381 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:25:01.434298 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:25:01.450777 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:25:01.453749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:25:01.467241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:25:01.563964 systemd-resolved[1844]: Positive Trust Anchors:
Mar 17 17:25:01.564026 systemd-resolved[1844]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:25:01.564093 systemd-resolved[1844]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:25:01.567856 systemd-networkd[1843]: lo: Link UP
Mar 17 17:25:01.568362 systemd-networkd[1843]: lo: Gained carrier
Mar 17 17:25:01.571243 systemd-networkd[1843]: Enumeration completed
Mar 17 17:25:01.571613 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:25:01.573924 systemd-networkd[1843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:25:01.573940 systemd-networkd[1843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:25:01.576262 systemd-networkd[1843]: eth0: Link UP
Mar 17 17:25:01.576602 systemd-resolved[1844]: Defaulting to hostname 'linux'.
Mar 17 17:25:01.576838 systemd-networkd[1843]: eth0: Gained carrier
Mar 17 17:25:01.576957 systemd-networkd[1843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:25:01.586603 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:25:01.589184 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:25:01.591493 systemd[1]: Reached target network.target - Network.
Mar 17 17:25:01.593265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:25:01.595142 systemd-networkd[1843]: eth0: DHCPv4 address 172.31.21.92/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:25:01.595536 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:25:01.597737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:25:01.600190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:25:01.602902 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:25:01.605352 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:25:01.606459 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:25:01.607446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:25:01.607489 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:25:01.607746 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:25:01.613157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:25:01.617718 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:25:01.629158 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:25:01.632726 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:25:01.635059 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:25:01.636996 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:25:01.638938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:25:01.638990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:25:01.652339 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:25:01.657033 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:25:01.669598 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:25:01.675108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:25:01.685466 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:25:01.689263 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:25:01.692483 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:25:01.699202 jq[1900]: false
Mar 17 17:25:01.706579 systemd[1]: Started ntpd.service - Network Time Service.
Mar 17 17:25:01.713364 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:25:01.719378 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 17 17:25:01.724975 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:25:01.735497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:25:01.746403 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:25:01.751319 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:25:01.752313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:25:01.755465 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:25:01.762146 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:25:01.769834 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:25:01.772203 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:25:01.780252 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:25:01.783074 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:25:01.828733 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:25:01.831258 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:25:01.843869 dbus-daemon[1899]: [system] SELinux support is enabled
Mar 17 17:25:01.853473 dbus-daemon[1899]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1843 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 17:25:01.856694 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:25:01.869852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:25:01.869967 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:25:01.873478 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:25:01.876099 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 17 17:25:01.873532 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:25:01.891468 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 17 17:25:01.913716 tar[1927]: linux-arm64/LICENSE
Mar 17 17:25:01.915448 tar[1927]: linux-arm64/helm
Mar 17 17:25:01.947659 jq[1912]: true
Mar 17 17:25:01.971598 extend-filesystems[1901]: Found loop4
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found loop5
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found loop6
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found loop7
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p1
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p2
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p3
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found usr
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p4
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p6
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p7
Mar 17 17:25:01.978761 extend-filesystems[1901]: Found nvme0n1p9
Mar 17 17:25:02.025739 extend-filesystems[1901]: Checking size of /dev/nvme0n1p9
Mar 17 17:25:02.005953 (ntainerd)[1930]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:25:02.014636 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 17 17:25:02.051424 update_engine[1911]: I20250317 17:25:02.050735  1911 main.cc:92] Flatcar Update Engine starting
Mar 17 17:25:02.059745 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:25:02.071290 update_engine[1911]: I20250317 17:25:02.070151 1911 update_check_scheduler.cc:74] Next update check in 4m6s
Mar 17 17:25:02.072421 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:25:02.076304 jq[1939]: true
Mar 17 17:25:02.095199 coreos-metadata[1898]: Mar 17 17:25:02.093 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:25:02.103192 coreos-metadata[1898]: Mar 17 17:25:02.100 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 17 17:25:02.104513 coreos-metadata[1898]: Mar 17 17:25:02.104 INFO Fetch successful
Mar 17 17:25:02.104643 coreos-metadata[1898]: Mar 17 17:25:02.104 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 17 17:25:02.105274 coreos-metadata[1898]: Mar 17 17:25:02.105 INFO Fetch successful
Mar 17 17:25:02.105274 coreos-metadata[1898]: Mar 17 17:25:02.105 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 17 17:25:02.107849 coreos-metadata[1898]: Mar 17 17:25:02.107 INFO Fetch successful
Mar 17 17:25:02.107849 coreos-metadata[1898]: Mar 17 17:25:02.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 17 17:25:02.109457 coreos-metadata[1898]: Mar 17 17:25:02.109 INFO Fetch successful
Mar 17 17:25:02.109457 coreos-metadata[1898]: Mar 17 17:25:02.109 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 17 17:25:02.112682 coreos-metadata[1898]: Mar 17 17:25:02.111 INFO Fetch failed with 404: resource not found
Mar 17 17:25:02.112682 coreos-metadata[1898]: Mar 17 17:25:02.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 17 17:25:02.115674 coreos-metadata[1898]: Mar 17 17:25:02.115 INFO Fetch successful
Mar 17 17:25:02.115674 coreos-metadata[1898]: Mar 17 17:25:02.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: ----------------------------------------------------
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: corporation. Support and training for ntp-4 are
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: available at https://www.nwtime.org/support
Mar 17 17:25:02.121253 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: ----------------------------------------------------
Mar 17 17:25:02.117523 ntpd[1903]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting
Mar 17 17:25:02.117588 ntpd[1903]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:25:02.122642 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: proto: precision = 0.096 usec (-23)
Mar 17 17:25:02.117609 ntpd[1903]: ----------------------------------------------------
Mar 17 17:25:02.117628 ntpd[1903]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:25:02.117647 ntpd[1903]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:25:02.117664 ntpd[1903]: corporation. Support and training for ntp-4 are
Mar 17 17:25:02.117683 ntpd[1903]: available at https://www.nwtime.org/support
Mar 17 17:25:02.117701 ntpd[1903]: ----------------------------------------------------
Mar 17 17:25:02.123099 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: basedate set to 2025-03-05
Mar 17 17:25:02.123099 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:25:02.122398 ntpd[1903]: proto: precision = 0.096 usec (-23)
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.124 INFO Fetch successful
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.124 INFO Fetch successful
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.132 INFO Fetch successful
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.132 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 17 17:25:02.135516 coreos-metadata[1898]: Mar 17 17:25:02.133 INFO Fetch successful
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listen normally on 3 eth0 172.31.21.92:123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listen normally on 4 lo [::1]:123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: bind(21) AF_INET6 fe80::4c6:fff:fe21:1355%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: unable to create socket on eth0 (5) for fe80::4c6:fff:fe21:1355%2#123
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: failed to init interface for address fe80::4c6:fff:fe21:1355%2
Mar 17 17:25:02.135850 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:25:02.123023 ntpd[1903]: basedate set to 2025-03-05
Mar 17 17:25:02.123049 ntpd[1903]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:25:02.128355 ntpd[1903]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:25:02.128442 ntpd[1903]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:25:02.130376 ntpd[1903]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:25:02.131933 ntpd[1903]: Listen normally on 3 eth0 172.31.21.92:123
Mar 17 17:25:02.132034 ntpd[1903]: Listen normally on 4 lo [::1]:123
Mar 17 17:25:02.132141 ntpd[1903]: bind(21) AF_INET6 fe80::4c6:fff:fe21:1355%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:25:02.132184 ntpd[1903]: unable to create socket on eth0 (5) for fe80::4c6:fff:fe21:1355%2#123
Mar 17 17:25:02.133181 ntpd[1903]: failed to init interface for address fe80::4c6:fff:fe21:1355%2
Mar 17 17:25:02.133261 ntpd[1903]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:25:02.140001 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:02.152291 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:02.152291 ntpd[1903]: 17 Mar 17:25:02 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:02.140064 ntpd[1903]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:25:02.190306 extend-filesystems[1901]: Resized partition /dev/nvme0n1p9
Mar 17 17:25:02.201805 extend-filesystems[1962]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:25:02.236267 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 17 17:25:02.241913 systemd-logind[1909]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:25:02.241958 systemd-logind[1909]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 17 17:25:02.253245 systemd-logind[1909]: New seat seat0.
Mar 17 17:25:02.258222 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:25:02.285766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:25:02.303035 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:25:02.305885 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:25:02.364183 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 17 17:25:02.393360 extend-filesystems[1962]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 17 17:25:02.393360 extend-filesystems[1962]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:25:02.393360 extend-filesystems[1962]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 17:25:02.413334 extend-filesystems[1901]: Resized filesystem in /dev/nvme0n1p9
Mar 17 17:25:02.422319 bash[1975]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:25:02.402441 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:25:02.402801 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:25:02.409646 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:25:02.445738 systemd[1]: Starting sshkeys.service...
Mar 17 17:25:02.461302 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 17:25:02.461551 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 17 17:25:02.465329 dbus-daemon[1899]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 17:25:02.489782 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 17 17:25:02.493587 locksmithd[1945]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:25:02.519454 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:25:02.527655 polkitd[1994]: Started polkitd version 121
Mar 17 17:25:02.619423 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1682)
Mar 17 17:25:02.611888 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:25:02.540827 polkitd[1994]: Loading rules from directory /etc/polkit-1/rules.d
Mar 17 17:25:02.614831 systemd[1]: Started polkit.service - Authorization Manager.
Mar 17 17:25:02.540941 polkitd[1994]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 17 17:25:02.543200 polkitd[1994]: Finished loading, compiling and executing 2 rules
Mar 17 17:25:02.544006 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 17 17:25:02.545161 polkitd[1994]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 17:25:02.640537 systemd-hostnamed[1929]: Hostname set to (transient)
Mar 17 17:25:02.640715 systemd-resolved[1844]: System hostname changed to 'ip-172-31-21-92'.
Mar 17 17:25:02.834752 containerd[1930]: time="2025-03-17T17:25:02.834622236Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:25:02.862240 coreos-metadata[2003]: Mar 17 17:25:02.861 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:25:02.867504 coreos-metadata[2003]: Mar 17 17:25:02.866 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 17 17:25:02.867504 coreos-metadata[2003]: Mar 17 17:25:02.867 INFO Fetch successful
Mar 17 17:25:02.867693 coreos-metadata[2003]: Mar 17 17:25:02.867 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 17 17:25:02.873182 coreos-metadata[2003]: Mar 17 17:25:02.872 INFO Fetch successful
Mar 17 17:25:02.879339 unknown[2003]: wrote ssh authorized keys file for user: core
Mar 17 17:25:02.957146 update-ssh-keys[2087]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:25:02.965218 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:25:02.976811 systemd[1]: Finished sshkeys.service.
Mar 17 17:25:03.017342 systemd-networkd[1843]: eth0: Gained IPv6LL
Mar 17 17:25:03.030806 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:25:03.037176 containerd[1930]: time="2025-03-17T17:25:03.033831849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.037544 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:25:03.043713 containerd[1930]: time="2025-03-17T17:25:03.043644681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:03.044264 containerd[1930]: time="2025-03-17T17:25:03.044224701Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:25:03.044525 containerd[1930]: time="2025-03-17T17:25:03.044492793Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:25:03.044943 containerd[1930]: time="2025-03-17T17:25:03.044907033Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:25:03.045937 containerd[1930]: time="2025-03-17T17:25:03.045642057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046308225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046353597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046661337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046693437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046724673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046747905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.047579 containerd[1930]: time="2025-03-17T17:25:03.046914501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.048707 containerd[1930]: time="2025-03-17T17:25:03.048643173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:25:03.049372 containerd[1930]: time="2025-03-17T17:25:03.049295361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:25:03.049535 containerd[1930]: time="2025-03-17T17:25:03.049506129Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:25:03.049733 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 17 17:25:03.052618 containerd[1930]: time="2025-03-17T17:25:03.052316841Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:25:03.053904 containerd[1930]: time="2025-03-17T17:25:03.052970241Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:25:03.058691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069280701Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069389409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069430005Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069467397Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069500313Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:25:03.071391 containerd[1930]: time="2025-03-17T17:25:03.069791157Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:25:03.071700 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:25:03.091356 containerd[1930]: time="2025-03-17T17:25:03.087557085Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:25:03.105150 containerd[1930]: time="2025-03-17T17:25:03.103357773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:25:03.105150 containerd[1930]: time="2025-03-17T17:25:03.103482429Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:25:03.105150 containerd[1930]: time="2025-03-17T17:25:03.103521489Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:25:03.106081 containerd[1930]: time="2025-03-17T17:25:03.105990117Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106209 containerd[1930]: time="2025-03-17T17:25:03.106094853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106209 containerd[1930]: time="2025-03-17T17:25:03.106169001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106298 containerd[1930]: time="2025-03-17T17:25:03.106229805Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106298 containerd[1930]: time="2025-03-17T17:25:03.106267077Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106381 containerd[1930]: time="2025-03-17T17:25:03.106325517Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106381 containerd[1930]: time="2025-03-17T17:25:03.106358781Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106471 containerd[1930]: time="2025-03-17T17:25:03.106410645Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:25:03.106528 containerd[1930]: time="2025-03-17T17:25:03.106458285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.106528 containerd[1930]: time="2025-03-17T17:25:03.106516041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.108572 containerd[1930]: time="2025-03-17T17:25:03.106546569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.108718 containerd[1930]: time="2025-03-17T17:25:03.108611169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110231085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110327037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110398773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110436189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110500893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110563617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.110699 containerd[1930]: time="2025-03-17T17:25:03.110598645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118283049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118421253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118460889Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118576629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118639809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118670793Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.118910901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120336645Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120403509Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120442605Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120468369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120501585Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120526713Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:25:03.126160 containerd[1930]: time="2025-03-17T17:25:03.120552093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:25:03.128932 containerd[1930]: time="2025-03-17T17:25:03.121095237Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 17:25:03.133923 containerd[1930]: time="2025-03-17T17:25:03.129293505Z" level=info msg="Connect containerd service"
Mar 17 17:25:03.133923 containerd[1930]: time="2025-03-17T17:25:03.129447693Z" level=info msg="using legacy CRI server"
Mar 17 17:25:03.133923 containerd[1930]: time="2025-03-17T17:25:03.129469737Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 17 17:25:03.133923 containerd[1930]: time="2025-03-17T17:25:03.133488645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 17:25:03.139883 containerd[1930]: time="2025-03-17T17:25:03.139823001Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 17:25:03.143210 containerd[1930]: time="2025-03-17T17:25:03.142222677Z" level=info msg="Start subscribing containerd event"
Mar 17
17:25:03.144169 containerd[1930]: time="2025-03-17T17:25:03.143873721Z" level=info msg="Start recovering state" Mar 17 17:25:03.145930 containerd[1930]: time="2025-03-17T17:25:03.144991653Z" level=info msg="Start event monitor" Mar 17 17:25:03.145930 containerd[1930]: time="2025-03-17T17:25:03.145611189Z" level=info msg="Start snapshots syncer" Mar 17 17:25:03.150038 containerd[1930]: time="2025-03-17T17:25:03.146109333Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:25:03.150038 containerd[1930]: time="2025-03-17T17:25:03.148740537Z" level=info msg="Start streaming server" Mar 17 17:25:03.150038 containerd[1930]: time="2025-03-17T17:25:03.147468225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:25:03.152769 containerd[1930]: time="2025-03-17T17:25:03.152199201Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:25:03.161899 containerd[1930]: time="2025-03-17T17:25:03.156262065Z" level=info msg="containerd successfully booted in 0.325344s" Mar 17 17:25:03.156457 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:25:03.257112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:25:03.264360 amazon-ssm-agent[2098]: Initializing new seelog logger Mar 17 17:25:03.265477 amazon-ssm-agent[2098]: New Seelog Logger Creation Complete Mar 17 17:25:03.265650 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.266422 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.267626 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 processing appconfig overrides Mar 17 17:25:03.268680 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.268784 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 17 17:25:03.269005 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 processing appconfig overrides Mar 17 17:25:03.270716 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.270716 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.270716 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 processing appconfig overrides Mar 17 17:25:03.270716 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO Proxy environment variables: Mar 17 17:25:03.277544 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.277544 amazon-ssm-agent[2098]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:25:03.277544 amazon-ssm-agent[2098]: 2025/03/17 17:25:03 processing appconfig overrides Mar 17 17:25:03.375349 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO https_proxy: Mar 17 17:25:03.475052 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO http_proxy: Mar 17 17:25:03.576569 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO no_proxy: Mar 17 17:25:03.675304 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:25:03.774673 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:25:03.873399 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO Agent will take identity from EC2 Mar 17 17:25:03.911660 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:03.912533 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:03.912656 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:25:03.912794 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 
17:25:03.912908 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 17 17:25:03.913067 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:25:03.913067 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:25:03.913303 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [Registrar] Starting registrar module Mar 17 17:25:03.913303 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:25:03.913303 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [EC2Identity] EC2 registration was successful. Mar 17 17:25:03.914558 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:25:03.914558 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:25:03.914558 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:25:03.972814 amazon-ssm-agent[2098]: 2025-03-17 17:25:03 INFO [CredentialRefresher] Next credential rotation will be in 31.233283383533333 minutes Mar 17 17:25:04.211045 tar[1927]: linux-arm64/README.md Mar 17 17:25:04.245917 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:25:04.374232 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:25:04.417208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:25:04.429160 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:25:04.439760 systemd[1]: Started sshd@0-172.31.21.92:22-139.178.68.195:42904.service - OpenSSH per-connection server daemon (139.178.68.195:42904). Mar 17 17:25:04.462547 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 17 17:25:04.464927 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:25:04.479786 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:25:04.517961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:25:04.529683 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:25:04.544862 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:25:04.547490 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:25:04.689983 sshd[2135]: Accepted publickey for core from 139.178.68.195 port 42904 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:04.692609 sshd-session[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:04.709399 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:25:04.718719 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:25:04.727009 systemd-logind[1909]: New session 1 of user core. Mar 17 17:25:04.759212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:25:04.776885 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:25:04.782404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:04.789492 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 17 17:25:04.798644 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:04.802729 (systemd)[2150]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:25:04.973148 amazon-ssm-agent[2098]: 2025-03-17 17:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:25:05.044354 systemd[2150]: Queued start job for default target default.target. Mar 17 17:25:05.052851 systemd[2150]: Created slice app.slice - User Application Slice. Mar 17 17:25:05.053579 systemd[2150]: Reached target paths.target - Paths. Mar 17 17:25:05.053621 systemd[2150]: Reached target timers.target - Timers. Mar 17 17:25:05.057508 systemd[2150]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:25:05.075166 amazon-ssm-agent[2098]: 2025-03-17 17:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2161) started Mar 17 17:25:05.092642 systemd[2150]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:25:05.093462 systemd[2150]: Reached target sockets.target - Sockets. Mar 17 17:25:05.093496 systemd[2150]: Reached target basic.target - Basic System. Mar 17 17:25:05.093588 systemd[2150]: Reached target default.target - Main User Target. Mar 17 17:25:05.093655 systemd[2150]: Startup finished in 278ms. Mar 17 17:25:05.094525 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:25:05.104448 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:25:05.106742 systemd[1]: Startup finished in 1.085s (kernel) + 8.909s (initrd) + 8.446s (userspace) = 18.440s. 
Mar 17 17:25:05.118926 ntpd[1903]: 17 Mar 17:25:05 ntpd[1903]: Listen normally on 6 eth0 [fe80::4c6:fff:fe21:1355%2]:123 Mar 17 17:25:05.118381 ntpd[1903]: Listen normally on 6 eth0 [fe80::4c6:fff:fe21:1355%2]:123 Mar 17 17:25:05.173765 amazon-ssm-agent[2098]: 2025-03-17 17:25:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:25:05.283290 systemd[1]: Started sshd@1-172.31.21.92:22-139.178.68.195:42910.service - OpenSSH per-connection server daemon (139.178.68.195:42910). Mar 17 17:25:05.485956 sshd[2183]: Accepted publickey for core from 139.178.68.195 port 42910 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:05.488549 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:05.496220 systemd-logind[1909]: New session 2 of user core. Mar 17 17:25:05.506410 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:25:05.638313 sshd[2185]: Connection closed by 139.178.68.195 port 42910 Mar 17 17:25:05.639182 sshd-session[2183]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:05.646410 systemd[1]: sshd@1-172.31.21.92:22-139.178.68.195:42910.service: Deactivated successfully. Mar 17 17:25:05.652862 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:25:05.654544 systemd-logind[1909]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:25:05.657312 systemd-logind[1909]: Removed session 2. Mar 17 17:25:05.678713 systemd[1]: Started sshd@2-172.31.21.92:22-139.178.68.195:42924.service - OpenSSH per-connection server daemon (139.178.68.195:42924). 
Mar 17 17:25:05.873368 kubelet[2151]: E0317 17:25:05.873305 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:05.877815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:05.878197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:05.880379 sshd[2191]: Accepted publickey for core from 139.178.68.195 port 42924 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:05.879657 systemd[1]: kubelet.service: Consumed 1.379s CPU time. Mar 17 17:25:05.881587 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:05.890015 systemd-logind[1909]: New session 3 of user core. Mar 17 17:25:05.895402 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:25:06.013226 sshd[2195]: Connection closed by 139.178.68.195 port 42924 Mar 17 17:25:06.014011 sshd-session[2191]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:06.020007 systemd[1]: sshd@2-172.31.21.92:22-139.178.68.195:42924.service: Deactivated successfully. Mar 17 17:25:06.022974 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:25:06.024055 systemd-logind[1909]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:25:06.026220 systemd-logind[1909]: Removed session 3. Mar 17 17:25:06.050510 systemd[1]: Started sshd@3-172.31.21.92:22-139.178.68.195:42928.service - OpenSSH per-connection server daemon (139.178.68.195:42928). 
Mar 17 17:25:06.243761 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 42928 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:06.246136 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:06.255436 systemd-logind[1909]: New session 4 of user core. Mar 17 17:25:06.258111 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:25:06.384209 sshd[2202]: Connection closed by 139.178.68.195 port 42928 Mar 17 17:25:06.384933 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:06.389618 systemd[1]: sshd@3-172.31.21.92:22-139.178.68.195:42928.service: Deactivated successfully. Mar 17 17:25:06.393815 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:25:06.397620 systemd-logind[1909]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:25:06.399888 systemd-logind[1909]: Removed session 4. Mar 17 17:25:06.423623 systemd[1]: Started sshd@4-172.31.21.92:22-139.178.68.195:33430.service - OpenSSH per-connection server daemon (139.178.68.195:33430). Mar 17 17:25:06.603965 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 33430 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:06.606587 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:06.616495 systemd-logind[1909]: New session 5 of user core. Mar 17 17:25:06.623414 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:25:06.741058 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:25:06.741751 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:06.757071 sudo[2210]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:06.781841 sshd[2209]: Connection closed by 139.178.68.195 port 33430 Mar 17 17:25:06.780673 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:06.786748 systemd[1]: sshd@4-172.31.21.92:22-139.178.68.195:33430.service: Deactivated successfully. Mar 17 17:25:06.789877 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:25:06.791191 systemd-logind[1909]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:25:06.793695 systemd-logind[1909]: Removed session 5. Mar 17 17:25:06.812449 systemd[1]: Started sshd@5-172.31.21.92:22-139.178.68.195:33434.service - OpenSSH per-connection server daemon (139.178.68.195:33434). Mar 17 17:25:07.014782 sshd[2215]: Accepted publickey for core from 139.178.68.195 port 33434 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:07.016984 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:07.026061 systemd-logind[1909]: New session 6 of user core. Mar 17 17:25:07.029398 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:25:07.133280 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:25:07.133901 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:07.140306 sudo[2219]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:07.150300 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:25:07.150927 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:07.176929 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:25:07.224298 augenrules[2241]: No rules Mar 17 17:25:07.226723 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:25:07.227083 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:25:07.230246 sudo[2218]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:07.255157 sshd[2217]: Connection closed by 139.178.68.195 port 33434 Mar 17 17:25:07.255409 sshd-session[2215]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:07.259478 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:25:07.262464 systemd[1]: sshd@5-172.31.21.92:22-139.178.68.195:33434.service: Deactivated successfully. Mar 17 17:25:07.267476 systemd-logind[1909]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:25:07.269817 systemd-logind[1909]: Removed session 6. Mar 17 17:25:07.293622 systemd[1]: Started sshd@6-172.31.21.92:22-139.178.68.195:33450.service - OpenSSH per-connection server daemon (139.178.68.195:33450). 
Mar 17 17:25:07.484406 sshd[2249]: Accepted publickey for core from 139.178.68.195 port 33450 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:07.486817 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:07.495014 systemd-logind[1909]: New session 7 of user core. Mar 17 17:25:07.504370 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:25:07.606674 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:25:07.607334 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:25:08.233613 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:25:08.247671 (dockerd)[2271]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:25:08.712178 dockerd[2271]: time="2025-03-17T17:25:08.711846761Z" level=info msg="Starting up" Mar 17 17:25:08.981552 dockerd[2271]: time="2025-03-17T17:25:08.981397182Z" level=info msg="Loading containers: start." Mar 17 17:25:08.621329 systemd-resolved[1844]: Clock change detected. Flushing caches. Mar 17 17:25:08.632223 systemd-journald[1479]: Time jumped backwards, rotating. Mar 17 17:25:08.782230 kernel: Initializing XFRM netlink socket Mar 17 17:25:08.826369 (udev-worker)[2296]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:08.916212 systemd-networkd[1843]: docker0: Link UP Mar 17 17:25:08.958156 dockerd[2271]: time="2025-03-17T17:25:08.958088817Z" level=info msg="Loading containers: done." 
Mar 17 17:25:08.980913 dockerd[2271]: time="2025-03-17T17:25:08.980836869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:25:08.981091 dockerd[2271]: time="2025-03-17T17:25:08.981007725Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:25:08.981237 dockerd[2271]: time="2025-03-17T17:25:08.981191409Z" level=info msg="Daemon has completed initialization" Mar 17 17:25:09.038014 dockerd[2271]: time="2025-03-17T17:25:09.037650498Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:25:09.038003 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:25:09.932791 containerd[1930]: time="2025-03-17T17:25:09.932723482Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 17 17:25:10.581587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011591800.mount: Deactivated successfully. 
Mar 17 17:25:12.492971 containerd[1930]: time="2025-03-17T17:25:12.492871091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.494986 containerd[1930]: time="2025-03-17T17:25:12.494911187Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=26231950" Mar 17 17:25:12.495912 containerd[1930]: time="2025-03-17T17:25:12.495516251Z" level=info msg="ImageCreate event name:\"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.501092 containerd[1930]: time="2025-03-17T17:25:12.501011531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:12.503740 containerd[1930]: time="2025-03-17T17:25:12.503310659Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"26228750\" in 2.570527729s" Mar 17 17:25:12.503740 containerd[1930]: time="2025-03-17T17:25:12.503370131Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:25dd33975ea35cef2fa9b105778dbe3369de267e9ddf81427b7b82e98ff374e5\"" Mar 17 17:25:12.504612 containerd[1930]: time="2025-03-17T17:25:12.504342923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 17 17:25:14.536598 containerd[1930]: time="2025-03-17T17:25:14.536528413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:14.538160 containerd[1930]: time="2025-03-17T17:25:14.538070209Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=22530032" Mar 17 17:25:14.539042 containerd[1930]: time="2025-03-17T17:25:14.538923349Z" level=info msg="ImageCreate event name:\"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:14.544821 containerd[1930]: time="2025-03-17T17:25:14.544768501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:14.547621 containerd[1930]: time="2025-03-17T17:25:14.547435909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"23970828\" in 2.04303607s" Mar 17 17:25:14.547621 containerd[1930]: time="2025-03-17T17:25:14.547487905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:9e29b4db8c5cdf9970961ed3a47137ea71ad067643b8e5cccb58085f22a9b315\"" Mar 17 17:25:14.548567 containerd[1930]: time="2025-03-17T17:25:14.548513329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 17 17:25:15.522910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:25:15.530320 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:15.867384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:25:15.877105 (kubelet)[2527]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:15.966679 kubelet[2527]: E0317 17:25:15.966186 2527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:15.974617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:15.974972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:16.477522 containerd[1930]: time="2025-03-17T17:25:16.477457971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:16.479482 containerd[1930]: time="2025-03-17T17:25:16.479401995Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=17482561" Mar 17 17:25:16.480774 containerd[1930]: time="2025-03-17T17:25:16.480691227Z" level=info msg="ImageCreate event name:\"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:16.486654 containerd[1930]: time="2025-03-17T17:25:16.486118515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:16.488717 containerd[1930]: time="2025-03-17T17:25:16.488521707Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"18923375\" in 1.939805422s" Mar 17 17:25:16.488717 containerd[1930]: time="2025-03-17T17:25:16.488578503Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:6b8dfebcc65dc9d4765a91d2923c304e13beca7111c57dfc99f1c3267a6e9f30\"" Mar 17 17:25:16.489820 containerd[1930]: time="2025-03-17T17:25:16.489401667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 17 17:25:17.741005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175769503.mount: Deactivated successfully. Mar 17 17:25:18.254005 containerd[1930]: time="2025-03-17T17:25:18.253901283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:18.255387 containerd[1930]: time="2025-03-17T17:25:18.255255735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=27370095" Mar 17 17:25:18.256463 containerd[1930]: time="2025-03-17T17:25:18.256373187Z" level=info msg="ImageCreate event name:\"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:18.259925 containerd[1930]: time="2025-03-17T17:25:18.259824027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:18.261575 containerd[1930]: time="2025-03-17T17:25:18.261394143Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"27369114\" in 1.771941044s" Mar 17 17:25:18.261575 containerd[1930]: time="2025-03-17T17:25:18.261442923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\"" Mar 17 17:25:18.263088 containerd[1930]: time="2025-03-17T17:25:18.263048799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 17 17:25:18.856469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055632668.mount: Deactivated successfully. Mar 17 17:25:20.187032 containerd[1930]: time="2025-03-17T17:25:20.186256061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.188409 containerd[1930]: time="2025-03-17T17:25:20.188328185Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Mar 17 17:25:20.189492 containerd[1930]: time="2025-03-17T17:25:20.189405293Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.195523 containerd[1930]: time="2025-03-17T17:25:20.195466961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.198351 containerd[1930]: time="2025-03-17T17:25:20.198023417Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.934785534s" Mar 17 17:25:20.198351 containerd[1930]: time="2025-03-17T17:25:20.198075797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Mar 17 17:25:20.199389 containerd[1930]: time="2025-03-17T17:25:20.199159217Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:25:20.739560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478398892.mount: Deactivated successfully. Mar 17 17:25:20.746741 containerd[1930]: time="2025-03-17T17:25:20.746461916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.748402 containerd[1930]: time="2025-03-17T17:25:20.748319336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 17 17:25:20.749677 containerd[1930]: time="2025-03-17T17:25:20.749607704Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.753744 containerd[1930]: time="2025-03-17T17:25:20.753647168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:20.756606 containerd[1930]: time="2025-03-17T17:25:20.755482688Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.249311ms" Mar 17 
17:25:20.756606 containerd[1930]: time="2025-03-17T17:25:20.755542052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 17:25:20.756606 containerd[1930]: time="2025-03-17T17:25:20.756531152Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 17 17:25:21.349921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685070783.mount: Deactivated successfully. Mar 17 17:25:25.582836 containerd[1930]: time="2025-03-17T17:25:25.582754080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:25.585621 containerd[1930]: time="2025-03-17T17:25:25.585543696Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Mar 17 17:25:25.587968 containerd[1930]: time="2025-03-17T17:25:25.587846844Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:25.595964 containerd[1930]: time="2025-03-17T17:25:25.595138308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:25.598115 containerd[1930]: time="2025-03-17T17:25:25.598068012Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.841490384s" Mar 17 17:25:25.598269 containerd[1930]: time="2025-03-17T17:25:25.598239408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Mar 17 17:25:26.012333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:25:26.032293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:26.376449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:26.382620 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:25:26.458856 kubelet[2683]: E0317 17:25:26.458793 2683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:25:26.463786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:25:26.464195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:25:32.159307 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 17:25:33.374405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:33.382505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:33.439802 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-7.scope)... Mar 17 17:25:33.439837 systemd[1]: Reloading... Mar 17 17:25:33.682980 zram_generator::config[2747]: No configuration found. Mar 17 17:25:33.906699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:34.069553 systemd[1]: Reloading finished in 629 ms. 
Mar 17 17:25:34.159412 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:25:34.159629 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:25:34.160227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:34.168503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:34.482197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:34.494474 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:34.565063 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:34.565063 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:34.565063 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:25:34.565593 kubelet[2803]: I0317 17:25:34.565295 2803 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:35.781374 kubelet[2803]: I0317 17:25:35.781324 2803 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:25:35.783974 kubelet[2803]: I0317 17:25:35.781926 2803 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:35.783974 kubelet[2803]: I0317 17:25:35.782526 2803 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:25:35.819900 kubelet[2803]: E0317 17:25:35.819811 2803 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:35.825072 kubelet[2803]: I0317 17:25:35.824845 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:35.841648 kubelet[2803]: E0317 17:25:35.841595 2803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:25:35.841834 kubelet[2803]: I0317 17:25:35.841811 2803 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:25:35.848000 kubelet[2803]: I0317 17:25:35.846680 2803 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:25:35.848000 kubelet[2803]: I0317 17:25:35.847126 2803 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:35.848000 kubelet[2803]: I0317 17:25:35.847171 2803 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:25:35.848000 kubelet[2803]: I0317 17:25:35.847469 2803 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 17:25:35.848381 kubelet[2803]: I0317 17:25:35.847488 2803 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:25:35.848381 kubelet[2803]: I0317 17:25:35.847727 2803 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:35.853627 kubelet[2803]: I0317 17:25:35.853411 2803 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:25:35.853627 kubelet[2803]: I0317 17:25:35.853456 2803 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:35.853627 kubelet[2803]: I0317 17:25:35.853491 2803 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:25:35.853627 kubelet[2803]: I0317 17:25:35.853511 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:35.859983 kubelet[2803]: W0317 17:25:35.859252 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:35.859983 kubelet[2803]: E0317 17:25:35.859384 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:35.859983 kubelet[2803]: I0317 17:25:35.859509 2803 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:35.860611 kubelet[2803]: I0317 17:25:35.860582 2803 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:35.860827 kubelet[2803]: W0317 17:25:35.860807 2803 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:25:35.862556 kubelet[2803]: I0317 17:25:35.862514 2803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:25:35.862772 kubelet[2803]: I0317 17:25:35.862751 2803 server.go:1287] "Started kubelet" Mar 17 17:25:35.871563 kubelet[2803]: W0317 17:25:35.871265 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:35.871563 kubelet[2803]: E0317 17:25:35.871364 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:35.871814 kubelet[2803]: E0317 17:25:35.871457 2803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-92.182da714274bd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-92,UID:ip-172-31-21-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-92,},FirstTimestamp:2025-03-17 17:25:35.862716791 +0000 UTC m=+1.361929652,LastTimestamp:2025-03-17 17:25:35.862716791 +0000 UTC m=+1.361929652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-92,}" Mar 17 17:25:35.873745 kubelet[2803]: I0317 
17:25:35.873712 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:35.873968 kubelet[2803]: I0317 17:25:35.873889 2803 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:35.875579 kubelet[2803]: I0317 17:25:35.875524 2803 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:25:35.877601 kubelet[2803]: I0317 17:25:35.877499 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:35.878031 kubelet[2803]: I0317 17:25:35.877877 2803 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:35.879183 kubelet[2803]: E0317 17:25:35.878908 2803 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:35.880905 kubelet[2803]: I0317 17:25:35.879401 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:25:35.884493 kubelet[2803]: E0317 17:25:35.884437 2803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Mar 17 17:25:35.884636 kubelet[2803]: I0317 17:25:35.884504 2803 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:25:35.884865 kubelet[2803]: I0317 17:25:35.884826 2803 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:35.885079 kubelet[2803]: I0317 17:25:35.884921 2803 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:35.886192 kubelet[2803]: W0317 17:25:35.886113 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 
17:25:35.886303 kubelet[2803]: E0317 17:25:35.886209 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:35.886588 kubelet[2803]: I0317 17:25:35.886542 2803 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:35.886710 kubelet[2803]: I0317 17:25:35.886671 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:35.889223 kubelet[2803]: I0317 17:25:35.889173 2803 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:35.900975 kubelet[2803]: E0317 17:25:35.899855 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="200ms" Mar 17 17:25:35.912358 kubelet[2803]: I0317 17:25:35.912068 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:25:35.914270 kubelet[2803]: I0317 17:25:35.914231 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:25:35.914452 kubelet[2803]: I0317 17:25:35.914432 2803 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:25:35.914569 kubelet[2803]: I0317 17:25:35.914548 2803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 17 17:25:35.914663 kubelet[2803]: I0317 17:25:35.914645 2803 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:25:35.914830 kubelet[2803]: E0317 17:25:35.914800 2803 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:25:35.923510 kubelet[2803]: W0317 17:25:35.923416 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:35.923510 kubelet[2803]: E0317 17:25:35.923500 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:35.939188 kubelet[2803]: I0317 17:25:35.939141 2803 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:25:35.939188 kubelet[2803]: I0317 17:25:35.939176 2803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:35.939366 kubelet[2803]: I0317 17:25:35.939213 2803 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:35.943093 kubelet[2803]: I0317 17:25:35.943048 2803 policy_none.go:49] "None policy: Start" Mar 17 17:25:35.943093 kubelet[2803]: I0317 17:25:35.943086 2803 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:25:35.943246 kubelet[2803]: I0317 17:25:35.943110 2803 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:35.954132 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 17 17:25:35.969883 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:25:35.976109 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 17 17:25:35.985596 kubelet[2803]: E0317 17:25:35.985137 2803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Mar 17 17:25:35.985596 kubelet[2803]: I0317 17:25:35.985563 2803 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:25:35.986074 kubelet[2803]: I0317 17:25:35.985854 2803 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:25:35.986074 kubelet[2803]: I0317 17:25:35.985886 2803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:25:35.987679 kubelet[2803]: I0317 17:25:35.987542 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:25:35.991145 kubelet[2803]: E0317 17:25:35.991058 2803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 17:25:35.991145 kubelet[2803]: E0317 17:25:35.991145 2803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-92\" not found" Mar 17 17:25:36.032141 systemd[1]: Created slice kubepods-burstable-pod025e96a9b73cf48c0b282709ea163300.slice - libcontainer container kubepods-burstable-pod025e96a9b73cf48c0b282709ea163300.slice. 
Mar 17 17:25:36.053405 kubelet[2803]: E0317 17:25:36.053341 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:36.061609 systemd[1]: Created slice kubepods-burstable-pod2a0ca01207a2feb18f18e7137357138d.slice - libcontainer container kubepods-burstable-pod2a0ca01207a2feb18f18e7137357138d.slice. Mar 17 17:25:36.065443 kubelet[2803]: E0317 17:25:36.065387 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:36.070044 systemd[1]: Created slice kubepods-burstable-pod3178abaf5d82aabf52282aef991ac38d.slice - libcontainer container kubepods-burstable-pod3178abaf5d82aabf52282aef991ac38d.slice. Mar 17 17:25:36.073675 kubelet[2803]: E0317 17:25:36.073566 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:36.088555 kubelet[2803]: I0317 17:25:36.088486 2803 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:36.089077 kubelet[2803]: E0317 17:25:36.089034 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Mar 17 17:25:36.102525 kubelet[2803]: E0317 17:25:36.102466 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="400ms" Mar 17 17:25:36.186110 kubelet[2803]: I0317 17:25:36.186038 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:36.186246 kubelet[2803]: I0317 17:25:36.186122 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:36.186246 kubelet[2803]: I0317 17:25:36.186206 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a0ca01207a2feb18f18e7137357138d-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-92\" (UID: \"2a0ca01207a2feb18f18e7137357138d\") " pod="kube-system/kube-scheduler-ip-172-31-21-92" Mar 17 17:25:36.186341 kubelet[2803]: I0317 17:25:36.186254 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:36.186341 kubelet[2803]: I0317 17:25:36.186306 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:36.186435 kubelet[2803]: I0317 17:25:36.186347 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:36.186435 kubelet[2803]: I0317 17:25:36.186402 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-ca-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:36.186544 kubelet[2803]: I0317 17:25:36.186449 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:36.186544 kubelet[2803]: I0317 17:25:36.186499 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:36.291452 kubelet[2803]: I0317 17:25:36.291330 2803 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:36.291958 kubelet[2803]: E0317 17:25:36.291871 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Mar 17 17:25:36.355537 
containerd[1930]: time="2025-03-17T17:25:36.355472877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-92,Uid:025e96a9b73cf48c0b282709ea163300,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:36.367225 containerd[1930]: time="2025-03-17T17:25:36.366998085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-92,Uid:2a0ca01207a2feb18f18e7137357138d,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:36.375682 containerd[1930]: time="2025-03-17T17:25:36.375315669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-92,Uid:3178abaf5d82aabf52282aef991ac38d,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:36.503708 kubelet[2803]: E0317 17:25:36.503653 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="800ms" Mar 17 17:25:36.694413 kubelet[2803]: I0317 17:25:36.694187 2803 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:36.694777 kubelet[2803]: E0317 17:25:36.694719 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Mar 17 17:25:36.723458 kubelet[2803]: W0317 17:25:36.723312 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:36.723458 kubelet[2803]: E0317 17:25:36.723402 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.21.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:36.813841 kubelet[2803]: E0317 17:25:36.813689 2803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.92:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-92.182da714274bd977 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-92,UID:ip-172-31-21-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-92,},FirstTimestamp:2025-03-17 17:25:35.862716791 +0000 UTC m=+1.361929652,LastTimestamp:2025-03-17 17:25:35.862716791 +0000 UTC m=+1.361929652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-92,}" Mar 17 17:25:36.863048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393998294.mount: Deactivated successfully. 
Mar 17 17:25:36.879451 containerd[1930]: time="2025-03-17T17:25:36.879376644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:36.881659 containerd[1930]: time="2025-03-17T17:25:36.881601060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:36.884522 containerd[1930]: time="2025-03-17T17:25:36.884436420Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:25:36.886171 containerd[1930]: time="2025-03-17T17:25:36.886100724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:25:36.890238 containerd[1930]: time="2025-03-17T17:25:36.890134704Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:36.893754 containerd[1930]: time="2025-03-17T17:25:36.893578092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:25:36.894721 containerd[1930]: time="2025-03-17T17:25:36.894257988Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:36.909963 containerd[1930]: time="2025-03-17T17:25:36.908421288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:25:36.911778 
containerd[1930]: time="2025-03-17T17:25:36.911707824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.277771ms" Mar 17 17:25:36.916171 containerd[1930]: time="2025-03-17T17:25:36.916114152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.526975ms" Mar 17 17:25:36.917602 containerd[1930]: time="2025-03-17T17:25:36.917550792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.437891ms" Mar 17 17:25:37.128360 containerd[1930]: time="2025-03-17T17:25:37.127846689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:37.128360 containerd[1930]: time="2025-03-17T17:25:37.127970517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:37.128360 containerd[1930]: time="2025-03-17T17:25:37.128009013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.129704 containerd[1930]: time="2025-03-17T17:25:37.126748569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:37.129704 containerd[1930]: time="2025-03-17T17:25:37.129516093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:37.129704 containerd[1930]: time="2025-03-17T17:25:37.129549321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.130010 containerd[1930]: time="2025-03-17T17:25:37.129710985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.131545 containerd[1930]: time="2025-03-17T17:25:37.131413377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.135789 containerd[1930]: time="2025-03-17T17:25:37.135334845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:37.135789 containerd[1930]: time="2025-03-17T17:25:37.135443889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:37.135789 containerd[1930]: time="2025-03-17T17:25:37.135479625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.137460 containerd[1930]: time="2025-03-17T17:25:37.136982697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:37.174285 systemd[1]: Started cri-containerd-33cc922542a77611e9eb7601d701cc32e05ede8a4a65b8fa4960e08e840424e0.scope - libcontainer container 33cc922542a77611e9eb7601d701cc32e05ede8a4a65b8fa4960e08e840424e0. 
Mar 17 17:25:37.200433 systemd[1]: Started cri-containerd-12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338.scope - libcontainer container 12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338. Mar 17 17:25:37.206289 systemd[1]: Started cri-containerd-d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424.scope - libcontainer container d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424. Mar 17 17:25:37.306062 kubelet[2803]: E0317 17:25:37.305615 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": dial tcp 172.31.21.92:6443: connect: connection refused" interval="1.6s" Mar 17 17:25:37.319610 containerd[1930]: time="2025-03-17T17:25:37.319391050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-92,Uid:3178abaf5d82aabf52282aef991ac38d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33cc922542a77611e9eb7601d701cc32e05ede8a4a65b8fa4960e08e840424e0\"" Mar 17 17:25:37.330685 containerd[1930]: time="2025-03-17T17:25:37.330630034Z" level=info msg="CreateContainer within sandbox \"33cc922542a77611e9eb7601d701cc32e05ede8a4a65b8fa4960e08e840424e0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:25:37.332775 kubelet[2803]: W0317 17:25:37.332558 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:37.332775 kubelet[2803]: E0317 17:25:37.332623 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:37.333507 containerd[1930]: time="2025-03-17T17:25:37.332391286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-92,Uid:025e96a9b73cf48c0b282709ea163300,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338\"" Mar 17 17:25:37.339261 containerd[1930]: time="2025-03-17T17:25:37.338995390Z" level=info msg="CreateContainer within sandbox \"12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:25:37.345679 containerd[1930]: time="2025-03-17T17:25:37.345208102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-92,Uid:2a0ca01207a2feb18f18e7137357138d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424\"" Mar 17 17:25:37.352758 containerd[1930]: time="2025-03-17T17:25:37.352688206Z" level=info msg="CreateContainer within sandbox \"d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:25:37.367301 kubelet[2803]: W0317 17:25:37.367212 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:37.367435 kubelet[2803]: E0317 17:25:37.367311 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-92&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" 
Mar 17 17:25:37.368172 containerd[1930]: time="2025-03-17T17:25:37.368105398Z" level=info msg="CreateContainer within sandbox \"33cc922542a77611e9eb7601d701cc32e05ede8a4a65b8fa4960e08e840424e0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43dd68d09c9860757edd154bf839436a20e4d4f6caf1ae2f8bab750b5d560677\"" Mar 17 17:25:37.369515 containerd[1930]: time="2025-03-17T17:25:37.369110038Z" level=info msg="StartContainer for \"43dd68d09c9860757edd154bf839436a20e4d4f6caf1ae2f8bab750b5d560677\"" Mar 17 17:25:37.387002 containerd[1930]: time="2025-03-17T17:25:37.384516694Z" level=info msg="CreateContainer within sandbox \"12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d\"" Mar 17 17:25:37.388455 containerd[1930]: time="2025-03-17T17:25:37.388395994Z" level=info msg="StartContainer for \"d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d\"" Mar 17 17:25:37.405332 containerd[1930]: time="2025-03-17T17:25:37.405270190Z" level=info msg="CreateContainer within sandbox \"d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c\"" Mar 17 17:25:37.408174 containerd[1930]: time="2025-03-17T17:25:37.407500726Z" level=info msg="StartContainer for \"ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c\"" Mar 17 17:25:37.426300 systemd[1]: Started cri-containerd-43dd68d09c9860757edd154bf839436a20e4d4f6caf1ae2f8bab750b5d560677.scope - libcontainer container 43dd68d09c9860757edd154bf839436a20e4d4f6caf1ae2f8bab750b5d560677. 
Mar 17 17:25:37.440916 kubelet[2803]: W0317 17:25:37.440831 2803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.92:6443: connect: connection refused Mar 17 17:25:37.440916 kubelet[2803]: E0317 17:25:37.440926 2803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.92:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:25:37.470314 systemd[1]: Started cri-containerd-d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d.scope - libcontainer container d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d. Mar 17 17:25:37.503030 kubelet[2803]: I0317 17:25:37.502745 2803 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:37.504578 kubelet[2803]: E0317 17:25:37.504517 2803 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.21.92:6443/api/v1/nodes\": dial tcp 172.31.21.92:6443: connect: connection refused" node="ip-172-31-21-92" Mar 17 17:25:37.507613 systemd[1]: Started cri-containerd-ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c.scope - libcontainer container ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c. 
Mar 17 17:25:37.571247 containerd[1930]: time="2025-03-17T17:25:37.570629219Z" level=info msg="StartContainer for \"43dd68d09c9860757edd154bf839436a20e4d4f6caf1ae2f8bab750b5d560677\" returns successfully" Mar 17 17:25:37.618932 containerd[1930]: time="2025-03-17T17:25:37.618581496Z" level=info msg="StartContainer for \"d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d\" returns successfully" Mar 17 17:25:37.683919 containerd[1930]: time="2025-03-17T17:25:37.682626756Z" level=info msg="StartContainer for \"ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c\" returns successfully" Mar 17 17:25:37.944283 kubelet[2803]: E0317 17:25:37.944161 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:37.953314 kubelet[2803]: E0317 17:25:37.952607 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:37.955441 kubelet[2803]: E0317 17:25:37.955374 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:38.958878 kubelet[2803]: E0317 17:25:38.958098 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:38.958878 kubelet[2803]: E0317 17:25:38.958600 2803 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-92\" not found" node="ip-172-31-21-92" Mar 17 17:25:39.106715 kubelet[2803]: I0317 17:25:39.106681 2803 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:40.860231 kubelet[2803]: I0317 17:25:40.860157 2803 kubelet_node_status.go:79] 
"Successfully registered node" node="ip-172-31-21-92" Mar 17 17:25:40.872449 kubelet[2803]: I0317 17:25:40.872376 2803 apiserver.go:52] "Watching apiserver" Mar 17 17:25:40.885452 kubelet[2803]: I0317 17:25:40.885394 2803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:40.887969 kubelet[2803]: I0317 17:25:40.887319 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:40.909196 kubelet[2803]: E0317 17:25:40.909135 2803 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:40.909196 kubelet[2803]: I0317 17:25:40.909187 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:40.922189 kubelet[2803]: E0317 17:25:40.920285 2803 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:40.922189 kubelet[2803]: I0317 17:25:40.920339 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-92" Mar 17 17:25:40.924559 kubelet[2803]: E0317 17:25:40.924492 2803 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-92" Mar 17 17:25:42.525395 kubelet[2803]: I0317 17:25:42.525350 2803 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:42.991063 systemd[1]: Reloading requested from client PID 3083 ('systemctl') (unit session-7.scope)... 
Mar 17 17:25:42.991567 systemd[1]: Reloading... Mar 17 17:25:43.203991 zram_generator::config[3123]: No configuration found. Mar 17 17:25:43.430362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:43.624437 systemd[1]: Reloading finished in 631 ms. Mar 17 17:25:43.701843 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:43.725618 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:25:43.726071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:43.726151 systemd[1]: kubelet.service: Consumed 2.006s CPU time, 123.2M memory peak, 0B memory swap peak. Mar 17 17:25:43.734479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:44.034289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:44.056531 (kubelet)[3183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:44.150754 kubelet[3183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:44.150754 kubelet[3183]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:44.150754 kubelet[3183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:25:44.151447 kubelet[3183]: I0317 17:25:44.150866 3183 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:44.169824 kubelet[3183]: I0317 17:25:44.166039 3183 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 17:25:44.169824 kubelet[3183]: I0317 17:25:44.166092 3183 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:44.169824 kubelet[3183]: I0317 17:25:44.169243 3183 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 17:25:44.174538 kubelet[3183]: I0317 17:25:44.173894 3183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:25:44.182648 kubelet[3183]: I0317 17:25:44.182601 3183 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:44.190688 kubelet[3183]: E0317 17:25:44.190466 3183 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:25:44.190688 kubelet[3183]: I0317 17:25:44.190519 3183 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:25:44.198742 kubelet[3183]: I0317 17:25:44.198457 3183 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:25:44.199367 sudo[3197]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:25:44.200993 kubelet[3183]: I0317 17:25:44.200088 3183 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:44.200993 kubelet[3183]: I0317 17:25:44.200161 3183 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2
} Mar 17 17:25:44.200993 kubelet[3183]: I0317 17:25:44.200594 3183 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:25:44.200993 kubelet[3183]: I0317 17:25:44.200616 3183 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 17:25:44.200190 sudo[3197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:25:44.201498 kubelet[3183]: I0317 17:25:44.201051 3183 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:44.202749 kubelet[3183]: I0317 17:25:44.202455 3183 kubelet.go:446] "Attempting to sync node with API server" Mar 17 17:25:44.204174 kubelet[3183]: I0317 17:25:44.203847 3183 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:44.204174 kubelet[3183]: I0317 17:25:44.203908 3183 kubelet.go:352] "Adding apiserver pod source" Mar 17 17:25:44.204174 kubelet[3183]: I0317 17:25:44.203974 3183 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:44.210977 kubelet[3183]: I0317 17:25:44.209170 3183 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:44.210977 kubelet[3183]: I0317 17:25:44.209927 3183 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:44.210977 kubelet[3183]: I0317 17:25:44.210670 3183 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 17:25:44.210977 kubelet[3183]: I0317 17:25:44.210715 3183 server.go:1287] "Started kubelet" Mar 17 17:25:44.220856 kubelet[3183]: I0317 17:25:44.220801 3183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:44.242034 kubelet[3183]: I0317 17:25:44.241625 3183 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:44.252679 kubelet[3183]: I0317 17:25:44.252619 3183 server.go:490] "Adding debug handlers to kubelet server" Mar 17 17:25:44.257216 kubelet[3183]: I0317 
17:25:44.256819 3183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:44.257347 kubelet[3183]: I0317 17:25:44.257327 3183 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:44.260247 kubelet[3183]: I0317 17:25:44.259560 3183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:25:44.272456 kubelet[3183]: I0317 17:25:44.271869 3183 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 17:25:44.272456 kubelet[3183]: E0317 17:25:44.272127 3183 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-21-92\" not found" Mar 17 17:25:44.306255 kubelet[3183]: I0317 17:25:44.302601 3183 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:44.306255 kubelet[3183]: I0317 17:25:44.302855 3183 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:44.322884 kubelet[3183]: I0317 17:25:44.322822 3183 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:44.326000 kubelet[3183]: I0317 17:25:44.324354 3183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:25:44.326000 kubelet[3183]: I0317 17:25:44.324675 3183 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:44.330509 kubelet[3183]: I0317 17:25:44.330449 3183 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:25:44.330509 kubelet[3183]: I0317 17:25:44.330499 3183 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 17:25:44.330702 kubelet[3183]: I0317 17:25:44.330542 3183 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 17:25:44.330702 kubelet[3183]: I0317 17:25:44.330557 3183 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 17:25:44.330702 kubelet[3183]: E0317 17:25:44.330624 3183 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:25:44.363048 kubelet[3183]: I0317 17:25:44.359758 3183 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:44.377479 kubelet[3183]: E0317 17:25:44.375117 3183 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:44.433842 kubelet[3183]: E0317 17:25:44.431793 3183 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:25:44.500731 kubelet[3183]: I0317 17:25:44.500570 3183 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 17:25:44.500731 kubelet[3183]: I0317 17:25:44.500606 3183 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:44.500731 kubelet[3183]: I0317 17:25:44.500642 3183 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:44.502068 kubelet[3183]: I0317 17:25:44.501950 3183 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:25:44.502068 kubelet[3183]: I0317 17:25:44.501990 3183 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:25:44.502068 kubelet[3183]: I0317 17:25:44.502028 3183 policy_none.go:49] "None policy: Start" Mar 17 17:25:44.502068 kubelet[3183]: I0317 17:25:44.502047 3183 
memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 17:25:44.502068 kubelet[3183]: I0317 17:25:44.502072 3183 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:44.502334 kubelet[3183]: I0317 17:25:44.502273 3183 state_mem.go:75] "Updated machine memory state" Mar 17 17:25:44.511999 kubelet[3183]: I0317 17:25:44.511903 3183 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:25:44.512244 kubelet[3183]: I0317 17:25:44.512214 3183 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:25:44.512310 kubelet[3183]: I0317 17:25:44.512244 3183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:25:44.513925 kubelet[3183]: I0317 17:25:44.513709 3183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:25:44.518815 kubelet[3183]: E0317 17:25:44.518414 3183 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 17:25:44.634143 kubelet[3183]: I0317 17:25:44.632985 3183 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-21-92" Mar 17 17:25:44.636069 kubelet[3183]: I0317 17:25:44.636019 3183 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-92" Mar 17 17:25:44.636672 kubelet[3183]: I0317 17:25:44.636639 3183 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:44.640176 kubelet[3183]: I0317 17:25:44.640134 3183 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.667670 kubelet[3183]: E0317 17:25:44.667613 3183 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-92\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:44.671675 kubelet[3183]: I0317 17:25:44.671620 3183 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-21-92" Mar 17 17:25:44.671848 kubelet[3183]: I0317 17:25:44.671736 3183 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-21-92" Mar 17 17:25:44.708963 kubelet[3183]: I0317 17:25:44.706859 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-ca-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:44.708963 kubelet[3183]: I0317 17:25:44.706926 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " 
pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:44.709443 kubelet[3183]: I0317 17:25:44.709277 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3178abaf5d82aabf52282aef991ac38d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-92\" (UID: \"3178abaf5d82aabf52282aef991ac38d\") " pod="kube-system/kube-apiserver-ip-172-31-21-92" Mar 17 17:25:44.709443 kubelet[3183]: I0317 17:25:44.709372 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.709751 kubelet[3183]: I0317 17:25:44.709417 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.709751 kubelet[3183]: I0317 17:25:44.709634 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.709751 kubelet[3183]: I0317 17:25:44.709710 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.710182 kubelet[3183]: I0317 17:25:44.710048 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/025e96a9b73cf48c0b282709ea163300-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-92\" (UID: \"025e96a9b73cf48c0b282709ea163300\") " pod="kube-system/kube-controller-manager-ip-172-31-21-92" Mar 17 17:25:44.710182 kubelet[3183]: I0317 17:25:44.710128 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a0ca01207a2feb18f18e7137357138d-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-92\" (UID: \"2a0ca01207a2feb18f18e7137357138d\") " pod="kube-system/kube-scheduler-ip-172-31-21-92" Mar 17 17:25:45.169579 sudo[3197]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:45.218674 kubelet[3183]: I0317 17:25:45.218601 3183 apiserver.go:52] "Watching apiserver" Mar 17 17:25:45.303106 kubelet[3183]: I0317 17:25:45.303029 3183 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:45.454499 kubelet[3183]: I0317 17:25:45.453636 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-92" podStartSLOduration=1.453616554 podStartE2EDuration="1.453616554s" podCreationTimestamp="2025-03-17 17:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:45.45020937 +0000 UTC m=+1.386566059" watchObservedRunningTime="2025-03-17 17:25:45.453616554 +0000 UTC m=+1.389973219" Mar 17 17:25:45.487194 kubelet[3183]: I0317 17:25:45.487110 3183 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-92" podStartSLOduration=1.487086463 podStartE2EDuration="1.487086463s" podCreationTimestamp="2025-03-17 17:25:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:45.470085787 +0000 UTC m=+1.406442452" watchObservedRunningTime="2025-03-17 17:25:45.487086463 +0000 UTC m=+1.423443116" Mar 17 17:25:45.487636 kubelet[3183]: I0317 17:25:45.487256 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-92" podStartSLOduration=3.487244191 podStartE2EDuration="3.487244191s" podCreationTimestamp="2025-03-17 17:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:45.485683531 +0000 UTC m=+1.422040208" watchObservedRunningTime="2025-03-17 17:25:45.487244191 +0000 UTC m=+1.423600868" Mar 17 17:25:47.170727 update_engine[1911]: I20250317 17:25:47.170418 1911 update_attempter.cc:509] Updating boot flags... Mar 17 17:25:47.261112 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3239) Mar 17 17:25:47.650374 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3239) Mar 17 17:25:47.667692 kubelet[3183]: I0317 17:25:47.667319 3183 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:25:47.673212 containerd[1930]: time="2025-03-17T17:25:47.671220741Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:25:47.673717 kubelet[3183]: I0317 17:25:47.671605 3183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:25:48.070840 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3239) Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436221 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-xtables-lock\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436285 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-config-path\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436330 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-net\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436370 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrmnz\" (UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436413 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-run\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438052 kubelet[3183]: I0317 17:25:48.436450 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cni-path\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436484 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436535 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436574 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-proxy\") pod \"kube-proxy-hk4zq\" (UID: \"38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d\") " pod="kube-system/kube-proxy-hk4zq" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436609 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-xtables-lock\") pod \"kube-proxy-hk4zq\" (UID: 
\"38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d\") " pod="kube-system/kube-proxy-hk4zq" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436677 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-cgroup\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438418 kubelet[3183]: I0317 17:25:48.436713 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hostproc\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438685 kubelet[3183]: I0317 17:25:48.436747 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-etc-cni-netd\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438685 kubelet[3183]: I0317 17:25:48.436781 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-kernel\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438685 kubelet[3183]: I0317 17:25:48.436819 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-lib-modules\") pod \"kube-proxy-hk4zq\" (UID: \"38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d\") " pod="kube-system/kube-proxy-hk4zq" Mar 17 17:25:48.438685 kubelet[3183]: I0317 
17:25:48.436857 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-lib-modules\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.438685 kubelet[3183]: I0317 17:25:48.436892 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5x9m\" (UniqueName: \"kubernetes.io/projected/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-api-access-t5x9m\") pod \"kube-proxy-hk4zq\" (UID: \"38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d\") " pod="kube-system/kube-proxy-hk4zq" Mar 17 17:25:48.450147 kubelet[3183]: I0317 17:25:48.436931 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-bpf-maps\") pod \"cilium-f79k4\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " pod="kube-system/cilium-f79k4" Mar 17 17:25:48.464705 kubelet[3183]: I0317 17:25:48.464530 3183 status_manager.go:890] "Failed to get status for pod" podUID="38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d" pod="kube-system/kube-proxy-hk4zq" err="pods \"kube-proxy-hk4zq\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" Mar 17 17:25:48.467147 kubelet[3183]: W0317 17:25:48.464831 3183 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-92' and this object Mar 17 17:25:48.467147 kubelet[3183]: E0317 17:25:48.464886 3183 reflector.go:166] "Unhandled 
Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" logger="UnhandledError" Mar 17 17:25:48.467147 kubelet[3183]: W0317 17:25:48.465258 3183 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-21-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-92' and this object Mar 17 17:25:48.467147 kubelet[3183]: E0317 17:25:48.465294 3183 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" logger="UnhandledError" Mar 17 17:25:48.467147 kubelet[3183]: W0317 17:25:48.465377 3183 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-21-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-92' and this object Mar 17 17:25:48.467456 kubelet[3183]: E0317 17:25:48.465406 3183 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found 
between node 'ip-172-31-21-92' and this object" logger="UnhandledError" Mar 17 17:25:48.467456 kubelet[3183]: W0317 17:25:48.465483 3183 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-21-92" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-92' and this object Mar 17 17:25:48.467456 kubelet[3183]: E0317 17:25:48.465508 3183 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" logger="UnhandledError" Mar 17 17:25:48.467456 kubelet[3183]: W0317 17:25:48.465580 3183 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-21-92" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-92' and this object Mar 17 17:25:48.467456 kubelet[3183]: E0317 17:25:48.465605 3183 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" logger="UnhandledError" Mar 17 17:25:48.523290 systemd[1]: Created slice kubepods-besteffort-pod38ea490a_fcc7_4cab_9f3a_90a07c7a2e8d.slice - libcontainer container kubepods-besteffort-pod38ea490a_fcc7_4cab_9f3a_90a07c7a2e8d.slice. 
Mar 17 17:25:48.552997 systemd[1]: Created slice kubepods-burstable-pod5bcd66f9_d41f_434f_8235_86715e4fcd1c.slice - libcontainer container kubepods-burstable-pod5bcd66f9_d41f_434f_8235_86715e4fcd1c.slice. Mar 17 17:25:48.940267 kubelet[3183]: I0317 17:25:48.940158 3183 status_manager.go:890] "Failed to get status for pod" podUID="3236e17d-ea36-411f-ad17-3e48a99fb57b" pod="kube-system/cilium-operator-6c4d7847fc-mzv8h" err="pods \"cilium-operator-6c4d7847fc-mzv8h\" is forbidden: User \"system:node:ip-172-31-21-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-92' and this object" Mar 17 17:25:48.946005 systemd[1]: Created slice kubepods-besteffort-pod3236e17d_ea36_411f_ad17_3e48a99fb57b.slice - libcontainer container kubepods-besteffort-pod3236e17d_ea36_411f_ad17_3e48a99fb57b.slice. Mar 17 17:25:48.963212 kubelet[3183]: I0317 17:25:48.963135 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6268\" (UniqueName: \"kubernetes.io/projected/3236e17d-ea36-411f-ad17-3e48a99fb57b-kube-api-access-f6268\") pod \"cilium-operator-6c4d7847fc-mzv8h\" (UID: \"3236e17d-ea36-411f-ad17-3e48a99fb57b\") " pod="kube-system/cilium-operator-6c4d7847fc-mzv8h" Mar 17 17:25:48.963365 kubelet[3183]: I0317 17:25:48.963222 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3236e17d-ea36-411f-ad17-3e48a99fb57b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mzv8h\" (UID: \"3236e17d-ea36-411f-ad17-3e48a99fb57b\") " pod="kube-system/cilium-operator-6c4d7847fc-mzv8h" Mar 17 17:25:48.988841 sudo[2252]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:49.011210 sshd[2251]: Connection closed by 139.178.68.195 port 33450 Mar 17 17:25:49.012085 sshd-session[2249]: pam_unix(sshd:session): session closed for user core Mar 
17 17:25:49.018744 systemd-logind[1909]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:25:49.019129 systemd[1]: sshd@6-172.31.21.92:22-139.178.68.195:33450.service: Deactivated successfully. Mar 17 17:25:49.022578 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:25:49.024376 systemd[1]: session-7.scope: Consumed 12.066s CPU time, 153.2M memory peak, 0B memory swap peak. Mar 17 17:25:49.027128 systemd-logind[1909]: Removed session 7. Mar 17 17:25:49.565789 kubelet[3183]: E0317 17:25:49.565732 3183 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.565920 kubelet[3183]: E0317 17:25:49.565862 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-proxy podName:38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d nodeName:}" failed. No retries permitted until 2025-03-17 17:25:50.065830827 +0000 UTC m=+6.002187468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-proxy") pod "kube-proxy-hk4zq" (UID: "38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.566259 kubelet[3183]: E0317 17:25:49.565751 3183 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Mar 17 17:25:49.566259 kubelet[3183]: E0317 17:25:49.566135 3183 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-f79k4: failed to sync secret cache: timed out waiting for the condition Mar 17 17:25:49.566259 kubelet[3183]: E0317 17:25:49.566221 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls podName:5bcd66f9-d41f-434f-8235-86715e4fcd1c nodeName:}" failed. No retries permitted until 2025-03-17 17:25:50.066200811 +0000 UTC m=+6.002557464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls") pod "cilium-f79k4" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:25:49.568136 kubelet[3183]: E0317 17:25:49.568009 3183 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 17 17:25:49.568303 kubelet[3183]: E0317 17:25:49.568172 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets podName:5bcd66f9-d41f-434f-8235-86715e4fcd1c nodeName:}" failed. No retries permitted until 2025-03-17 17:25:50.068147007 +0000 UTC m=+6.004503648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets") pod "cilium-f79k4" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c") : failed to sync secret cache: timed out waiting for the condition Mar 17 17:25:49.652201 kubelet[3183]: E0317 17:25:49.652094 3183 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.652201 kubelet[3183]: E0317 17:25:49.652142 3183 projected.go:194] Error preparing data for projected volume kube-api-access-jrmnz for pod kube-system/cilium-f79k4: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.652384 kubelet[3183]: E0317 17:25:49.652218 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz podName:5bcd66f9-d41f-434f-8235-86715e4fcd1c nodeName:}" failed. No retries permitted until 2025-03-17 17:25:50.152191527 +0000 UTC m=+6.088548192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jrmnz" (UniqueName: "kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz") pod "cilium-f79k4" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.656266 kubelet[3183]: E0317 17:25:49.656082 3183 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.656266 kubelet[3183]: E0317 17:25:49.656131 3183 projected.go:194] Error preparing data for projected volume kube-api-access-t5x9m for pod kube-system/kube-proxy-hk4zq: failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:49.656266 kubelet[3183]: E0317 17:25:49.656209 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-api-access-t5x9m podName:38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d nodeName:}" failed. No retries permitted until 2025-03-17 17:25:50.156182403 +0000 UTC m=+6.092539056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t5x9m" (UniqueName: "kubernetes.io/projected/38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d-kube-api-access-t5x9m") pod "kube-proxy-hk4zq" (UID: "38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d") : failed to sync configmap cache: timed out waiting for the condition Mar 17 17:25:50.155508 containerd[1930]: time="2025-03-17T17:25:50.155456602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mzv8h,Uid:3236e17d-ea36-411f-ad17-3e48a99fb57b,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:50.220822 containerd[1930]: time="2025-03-17T17:25:50.220460410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:50.220822 containerd[1930]: time="2025-03-17T17:25:50.220545298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:50.220822 containerd[1930]: time="2025-03-17T17:25:50.220570306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.220822 containerd[1930]: time="2025-03-17T17:25:50.220703290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.257820 systemd[1]: Started cri-containerd-65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656.scope - libcontainer container 65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656. Mar 17 17:25:50.316090 containerd[1930]: time="2025-03-17T17:25:50.316020863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mzv8h,Uid:3236e17d-ea36-411f-ad17-3e48a99fb57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\"" Mar 17 17:25:50.321521 containerd[1930]: time="2025-03-17T17:25:50.321438419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:25:50.341840 containerd[1930]: time="2025-03-17T17:25:50.341704055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk4zq,Uid:38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:50.368984 containerd[1930]: time="2025-03-17T17:25:50.368862107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f79k4,Uid:5bcd66f9-d41f-434f-8235-86715e4fcd1c,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:50.391153 containerd[1930]: time="2025-03-17T17:25:50.390676247Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:50.391153 containerd[1930]: time="2025-03-17T17:25:50.390758471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:50.391153 containerd[1930]: time="2025-03-17T17:25:50.390783659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.391588 containerd[1930]: time="2025-03-17T17:25:50.390934463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.425995 containerd[1930]: time="2025-03-17T17:25:50.425396975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:50.425995 containerd[1930]: time="2025-03-17T17:25:50.425501939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:50.425995 containerd[1930]: time="2025-03-17T17:25:50.425538767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.425995 containerd[1930]: time="2025-03-17T17:25:50.425678219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:50.433258 systemd[1]: Started cri-containerd-6b4975a248c51ce2ada7da254c1ddaef142411b536fbeb0703f2410eaebf10cc.scope - libcontainer container 6b4975a248c51ce2ada7da254c1ddaef142411b536fbeb0703f2410eaebf10cc. 
Mar 17 17:25:50.474590 systemd[1]: Started cri-containerd-2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc.scope - libcontainer container 2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc. Mar 17 17:25:50.501017 containerd[1930]: time="2025-03-17T17:25:50.500819988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk4zq,Uid:38ea490a-fcc7-4cab-9f3a-90a07c7a2e8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b4975a248c51ce2ada7da254c1ddaef142411b536fbeb0703f2410eaebf10cc\"" Mar 17 17:25:50.514564 containerd[1930]: time="2025-03-17T17:25:50.514374372Z" level=info msg="CreateContainer within sandbox \"6b4975a248c51ce2ada7da254c1ddaef142411b536fbeb0703f2410eaebf10cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:25:50.548609 containerd[1930]: time="2025-03-17T17:25:50.548436912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f79k4,Uid:5bcd66f9-d41f-434f-8235-86715e4fcd1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\"" Mar 17 17:25:50.566467 containerd[1930]: time="2025-03-17T17:25:50.566407596Z" level=info msg="CreateContainer within sandbox \"6b4975a248c51ce2ada7da254c1ddaef142411b536fbeb0703f2410eaebf10cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b30d7a09b38530bb8ef03ecad39358949198dca599ac9a1c507077b0d3fb9cc\"" Mar 17 17:25:50.567667 containerd[1930]: time="2025-03-17T17:25:50.567622380Z" level=info msg="StartContainer for \"1b30d7a09b38530bb8ef03ecad39358949198dca599ac9a1c507077b0d3fb9cc\"" Mar 17 17:25:50.627382 systemd[1]: Started cri-containerd-1b30d7a09b38530bb8ef03ecad39358949198dca599ac9a1c507077b0d3fb9cc.scope - libcontainer container 1b30d7a09b38530bb8ef03ecad39358949198dca599ac9a1c507077b0d3fb9cc. 
Mar 17 17:25:50.692025 containerd[1930]: time="2025-03-17T17:25:50.691693824Z" level=info msg="StartContainer for \"1b30d7a09b38530bb8ef03ecad39358949198dca599ac9a1c507077b0d3fb9cc\" returns successfully" Mar 17 17:25:51.460652 kubelet[3183]: I0317 17:25:51.460208 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hk4zq" podStartSLOduration=3.460185456 podStartE2EDuration="3.460185456s" podCreationTimestamp="2025-03-17 17:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:51.460139064 +0000 UTC m=+7.396495729" watchObservedRunningTime="2025-03-17 17:25:51.460185456 +0000 UTC m=+7.396542097" Mar 17 17:25:51.970811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17860667.mount: Deactivated successfully. Mar 17 17:25:54.484043 containerd[1930]: time="2025-03-17T17:25:54.483963927Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:54.485199 containerd[1930]: time="2025-03-17T17:25:54.485108199Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:25:54.486513 containerd[1930]: time="2025-03-17T17:25:54.486440619Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:54.489340 containerd[1930]: time="2025-03-17T17:25:54.489289983Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo 
tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.167767192s" Mar 17 17:25:54.489665 containerd[1930]: time="2025-03-17T17:25:54.489513951Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:25:54.492557 containerd[1930]: time="2025-03-17T17:25:54.492397851Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:25:54.496805 containerd[1930]: time="2025-03-17T17:25:54.496733967Z" level=info msg="CreateContainer within sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:25:54.524254 containerd[1930]: time="2025-03-17T17:25:54.523663072Z" level=info msg="CreateContainer within sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\"" Mar 17 17:25:54.526810 containerd[1930]: time="2025-03-17T17:25:54.525210268Z" level=info msg="StartContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\"" Mar 17 17:25:54.526668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259023402.mount: Deactivated successfully. Mar 17 17:25:54.580272 systemd[1]: run-containerd-runc-k8s.io-0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8-runc.Am6FEf.mount: Deactivated successfully. 
Mar 17 17:25:54.600239 systemd[1]: Started cri-containerd-0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8.scope - libcontainer container 0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8. Mar 17 17:25:54.671235 containerd[1930]: time="2025-03-17T17:25:54.671048368Z" level=info msg="StartContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" returns successfully" Mar 17 17:25:56.724346 kubelet[3183]: I0317 17:25:56.724207 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mzv8h" podStartSLOduration=4.551363498 podStartE2EDuration="8.72418083s" podCreationTimestamp="2025-03-17 17:25:48 +0000 UTC" firstStartedPulling="2025-03-17 17:25:50.318304559 +0000 UTC m=+6.254661212" lastFinishedPulling="2025-03-17 17:25:54.491121891 +0000 UTC m=+10.427478544" observedRunningTime="2025-03-17 17:25:55.589460177 +0000 UTC m=+11.525816842" watchObservedRunningTime="2025-03-17 17:25:56.72418083 +0000 UTC m=+12.660537495" Mar 17 17:26:02.498586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589576773.mount: Deactivated successfully. 
Mar 17 17:26:05.076010 containerd[1930]: time="2025-03-17T17:26:05.075454752Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:05.077900 containerd[1930]: time="2025-03-17T17:26:05.077846316Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:26:05.080534 containerd[1930]: time="2025-03-17T17:26:05.080473212Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:26:05.085581 containerd[1930]: time="2025-03-17T17:26:05.085519740Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.593008525s" Mar 17 17:26:05.085786 containerd[1930]: time="2025-03-17T17:26:05.085584120Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:26:05.090279 containerd[1930]: time="2025-03-17T17:26:05.090219936Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:26:05.114619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3949791575.mount: Deactivated successfully. 
Mar 17 17:26:05.136264 containerd[1930]: time="2025-03-17T17:26:05.136185120Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\"" Mar 17 17:26:05.137467 containerd[1930]: time="2025-03-17T17:26:05.137252820Z" level=info msg="StartContainer for \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\"" Mar 17 17:26:05.189884 systemd[1]: run-containerd-runc-k8s.io-4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb-runc.dX4bF0.mount: Deactivated successfully. Mar 17 17:26:05.205299 systemd[1]: Started cri-containerd-4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb.scope - libcontainer container 4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb. Mar 17 17:26:05.252464 containerd[1930]: time="2025-03-17T17:26:05.252391705Z" level=info msg="StartContainer for \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\" returns successfully" Mar 17 17:26:05.269628 systemd[1]: cri-containerd-4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb.scope: Deactivated successfully. Mar 17 17:26:06.109315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb-rootfs.mount: Deactivated successfully. 
Mar 17 17:26:06.390877 containerd[1930]: time="2025-03-17T17:26:06.390529466Z" level=info msg="shim disconnected" id=4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb namespace=k8s.io Mar 17 17:26:06.390877 containerd[1930]: time="2025-03-17T17:26:06.390640958Z" level=warning msg="cleaning up after shim disconnected" id=4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb namespace=k8s.io Mar 17 17:26:06.390877 containerd[1930]: time="2025-03-17T17:26:06.390660062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:06.498542 containerd[1930]: time="2025-03-17T17:26:06.498277443Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:26:06.523460 containerd[1930]: time="2025-03-17T17:26:06.523396263Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\"" Mar 17 17:26:06.526723 containerd[1930]: time="2025-03-17T17:26:06.524834151Z" level=info msg="StartContainer for \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\"" Mar 17 17:26:06.590509 systemd[1]: Started cri-containerd-19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4.scope - libcontainer container 19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4. Mar 17 17:26:06.640037 containerd[1930]: time="2025-03-17T17:26:06.638731036Z" level=info msg="StartContainer for \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\" returns successfully" Mar 17 17:26:06.672192 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:26:06.672713 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 17 17:26:06.672916 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:26:06.684650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:26:06.685852 systemd[1]: cri-containerd-19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4.scope: Deactivated successfully. Mar 17 17:26:06.734872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:26:06.749299 containerd[1930]: time="2025-03-17T17:26:06.749218900Z" level=info msg="shim disconnected" id=19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4 namespace=k8s.io Mar 17 17:26:06.750177 containerd[1930]: time="2025-03-17T17:26:06.749296480Z" level=warning msg="cleaning up after shim disconnected" id=19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4 namespace=k8s.io Mar 17 17:26:06.750177 containerd[1930]: time="2025-03-17T17:26:06.749320648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:07.108839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4-rootfs.mount: Deactivated successfully. Mar 17 17:26:07.518072 containerd[1930]: time="2025-03-17T17:26:07.516896680Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:26:07.547334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32627846.mount: Deactivated successfully. 
Mar 17 17:26:07.549252 containerd[1930]: time="2025-03-17T17:26:07.549198088Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\"" Mar 17 17:26:07.553193 containerd[1930]: time="2025-03-17T17:26:07.553044400Z" level=info msg="StartContainer for \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\"" Mar 17 17:26:07.613914 systemd[1]: Started cri-containerd-611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a.scope - libcontainer container 611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a. Mar 17 17:26:07.675784 containerd[1930]: time="2025-03-17T17:26:07.675697397Z" level=info msg="StartContainer for \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\" returns successfully" Mar 17 17:26:07.683595 systemd[1]: cri-containerd-611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a.scope: Deactivated successfully. Mar 17 17:26:07.731235 containerd[1930]: time="2025-03-17T17:26:07.731091797Z" level=info msg="shim disconnected" id=611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a namespace=k8s.io Mar 17 17:26:07.731235 containerd[1930]: time="2025-03-17T17:26:07.731164613Z" level=warning msg="cleaning up after shim disconnected" id=611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a namespace=k8s.io Mar 17 17:26:07.731235 containerd[1930]: time="2025-03-17T17:26:07.731183477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:08.109549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a-rootfs.mount: Deactivated successfully. 
Mar 17 17:26:08.519519 containerd[1930]: time="2025-03-17T17:26:08.519073169Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:26:08.543382 containerd[1930]: time="2025-03-17T17:26:08.543313757Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\"" Mar 17 17:26:08.547448 containerd[1930]: time="2025-03-17T17:26:08.544707401Z" level=info msg="StartContainer for \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\"" Mar 17 17:26:08.605283 systemd[1]: Started cri-containerd-636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233.scope - libcontainer container 636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233. Mar 17 17:26:08.652303 systemd[1]: cri-containerd-636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233.scope: Deactivated successfully. 
Mar 17 17:26:08.661809 containerd[1930]: time="2025-03-17T17:26:08.661741182Z" level=info msg="StartContainer for \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\" returns successfully" Mar 17 17:26:08.699207 containerd[1930]: time="2025-03-17T17:26:08.699132150Z" level=info msg="shim disconnected" id=636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233 namespace=k8s.io Mar 17 17:26:08.699492 containerd[1930]: time="2025-03-17T17:26:08.699445350Z" level=warning msg="cleaning up after shim disconnected" id=636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233 namespace=k8s.io Mar 17 17:26:08.699596 containerd[1930]: time="2025-03-17T17:26:08.699570714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:26:09.109732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233-rootfs.mount: Deactivated successfully. Mar 17 17:26:09.528200 containerd[1930]: time="2025-03-17T17:26:09.528129834Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:26:09.563291 containerd[1930]: time="2025-03-17T17:26:09.563227734Z" level=info msg="CreateContainer within sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\"" Mar 17 17:26:09.564380 containerd[1930]: time="2025-03-17T17:26:09.564267510Z" level=info msg="StartContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\"" Mar 17 17:26:09.622245 systemd[1]: Started cri-containerd-8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385.scope - libcontainer container 8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385. 
Mar 17 17:26:09.679574 containerd[1930]: time="2025-03-17T17:26:09.679515787Z" level=info msg="StartContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" returns successfully" Mar 17 17:26:09.813685 kubelet[3183]: I0317 17:26:09.811696 3183 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 17:26:09.886309 systemd[1]: Created slice kubepods-burstable-pod589bdfde_07e3_4b9d_9325_21992d76eb1d.slice - libcontainer container kubepods-burstable-pod589bdfde_07e3_4b9d_9325_21992d76eb1d.slice. Mar 17 17:26:09.911093 systemd[1]: Created slice kubepods-burstable-poda9e1ac25_091b_4161_a703_d57e89c8a0a8.slice - libcontainer container kubepods-burstable-poda9e1ac25_091b_4161_a703_d57e89c8a0a8.slice. Mar 17 17:26:09.917832 kubelet[3183]: I0317 17:26:09.917553 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/589bdfde-07e3-4b9d-9325-21992d76eb1d-config-volume\") pod \"coredns-668d6bf9bc-mrwg4\" (UID: \"589bdfde-07e3-4b9d-9325-21992d76eb1d\") " pod="kube-system/coredns-668d6bf9bc-mrwg4" Mar 17 17:26:09.917832 kubelet[3183]: I0317 17:26:09.917678 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6l55\" (UniqueName: \"kubernetes.io/projected/589bdfde-07e3-4b9d-9325-21992d76eb1d-kube-api-access-q6l55\") pod \"coredns-668d6bf9bc-mrwg4\" (UID: \"589bdfde-07e3-4b9d-9325-21992d76eb1d\") " pod="kube-system/coredns-668d6bf9bc-mrwg4" Mar 17 17:26:09.917832 kubelet[3183]: I0317 17:26:09.917768 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9e1ac25-091b-4161-a703-d57e89c8a0a8-config-volume\") pod \"coredns-668d6bf9bc-dtwkt\" (UID: \"a9e1ac25-091b-4161-a703-d57e89c8a0a8\") " pod="kube-system/coredns-668d6bf9bc-dtwkt" Mar 17 17:26:09.918193 
kubelet[3183]: I0317 17:26:09.917853 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9jv\" (UniqueName: \"kubernetes.io/projected/a9e1ac25-091b-4161-a703-d57e89c8a0a8-kube-api-access-kz9jv\") pod \"coredns-668d6bf9bc-dtwkt\" (UID: \"a9e1ac25-091b-4161-a703-d57e89c8a0a8\") " pod="kube-system/coredns-668d6bf9bc-dtwkt" Mar 17 17:26:10.197114 containerd[1930]: time="2025-03-17T17:26:10.196905233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrwg4,Uid:589bdfde-07e3-4b9d-9325-21992d76eb1d,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:10.240055 containerd[1930]: time="2025-03-17T17:26:10.232254186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dtwkt,Uid:a9e1ac25-091b-4161-a703-d57e89c8a0a8,Namespace:kube-system,Attempt:0,}" Mar 17 17:26:12.541852 (udev-worker)[4251]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:12.545301 (udev-worker)[4284]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:12.545305 systemd-networkd[1843]: cilium_host: Link UP Mar 17 17:26:12.545616 systemd-networkd[1843]: cilium_net: Link UP Mar 17 17:26:12.547058 systemd-networkd[1843]: cilium_net: Gained carrier Mar 17 17:26:12.547487 systemd-networkd[1843]: cilium_host: Gained carrier Mar 17 17:26:12.557097 systemd-networkd[1843]: cilium_host: Gained IPv6LL Mar 17 17:26:12.723386 (udev-worker)[4297]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:26:12.735075 systemd-networkd[1843]: cilium_vxlan: Link UP Mar 17 17:26:12.735094 systemd-networkd[1843]: cilium_vxlan: Gained carrier Mar 17 17:26:12.992381 systemd-networkd[1843]: cilium_net: Gained IPv6LL Mar 17 17:26:13.223275 kernel: NET: Registered PF_ALG protocol family Mar 17 17:26:13.944177 systemd-networkd[1843]: cilium_vxlan: Gained IPv6LL Mar 17 17:26:14.507224 systemd-networkd[1843]: lxc_health: Link UP Mar 17 17:26:14.516414 (udev-worker)[4298]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:26:14.521000 systemd-networkd[1843]: lxc_health: Gained carrier Mar 17 17:26:14.861899 systemd-networkd[1843]: lxc081e70cee4b1: Link UP Mar 17 17:26:14.870695 kernel: eth0: renamed from tmp7c2b1 Mar 17 17:26:14.876627 systemd-networkd[1843]: lxc081e70cee4b1: Gained carrier Mar 17 17:26:14.912004 systemd-networkd[1843]: lxce3339ca07697: Link UP Mar 17 17:26:14.929975 kernel: eth0: renamed from tmp4548a Mar 17 17:26:14.938500 systemd-networkd[1843]: lxce3339ca07697: Gained carrier Mar 17 17:26:16.056268 systemd-networkd[1843]: lxc_health: Gained IPv6LL Mar 17 17:26:16.428411 kubelet[3183]: I0317 17:26:16.427780 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f79k4" podStartSLOduration=13.891801384 podStartE2EDuration="28.427752504s" podCreationTimestamp="2025-03-17 17:25:48 +0000 UTC" firstStartedPulling="2025-03-17 17:25:50.550533 +0000 UTC m=+6.486889653" lastFinishedPulling="2025-03-17 17:26:05.08648412 +0000 UTC m=+21.022840773" observedRunningTime="2025-03-17 17:26:10.566076475 +0000 UTC m=+26.502433152" watchObservedRunningTime="2025-03-17 17:26:16.427752504 +0000 UTC m=+32.364109157" Mar 17 17:26:16.504226 systemd-networkd[1843]: lxce3339ca07697: Gained IPv6LL Mar 17 17:26:16.952226 systemd-networkd[1843]: lxc081e70cee4b1: Gained IPv6LL Mar 17 17:26:19.621247 ntpd[1903]: Listen normally on 7 cilium_host 192.168.0.147:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 
ntpd[1903]: Listen normally on 7 cilium_host 192.168.0.147:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 8 cilium_net [fe80::dcf4:1aff:fed1:8d9d%4]:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 9 cilium_host [fe80::2ce8:d8ff:fe0c:72b6%5]:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 10 cilium_vxlan [fe80::8c14:eff:fe61:c516%6]:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 11 lxc_health [fe80::7c4e:bbff:fef3:be8d%8]:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 12 lxc081e70cee4b1 [fe80::cc31:acff:fe33:dcc5%10]:123 Mar 17 17:26:19.623264 ntpd[1903]: 17 Mar 17:26:19 ntpd[1903]: Listen normally on 13 lxce3339ca07697 [fe80::1024:1aff:fe26:1728%12]:123 Mar 17 17:26:19.621370 ntpd[1903]: Listen normally on 8 cilium_net [fe80::dcf4:1aff:fed1:8d9d%4]:123 Mar 17 17:26:19.621448 ntpd[1903]: Listen normally on 9 cilium_host [fe80::2ce8:d8ff:fe0c:72b6%5]:123 Mar 17 17:26:19.621522 ntpd[1903]: Listen normally on 10 cilium_vxlan [fe80::8c14:eff:fe61:c516%6]:123 Mar 17 17:26:19.621588 ntpd[1903]: Listen normally on 11 lxc_health [fe80::7c4e:bbff:fef3:be8d%8]:123 Mar 17 17:26:19.621654 ntpd[1903]: Listen normally on 12 lxc081e70cee4b1 [fe80::cc31:acff:fe33:dcc5%10]:123 Mar 17 17:26:19.621720 ntpd[1903]: Listen normally on 13 lxce3339ca07697 [fe80::1024:1aff:fe26:1728%12]:123 Mar 17 17:26:23.352891 containerd[1930]: time="2025-03-17T17:26:23.350781895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:23.352891 containerd[1930]: time="2025-03-17T17:26:23.350933791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:23.352891 containerd[1930]: time="2025-03-17T17:26:23.352636267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:23.359312 containerd[1930]: time="2025-03-17T17:26:23.359165095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:23.426253 systemd[1]: Started cri-containerd-7c2b1c8d98b1868b999ae247cc5a8f96c4722827044f8c0b37d103a95feff129.scope - libcontainer container 7c2b1c8d98b1868b999ae247cc5a8f96c4722827044f8c0b37d103a95feff129. Mar 17 17:26:23.537989 containerd[1930]: time="2025-03-17T17:26:23.537772112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:26:23.540506 containerd[1930]: time="2025-03-17T17:26:23.537886616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:26:23.541005 containerd[1930]: time="2025-03-17T17:26:23.540432344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:23.541150 containerd[1930]: time="2025-03-17T17:26:23.540961448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:26:23.608565 containerd[1930]: time="2025-03-17T17:26:23.608124788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mrwg4,Uid:589bdfde-07e3-4b9d-9325-21992d76eb1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c2b1c8d98b1868b999ae247cc5a8f96c4722827044f8c0b37d103a95feff129\"" Mar 17 17:26:23.611306 systemd[1]: Started cri-containerd-4548addc8026f187efb36d5dbb49f58bb998d9946839804ba84e0cc9db87e208.scope - libcontainer container 4548addc8026f187efb36d5dbb49f58bb998d9946839804ba84e0cc9db87e208. Mar 17 17:26:23.620971 containerd[1930]: time="2025-03-17T17:26:23.620483492Z" level=info msg="CreateContainer within sandbox \"7c2b1c8d98b1868b999ae247cc5a8f96c4722827044f8c0b37d103a95feff129\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:26:23.652481 containerd[1930]: time="2025-03-17T17:26:23.652396208Z" level=info msg="CreateContainer within sandbox \"7c2b1c8d98b1868b999ae247cc5a8f96c4722827044f8c0b37d103a95feff129\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1254b502a99422af74fe57bdc552e3d2aa3830f64912d1ec01baa5bb35bc4f42\"" Mar 17 17:26:23.657501 containerd[1930]: time="2025-03-17T17:26:23.654501908Z" level=info msg="StartContainer for \"1254b502a99422af74fe57bdc552e3d2aa3830f64912d1ec01baa5bb35bc4f42\"" Mar 17 17:26:23.709447 systemd[1]: Started sshd@7-172.31.21.92:22-139.178.68.195:37022.service - OpenSSH per-connection server daemon (139.178.68.195:37022). Mar 17 17:26:23.746285 systemd[1]: Started cri-containerd-1254b502a99422af74fe57bdc552e3d2aa3830f64912d1ec01baa5bb35bc4f42.scope - libcontainer container 1254b502a99422af74fe57bdc552e3d2aa3830f64912d1ec01baa5bb35bc4f42. 
Mar 17 17:26:23.827985 containerd[1930]: time="2025-03-17T17:26:23.826579449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dtwkt,Uid:a9e1ac25-091b-4161-a703-d57e89c8a0a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4548addc8026f187efb36d5dbb49f58bb998d9946839804ba84e0cc9db87e208\"" Mar 17 17:26:23.841031 containerd[1930]: time="2025-03-17T17:26:23.840535389Z" level=info msg="CreateContainer within sandbox \"4548addc8026f187efb36d5dbb49f58bb998d9946839804ba84e0cc9db87e208\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:26:23.885548 containerd[1930]: time="2025-03-17T17:26:23.883719669Z" level=info msg="StartContainer for \"1254b502a99422af74fe57bdc552e3d2aa3830f64912d1ec01baa5bb35bc4f42\" returns successfully" Mar 17 17:26:23.889245 containerd[1930]: time="2025-03-17T17:26:23.889168605Z" level=info msg="CreateContainer within sandbox \"4548addc8026f187efb36d5dbb49f58bb998d9946839804ba84e0cc9db87e208\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"617f7743849fc098442e0291c38a6a549b88d1fbf6a9f45d6eaae6938ed762ba\"" Mar 17 17:26:23.890747 containerd[1930]: time="2025-03-17T17:26:23.890688969Z" level=info msg="StartContainer for \"617f7743849fc098442e0291c38a6a549b88d1fbf6a9f45d6eaae6938ed762ba\"" Mar 17 17:26:23.975566 sshd[4759]: Accepted publickey for core from 139.178.68.195 port 37022 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:23.976239 systemd[1]: Started cri-containerd-617f7743849fc098442e0291c38a6a549b88d1fbf6a9f45d6eaae6938ed762ba.scope - libcontainer container 617f7743849fc098442e0291c38a6a549b88d1fbf6a9f45d6eaae6938ed762ba. Mar 17 17:26:23.979860 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:23.997991 systemd-logind[1909]: New session 8 of user core. Mar 17 17:26:24.000280 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 17 17:26:24.105063 containerd[1930]: time="2025-03-17T17:26:24.103532502Z" level=info msg="StartContainer for \"617f7743849fc098442e0291c38a6a549b88d1fbf6a9f45d6eaae6938ed762ba\" returns successfully" Mar 17 17:26:24.310246 sshd[4817]: Connection closed by 139.178.68.195 port 37022 Mar 17 17:26:24.311272 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:24.317413 systemd[1]: sshd@7-172.31.21.92:22-139.178.68.195:37022.service: Deactivated successfully. Mar 17 17:26:24.321450 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:26:24.323094 systemd-logind[1909]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:26:24.325474 systemd-logind[1909]: Removed session 8. Mar 17 17:26:24.597730 kubelet[3183]: I0317 17:26:24.597539 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dtwkt" podStartSLOduration=36.597517617 podStartE2EDuration="36.597517617s" podCreationTimestamp="2025-03-17 17:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:26:24.597185121 +0000 UTC m=+40.533541810" watchObservedRunningTime="2025-03-17 17:26:24.597517617 +0000 UTC m=+40.533874270" Mar 17 17:26:24.623825 kubelet[3183]: I0317 17:26:24.623734 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mrwg4" podStartSLOduration=36.623709945 podStartE2EDuration="36.623709945s" podCreationTimestamp="2025-03-17 17:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:26:24.621812685 +0000 UTC m=+40.558169530" watchObservedRunningTime="2025-03-17 17:26:24.623709945 +0000 UTC m=+40.560066598" Mar 17 17:26:29.351500 systemd[1]: Started sshd@8-172.31.21.92:22-139.178.68.195:60066.service - OpenSSH per-connection server daemon 
(139.178.68.195:60066). Mar 17 17:26:29.541836 sshd[4856]: Accepted publickey for core from 139.178.68.195 port 60066 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:29.544900 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:29.553216 systemd-logind[1909]: New session 9 of user core. Mar 17 17:26:29.561279 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:26:29.811030 sshd[4858]: Connection closed by 139.178.68.195 port 60066 Mar 17 17:26:29.812278 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:29.816745 systemd[1]: sshd@8-172.31.21.92:22-139.178.68.195:60066.service: Deactivated successfully. Mar 17 17:26:29.821401 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:26:29.826522 systemd-logind[1909]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:26:29.828398 systemd-logind[1909]: Removed session 9. Mar 17 17:26:34.859407 systemd[1]: Started sshd@9-172.31.21.92:22-139.178.68.195:60068.service - OpenSSH per-connection server daemon (139.178.68.195:60068). Mar 17 17:26:35.051360 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 60068 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:35.053854 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:35.061075 systemd-logind[1909]: New session 10 of user core. Mar 17 17:26:35.071205 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:26:35.313000 sshd[4872]: Connection closed by 139.178.68.195 port 60068 Mar 17 17:26:35.313869 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:35.319892 systemd[1]: sshd@9-172.31.21.92:22-139.178.68.195:60068.service: Deactivated successfully. Mar 17 17:26:35.325351 systemd[1]: session-10.scope: Deactivated successfully. 
Mar 17 17:26:35.328366 systemd-logind[1909]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:26:35.330475 systemd-logind[1909]: Removed session 10. Mar 17 17:26:40.358464 systemd[1]: Started sshd@10-172.31.21.92:22-139.178.68.195:46928.service - OpenSSH per-connection server daemon (139.178.68.195:46928). Mar 17 17:26:40.552584 sshd[4884]: Accepted publickey for core from 139.178.68.195 port 46928 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:40.555113 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:40.564237 systemd-logind[1909]: New session 11 of user core. Mar 17 17:26:40.569237 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:26:40.813588 sshd[4886]: Connection closed by 139.178.68.195 port 46928 Mar 17 17:26:40.814475 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:40.819808 systemd[1]: sshd@10-172.31.21.92:22-139.178.68.195:46928.service: Deactivated successfully. Mar 17 17:26:40.820541 systemd-logind[1909]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:26:40.823608 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:26:40.830320 systemd-logind[1909]: Removed session 11. Mar 17 17:26:45.864133 systemd[1]: Started sshd@11-172.31.21.92:22-139.178.68.195:46870.service - OpenSSH per-connection server daemon (139.178.68.195:46870). Mar 17 17:26:46.043807 sshd[4899]: Accepted publickey for core from 139.178.68.195 port 46870 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:46.046265 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:46.054072 systemd-logind[1909]: New session 12 of user core. Mar 17 17:26:46.063208 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 17 17:26:46.312016 sshd[4901]: Connection closed by 139.178.68.195 port 46870 Mar 17 17:26:46.312823 sshd-session[4899]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:46.319433 systemd[1]: sshd@11-172.31.21.92:22-139.178.68.195:46870.service: Deactivated successfully. Mar 17 17:26:46.324075 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:26:46.325731 systemd-logind[1909]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:26:46.327789 systemd-logind[1909]: Removed session 12. Mar 17 17:26:46.350470 systemd[1]: Started sshd@12-172.31.21.92:22-139.178.68.195:46880.service - OpenSSH per-connection server daemon (139.178.68.195:46880). Mar 17 17:26:46.534820 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 46880 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:46.537351 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:46.544689 systemd-logind[1909]: New session 13 of user core. Mar 17 17:26:46.549208 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:26:46.871840 sshd[4915]: Connection closed by 139.178.68.195 port 46880 Mar 17 17:26:46.872968 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:46.880652 systemd-logind[1909]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:26:46.882429 systemd[1]: sshd@12-172.31.21.92:22-139.178.68.195:46880.service: Deactivated successfully. Mar 17 17:26:46.893773 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:26:46.915380 systemd-logind[1909]: Removed session 13. Mar 17 17:26:46.925384 systemd[1]: Started sshd@13-172.31.21.92:22-139.178.68.195:46886.service - OpenSSH per-connection server daemon (139.178.68.195:46886). 
Mar 17 17:26:47.128811 sshd[4923]: Accepted publickey for core from 139.178.68.195 port 46886 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:47.131341 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:47.140013 systemd-logind[1909]: New session 14 of user core. Mar 17 17:26:47.145385 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:26:47.393195 sshd[4925]: Connection closed by 139.178.68.195 port 46886 Mar 17 17:26:47.394263 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:47.404167 systemd[1]: sshd@13-172.31.21.92:22-139.178.68.195:46886.service: Deactivated successfully. Mar 17 17:26:47.411689 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:26:47.414344 systemd-logind[1909]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:26:47.416346 systemd-logind[1909]: Removed session 14. Mar 17 17:26:52.436479 systemd[1]: Started sshd@14-172.31.21.92:22-139.178.68.195:46898.service - OpenSSH per-connection server daemon (139.178.68.195:46898). Mar 17 17:26:52.629332 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 46898 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:52.631772 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:52.640132 systemd-logind[1909]: New session 15 of user core. Mar 17 17:26:52.645262 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:26:52.894256 sshd[4941]: Connection closed by 139.178.68.195 port 46898 Mar 17 17:26:52.895320 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:52.901832 systemd[1]: sshd@14-172.31.21.92:22-139.178.68.195:46898.service: Deactivated successfully. Mar 17 17:26:52.906008 systemd[1]: session-15.scope: Deactivated successfully. 
Mar 17 17:26:52.908716 systemd-logind[1909]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:26:52.910667 systemd-logind[1909]: Removed session 15. Mar 17 17:26:57.931514 systemd[1]: Started sshd@15-172.31.21.92:22-139.178.68.195:45992.service - OpenSSH per-connection server daemon (139.178.68.195:45992). Mar 17 17:26:58.123785 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 45992 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:58.126227 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:58.134326 systemd-logind[1909]: New session 16 of user core. Mar 17 17:26:58.143225 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:26:58.395411 sshd[4955]: Connection closed by 139.178.68.195 port 45992 Mar 17 17:26:58.396302 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:58.402171 systemd-logind[1909]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:26:58.402490 systemd[1]: sshd@15-172.31.21.92:22-139.178.68.195:45992.service: Deactivated successfully. Mar 17 17:26:58.406311 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:26:58.410662 systemd-logind[1909]: Removed session 16. Mar 17 17:27:03.437476 systemd[1]: Started sshd@16-172.31.21.92:22-139.178.68.195:45996.service - OpenSSH per-connection server daemon (139.178.68.195:45996). Mar 17 17:27:03.621991 sshd[4966]: Accepted publickey for core from 139.178.68.195 port 45996 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:03.624419 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:03.632465 systemd-logind[1909]: New session 17 of user core. Mar 17 17:27:03.640191 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 17 17:27:03.886993 sshd[4968]: Connection closed by 139.178.68.195 port 45996 Mar 17 17:27:03.887905 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:03.893220 systemd-logind[1909]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:27:03.894507 systemd[1]: sshd@16-172.31.21.92:22-139.178.68.195:45996.service: Deactivated successfully. Mar 17 17:27:03.898558 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:27:03.903380 systemd-logind[1909]: Removed session 17. Mar 17 17:27:03.928457 systemd[1]: Started sshd@17-172.31.21.92:22-139.178.68.195:46000.service - OpenSSH per-connection server daemon (139.178.68.195:46000). Mar 17 17:27:04.108872 sshd[4979]: Accepted publickey for core from 139.178.68.195 port 46000 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:04.111353 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:04.119933 systemd-logind[1909]: New session 18 of user core. Mar 17 17:27:04.128382 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:27:04.415916 sshd[4981]: Connection closed by 139.178.68.195 port 46000 Mar 17 17:27:04.415204 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:04.420730 systemd-logind[1909]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:27:04.421502 systemd[1]: sshd@17-172.31.21.92:22-139.178.68.195:46000.service: Deactivated successfully. Mar 17 17:27:04.426133 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:27:04.430569 systemd-logind[1909]: Removed session 18. Mar 17 17:27:04.453470 systemd[1]: Started sshd@18-172.31.21.92:22-139.178.68.195:46012.service - OpenSSH per-connection server daemon (139.178.68.195:46012). 
Mar 17 17:27:04.650282 sshd[4990]: Accepted publickey for core from 139.178.68.195 port 46012 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:04.654909 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:04.663034 systemd-logind[1909]: New session 19 of user core. Mar 17 17:27:04.666465 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:27:05.816202 sshd[4992]: Connection closed by 139.178.68.195 port 46012 Mar 17 17:27:05.817134 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:05.826699 systemd[1]: sshd@18-172.31.21.92:22-139.178.68.195:46012.service: Deactivated successfully. Mar 17 17:27:05.834623 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:27:05.840118 systemd-logind[1909]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:27:05.867415 systemd[1]: Started sshd@19-172.31.21.92:22-139.178.68.195:58942.service - OpenSSH per-connection server daemon (139.178.68.195:58942). Mar 17 17:27:05.869928 systemd-logind[1909]: Removed session 19. Mar 17 17:27:06.065975 sshd[5009]: Accepted publickey for core from 139.178.68.195 port 58942 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:06.068438 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:06.078598 systemd-logind[1909]: New session 20 of user core. Mar 17 17:27:06.086202 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:27:06.613070 sshd[5011]: Connection closed by 139.178.68.195 port 58942 Mar 17 17:27:06.615967 sshd-session[5009]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:06.622202 systemd[1]: sshd@19-172.31.21.92:22-139.178.68.195:58942.service: Deactivated successfully. Mar 17 17:27:06.625574 systemd[1]: session-20.scope: Deactivated successfully. 
Mar 17 17:27:06.628825 systemd-logind[1909]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:27:06.630797 systemd-logind[1909]: Removed session 20. Mar 17 17:27:06.654139 systemd[1]: Started sshd@20-172.31.21.92:22-139.178.68.195:58948.service - OpenSSH per-connection server daemon (139.178.68.195:58948). Mar 17 17:27:06.859422 sshd[5021]: Accepted publickey for core from 139.178.68.195 port 58948 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:06.861870 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:06.870585 systemd-logind[1909]: New session 21 of user core. Mar 17 17:27:06.878264 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:27:07.121298 sshd[5023]: Connection closed by 139.178.68.195 port 58948 Mar 17 17:27:07.122348 sshd-session[5021]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:07.129822 systemd[1]: sshd@20-172.31.21.92:22-139.178.68.195:58948.service: Deactivated successfully. Mar 17 17:27:07.137786 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:27:07.140606 systemd-logind[1909]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:27:07.142901 systemd-logind[1909]: Removed session 21. Mar 17 17:27:12.160434 systemd[1]: Started sshd@21-172.31.21.92:22-139.178.68.195:58962.service - OpenSSH per-connection server daemon (139.178.68.195:58962). Mar 17 17:27:12.357124 sshd[5034]: Accepted publickey for core from 139.178.68.195 port 58962 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:12.359726 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:12.368035 systemd-logind[1909]: New session 22 of user core. Mar 17 17:27:12.376209 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 17 17:27:12.615401 sshd[5036]: Connection closed by 139.178.68.195 port 58962 Mar 17 17:27:12.616265 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:12.622884 systemd[1]: sshd@21-172.31.21.92:22-139.178.68.195:58962.service: Deactivated successfully. Mar 17 17:27:12.628647 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:27:12.629966 systemd-logind[1909]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:27:12.632251 systemd-logind[1909]: Removed session 22. Mar 17 17:27:17.661507 systemd[1]: Started sshd@22-172.31.21.92:22-139.178.68.195:40474.service - OpenSSH per-connection server daemon (139.178.68.195:40474). Mar 17 17:27:17.850668 sshd[5048]: Accepted publickey for core from 139.178.68.195 port 40474 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:17.854759 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:17.864657 systemd-logind[1909]: New session 23 of user core. Mar 17 17:27:17.873196 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:27:18.134702 sshd[5050]: Connection closed by 139.178.68.195 port 40474 Mar 17 17:27:18.136184 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:18.142693 systemd[1]: sshd@22-172.31.21.92:22-139.178.68.195:40474.service: Deactivated successfully. Mar 17 17:27:18.146224 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:27:18.148337 systemd-logind[1909]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:27:18.150613 systemd-logind[1909]: Removed session 23. Mar 17 17:27:23.181801 systemd[1]: Started sshd@23-172.31.21.92:22-139.178.68.195:40480.service - OpenSSH per-connection server daemon (139.178.68.195:40480). 
Mar 17 17:27:23.370901 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 40480 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:23.373439 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:23.382237 systemd-logind[1909]: New session 24 of user core. Mar 17 17:27:23.391235 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:27:23.637312 sshd[5066]: Connection closed by 139.178.68.195 port 40480 Mar 17 17:27:23.637194 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:23.643619 systemd[1]: sshd@23-172.31.21.92:22-139.178.68.195:40480.service: Deactivated successfully. Mar 17 17:27:23.645231 systemd-logind[1909]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:27:23.650213 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:27:23.656009 systemd-logind[1909]: Removed session 24. Mar 17 17:27:28.680416 systemd[1]: Started sshd@24-172.31.21.92:22-139.178.68.195:42154.service - OpenSSH per-connection server daemon (139.178.68.195:42154). Mar 17 17:27:28.860491 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 42154 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:28.863046 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:28.873148 systemd-logind[1909]: New session 25 of user core. Mar 17 17:27:28.880219 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:27:29.119604 sshd[5079]: Connection closed by 139.178.68.195 port 42154 Mar 17 17:27:29.120472 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:29.126359 systemd[1]: sshd@24-172.31.21.92:22-139.178.68.195:42154.service: Deactivated successfully. Mar 17 17:27:29.130789 systemd[1]: session-25.scope: Deactivated successfully. 
Mar 17 17:27:29.132869 systemd-logind[1909]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:27:29.135035 systemd-logind[1909]: Removed session 25. Mar 17 17:27:29.156440 systemd[1]: Started sshd@25-172.31.21.92:22-139.178.68.195:42166.service - OpenSSH per-connection server daemon (139.178.68.195:42166). Mar 17 17:27:29.349187 sshd[5089]: Accepted publickey for core from 139.178.68.195 port 42166 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:29.352103 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:29.359465 systemd-logind[1909]: New session 26 of user core. Mar 17 17:27:29.370275 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:27:30.988130 containerd[1930]: time="2025-03-17T17:27:30.988067847Z" level=info msg="StopContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" with timeout 30 (s)" Mar 17 17:27:30.993177 containerd[1930]: time="2025-03-17T17:27:30.993119091Z" level=info msg="Stop container \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" with signal terminated" Mar 17 17:27:31.022528 systemd[1]: cri-containerd-0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8.scope: Deactivated successfully. 
Mar 17 17:27:31.039272 containerd[1930]: time="2025-03-17T17:27:31.039198443Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:27:31.059156 containerd[1930]: time="2025-03-17T17:27:31.059069051Z" level=info msg="StopContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" with timeout 2 (s)" Mar 17 17:27:31.059847 containerd[1930]: time="2025-03-17T17:27:31.059762795Z" level=info msg="Stop container \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" with signal terminated" Mar 17 17:27:31.072996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8-rootfs.mount: Deactivated successfully. Mar 17 17:27:31.080996 systemd-networkd[1843]: lxc_health: Link DOWN Mar 17 17:27:31.081016 systemd-networkd[1843]: lxc_health: Lost carrier Mar 17 17:27:31.091631 containerd[1930]: time="2025-03-17T17:27:31.090525443Z" level=info msg="shim disconnected" id=0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8 namespace=k8s.io Mar 17 17:27:31.091631 containerd[1930]: time="2025-03-17T17:27:31.090615431Z" level=warning msg="cleaning up after shim disconnected" id=0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8 namespace=k8s.io Mar 17 17:27:31.091631 containerd[1930]: time="2025-03-17T17:27:31.090635939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.117394 systemd[1]: cri-containerd-8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385.scope: Deactivated successfully. Mar 17 17:27:31.117839 systemd[1]: cri-containerd-8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385.scope: Consumed 14.244s CPU time. 
Mar 17 17:27:31.139710 containerd[1930]: time="2025-03-17T17:27:31.139643591Z" level=info msg="StopContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" returns successfully" Mar 17 17:27:31.141584 containerd[1930]: time="2025-03-17T17:27:31.141313223Z" level=info msg="StopPodSandbox for \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\"" Mar 17 17:27:31.141584 containerd[1930]: time="2025-03-17T17:27:31.141386483Z" level=info msg="Container to stop \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.146497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656-shm.mount: Deactivated successfully. Mar 17 17:27:31.169071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385-rootfs.mount: Deactivated successfully. Mar 17 17:27:31.171525 systemd[1]: cri-containerd-65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656.scope: Deactivated successfully. 
Mar 17 17:27:31.183017 containerd[1930]: time="2025-03-17T17:27:31.182886276Z" level=info msg="shim disconnected" id=8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385 namespace=k8s.io Mar 17 17:27:31.183329 containerd[1930]: time="2025-03-17T17:27:31.183294300Z" level=warning msg="cleaning up after shim disconnected" id=8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385 namespace=k8s.io Mar 17 17:27:31.183476 containerd[1930]: time="2025-03-17T17:27:31.183446868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.214612 containerd[1930]: time="2025-03-17T17:27:31.214468704Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:27:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:27:31.217566 containerd[1930]: time="2025-03-17T17:27:31.217229532Z" level=info msg="shim disconnected" id=65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656 namespace=k8s.io Mar 17 17:27:31.217566 containerd[1930]: time="2025-03-17T17:27:31.217320012Z" level=warning msg="cleaning up after shim disconnected" id=65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656 namespace=k8s.io Mar 17 17:27:31.217566 containerd[1930]: time="2025-03-17T17:27:31.217339440Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.222554 containerd[1930]: time="2025-03-17T17:27:31.222413196Z" level=info msg="StopContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" returns successfully" Mar 17 17:27:31.223229 containerd[1930]: time="2025-03-17T17:27:31.223183332Z" level=info msg="StopPodSandbox for \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\"" Mar 17 17:27:31.223339 containerd[1930]: time="2025-03-17T17:27:31.223249968Z" level=info msg="Container to stop \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\" must be 
in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.223339 containerd[1930]: time="2025-03-17T17:27:31.223275684Z" level=info msg="Container to stop \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.223339 containerd[1930]: time="2025-03-17T17:27:31.223297188Z" level=info msg="Container to stop \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.223339 containerd[1930]: time="2025-03-17T17:27:31.223322520Z" level=info msg="Container to stop \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.223861 containerd[1930]: time="2025-03-17T17:27:31.223345032Z" level=info msg="Container to stop \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:31.238440 systemd[1]: cri-containerd-2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc.scope: Deactivated successfully. 
Mar 17 17:27:31.254674 containerd[1930]: time="2025-03-17T17:27:31.254597136Z" level=info msg="TearDown network for sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" successfully" Mar 17 17:27:31.255148 containerd[1930]: time="2025-03-17T17:27:31.254780244Z" level=info msg="StopPodSandbox for \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" returns successfully" Mar 17 17:27:31.301110 containerd[1930]: time="2025-03-17T17:27:31.301004040Z" level=info msg="shim disconnected" id=2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc namespace=k8s.io Mar 17 17:27:31.301110 containerd[1930]: time="2025-03-17T17:27:31.301083996Z" level=warning msg="cleaning up after shim disconnected" id=2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc namespace=k8s.io Mar 17 17:27:31.301110 containerd[1930]: time="2025-03-17T17:27:31.301104744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:31.324576 containerd[1930]: time="2025-03-17T17:27:31.324396048Z" level=info msg="TearDown network for sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" successfully" Mar 17 17:27:31.324576 containerd[1930]: time="2025-03-17T17:27:31.324443652Z" level=info msg="StopPodSandbox for \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" returns successfully" Mar 17 17:27:31.353664 kubelet[3183]: I0317 17:27:31.353084 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3236e17d-ea36-411f-ad17-3e48a99fb57b-cilium-config-path\") pod \"3236e17d-ea36-411f-ad17-3e48a99fb57b\" (UID: \"3236e17d-ea36-411f-ad17-3e48a99fb57b\") " Mar 17 17:27:31.353664 kubelet[3183]: I0317 17:27:31.353164 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6268\" (UniqueName: 
\"kubernetes.io/projected/3236e17d-ea36-411f-ad17-3e48a99fb57b-kube-api-access-f6268\") pod \"3236e17d-ea36-411f-ad17-3e48a99fb57b\" (UID: \"3236e17d-ea36-411f-ad17-3e48a99fb57b\") " Mar 17 17:27:31.362034 kubelet[3183]: I0317 17:27:31.361922 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3236e17d-ea36-411f-ad17-3e48a99fb57b-kube-api-access-f6268" (OuterVolumeSpecName: "kube-api-access-f6268") pod "3236e17d-ea36-411f-ad17-3e48a99fb57b" (UID: "3236e17d-ea36-411f-ad17-3e48a99fb57b"). InnerVolumeSpecName "kube-api-access-f6268". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:27:31.362177 kubelet[3183]: I0317 17:27:31.362086 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3236e17d-ea36-411f-ad17-3e48a99fb57b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3236e17d-ea36-411f-ad17-3e48a99fb57b" (UID: "3236e17d-ea36-411f-ad17-3e48a99fb57b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:27:31.454076 kubelet[3183]: I0317 17:27:31.454005 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-xtables-lock\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454230 kubelet[3183]: I0317 17:27:31.454082 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454230 kubelet[3183]: I0317 17:27:31.454120 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-etc-cni-netd\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454230 kubelet[3183]: I0317 17:27:31.454153 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-kernel\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454230 kubelet[3183]: I0317 17:27:31.454195 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-config-path\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454447 kubelet[3183]: I0317 17:27:31.454265 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrmnz\" 
(UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454447 kubelet[3183]: I0317 17:27:31.454304 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-bpf-maps\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454447 kubelet[3183]: I0317 17:27:31.454345 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-net\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454447 kubelet[3183]: I0317 17:27:31.454379 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-lib-modules\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454447 kubelet[3183]: I0317 17:27:31.454417 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454450 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-run\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454484 3183 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cni-path\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454522 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-cgroup\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454554 3183 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hostproc\") pod \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\" (UID: \"5bcd66f9-d41f-434f-8235-86715e4fcd1c\") " Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454619 3183 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3236e17d-ea36-411f-ad17-3e48a99fb57b-cilium-config-path\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.454691 kubelet[3183]: I0317 17:27:31.454645 3183 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6268\" (UniqueName: \"kubernetes.io/projected/3236e17d-ea36-411f-ad17-3e48a99fb57b-kube-api-access-f6268\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.455035 kubelet[3183]: I0317 17:27:31.454706 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hostproc" (OuterVolumeSpecName: "hostproc") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.455035 kubelet[3183]: I0317 17:27:31.454765 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.456982 kubelet[3183]: I0317 17:27:31.455195 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.456982 kubelet[3183]: I0317 17:27:31.455254 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.456982 kubelet[3183]: I0317 17:27:31.455290 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.457367 kubelet[3183]: I0317 17:27:31.457325 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.458763 kubelet[3183]: I0317 17:27:31.458711 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.461122 kubelet[3183]: I0317 17:27:31.458901 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cni-path" (OuterVolumeSpecName: "cni-path") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.463608 kubelet[3183]: I0317 17:27:31.458932 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.469473 kubelet[3183]: I0317 17:27:31.469415 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 17:27:31.469831 kubelet[3183]: I0317 17:27:31.469798 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 17:27:31.470109 kubelet[3183]: I0317 17:27:31.469844 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 17:27:31.470264 kubelet[3183]: I0317 17:27:31.470203 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:27:31.470695 kubelet[3183]: I0317 17:27:31.470645 3183 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz" (OuterVolumeSpecName: "kube-api-access-jrmnz") pod "5bcd66f9-d41f-434f-8235-86715e4fcd1c" (UID: "5bcd66f9-d41f-434f-8235-86715e4fcd1c"). InnerVolumeSpecName "kube-api-access-jrmnz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 17:27:31.555657 kubelet[3183]: I0317 17:27:31.555594 3183 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hostproc\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555657 kubelet[3183]: I0317 17:27:31.555653 3183 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-run\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555677 3183 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cni-path\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555699 3183 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-cgroup\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555721 3183 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-xtables-lock\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555742 3183 reconciler_common.go:299] "Volume detached for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5bcd66f9-d41f-434f-8235-86715e4fcd1c-clustermesh-secrets\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555766 3183 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-etc-cni-netd\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555787 3183 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-kernel\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555809 3183 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5bcd66f9-d41f-434f-8235-86715e4fcd1c-cilium-config-path\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.555877 kubelet[3183]: I0317 17:27:31.555830 3183 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrmnz\" (UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-kube-api-access-jrmnz\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.556291 kubelet[3183]: I0317 17:27:31.555850 3183 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-bpf-maps\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.556291 kubelet[3183]: I0317 17:27:31.555871 3183 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-host-proc-sys-net\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.556291 kubelet[3183]: I0317 17:27:31.555892 3183 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5bcd66f9-d41f-434f-8235-86715e4fcd1c-lib-modules\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.556291 kubelet[3183]: I0317 17:27:31.555912 3183 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5bcd66f9-d41f-434f-8235-86715e4fcd1c-hubble-tls\") on node \"ip-172-31-21-92\" DevicePath \"\"" Mar 17 17:27:31.769324 kubelet[3183]: I0317 17:27:31.769265 3183 scope.go:117] "RemoveContainer" containerID="0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8" Mar 17 17:27:31.776295 containerd[1930]: time="2025-03-17T17:27:31.776243091Z" level=info msg="RemoveContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\"" Mar 17 17:27:31.782646 systemd[1]: Removed slice kubepods-besteffort-pod3236e17d_ea36_411f_ad17_3e48a99fb57b.slice - libcontainer container kubepods-besteffort-pod3236e17d_ea36_411f_ad17_3e48a99fb57b.slice. Mar 17 17:27:31.790686 containerd[1930]: time="2025-03-17T17:27:31.789690927Z" level=info msg="RemoveContainer for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" returns successfully" Mar 17 17:27:31.792029 kubelet[3183]: I0317 17:27:31.791848 3183 scope.go:117] "RemoveContainer" containerID="0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8" Mar 17 17:27:31.793826 containerd[1930]: time="2025-03-17T17:27:31.793731675Z" level=error msg="ContainerStatus for \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\": not found" Mar 17 17:27:31.796599 kubelet[3183]: E0317 17:27:31.796458 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\": not found" 
containerID="0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8" Mar 17 17:27:31.796599 kubelet[3183]: I0317 17:27:31.796518 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8"} err="failed to get container status \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f44d8dc2c5b9735dffd714181d2ca4fa9bf1a4b5811f4642c661d6d0c359bf8\": not found" Mar 17 17:27:31.796812 kubelet[3183]: I0317 17:27:31.796629 3183 scope.go:117] "RemoveContainer" containerID="8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385" Mar 17 17:27:31.801072 containerd[1930]: time="2025-03-17T17:27:31.800504907Z" level=info msg="RemoveContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\"" Mar 17 17:27:31.801093 systemd[1]: Removed slice kubepods-burstable-pod5bcd66f9_d41f_434f_8235_86715e4fcd1c.slice - libcontainer container kubepods-burstable-pod5bcd66f9_d41f_434f_8235_86715e4fcd1c.slice. Mar 17 17:27:31.801319 systemd[1]: kubepods-burstable-pod5bcd66f9_d41f_434f_8235_86715e4fcd1c.slice: Consumed 14.390s CPU time. 
Mar 17 17:27:31.808188 containerd[1930]: time="2025-03-17T17:27:31.807344007Z" level=info msg="RemoveContainer for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" returns successfully" Mar 17 17:27:31.808323 kubelet[3183]: I0317 17:27:31.807686 3183 scope.go:117] "RemoveContainer" containerID="636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233" Mar 17 17:27:31.815548 containerd[1930]: time="2025-03-17T17:27:31.815501151Z" level=info msg="RemoveContainer for \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\"" Mar 17 17:27:31.823289 containerd[1930]: time="2025-03-17T17:27:31.823212483Z" level=info msg="RemoveContainer for \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\" returns successfully" Mar 17 17:27:31.823716 kubelet[3183]: I0317 17:27:31.823654 3183 scope.go:117] "RemoveContainer" containerID="611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a" Mar 17 17:27:31.828875 containerd[1930]: time="2025-03-17T17:27:31.828036939Z" level=info msg="RemoveContainer for \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\"" Mar 17 17:27:31.841375 containerd[1930]: time="2025-03-17T17:27:31.840442551Z" level=info msg="RemoveContainer for \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\" returns successfully" Mar 17 17:27:31.841827 kubelet[3183]: I0317 17:27:31.840795 3183 scope.go:117] "RemoveContainer" containerID="19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4" Mar 17 17:27:31.860107 containerd[1930]: time="2025-03-17T17:27:31.859424943Z" level=info msg="RemoveContainer for \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\"" Mar 17 17:27:31.868523 containerd[1930]: time="2025-03-17T17:27:31.868444659Z" level=info msg="RemoveContainer for \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\" returns successfully" Mar 17 17:27:31.870373 kubelet[3183]: I0317 17:27:31.870052 3183 scope.go:117] 
"RemoveContainer" containerID="4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb" Mar 17 17:27:31.877270 containerd[1930]: time="2025-03-17T17:27:31.876267783Z" level=info msg="RemoveContainer for \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\"" Mar 17 17:27:31.886461 containerd[1930]: time="2025-03-17T17:27:31.886362267Z" level=info msg="RemoveContainer for \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\" returns successfully" Mar 17 17:27:31.887750 kubelet[3183]: I0317 17:27:31.887708 3183 scope.go:117] "RemoveContainer" containerID="8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385" Mar 17 17:27:31.888536 containerd[1930]: time="2025-03-17T17:27:31.888318723Z" level=error msg="ContainerStatus for \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\": not found" Mar 17 17:27:31.890630 kubelet[3183]: E0317 17:27:31.890396 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\": not found" containerID="8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385" Mar 17 17:27:31.890630 kubelet[3183]: I0317 17:27:31.890453 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385"} err="failed to get container status \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ed6656c4654ae22bb3bf01895cbe69cffe92bb4e98fb8e0c2380c144b119385\": not found" Mar 17 17:27:31.890630 kubelet[3183]: I0317 17:27:31.890493 3183 scope.go:117] "RemoveContainer" 
containerID="636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233" Mar 17 17:27:31.891007 containerd[1930]: time="2025-03-17T17:27:31.890900175Z" level=error msg="ContainerStatus for \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\": not found" Mar 17 17:27:31.891250 kubelet[3183]: E0317 17:27:31.891200 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\": not found" containerID="636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233" Mar 17 17:27:31.891342 kubelet[3183]: I0317 17:27:31.891255 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233"} err="failed to get container status \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\": rpc error: code = NotFound desc = an error occurred when try to find container \"636f895946ff3e8c18d6fad747110e573736ffac3588a7101be231eb31b21233\": not found" Mar 17 17:27:31.891342 kubelet[3183]: I0317 17:27:31.891290 3183 scope.go:117] "RemoveContainer" containerID="611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a" Mar 17 17:27:31.892043 containerd[1930]: time="2025-03-17T17:27:31.891742143Z" level=error msg="ContainerStatus for \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\": not found" Mar 17 17:27:31.892880 kubelet[3183]: E0317 17:27:31.892776 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\": not found" containerID="611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a" Mar 17 17:27:31.893043 kubelet[3183]: I0317 17:27:31.892884 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a"} err="failed to get container status \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\": rpc error: code = NotFound desc = an error occurred when try to find container \"611ca2c693566e71bebb7ed7bfa4b9a58996a512065d67156ef63f00ce31657a\": not found" Mar 17 17:27:31.893043 kubelet[3183]: I0317 17:27:31.892966 3183 scope.go:117] "RemoveContainer" containerID="19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4" Mar 17 17:27:31.893568 containerd[1930]: time="2025-03-17T17:27:31.893503599Z" level=error msg="ContainerStatus for \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\": not found" Mar 17 17:27:31.894055 kubelet[3183]: E0317 17:27:31.893759 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\": not found" containerID="19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4" Mar 17 17:27:31.894055 kubelet[3183]: I0317 17:27:31.893802 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4"} err="failed to get container status \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"19bfeac81b5ba00dd641124609c46b279a51768226d8623cdf0028e10e9d86b4\": not found" Mar 17 17:27:31.894055 kubelet[3183]: I0317 17:27:31.893835 3183 scope.go:117] "RemoveContainer" containerID="4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb" Mar 17 17:27:31.894273 containerd[1930]: time="2025-03-17T17:27:31.894192759Z" level=error msg="ContainerStatus for \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\": not found" Mar 17 17:27:31.894789 kubelet[3183]: E0317 17:27:31.894610 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\": not found" containerID="4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb" Mar 17 17:27:31.895015 kubelet[3183]: I0317 17:27:31.894922 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb"} err="failed to get container status \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d36bfcff8f7bce56fc6420b1c00d2783880000ac2a91ce55089cca8410119cb\": not found" Mar 17 17:27:31.998782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc-rootfs.mount: Deactivated successfully. Mar 17 17:27:31.998992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc-shm.mount: Deactivated successfully. 
Mar 17 17:27:31.999214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656-rootfs.mount: Deactivated successfully. Mar 17 17:27:31.999355 systemd[1]: var-lib-kubelet-pods-5bcd66f9\x2dd41f\x2d434f\x2d8235\x2d86715e4fcd1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrmnz.mount: Deactivated successfully. Mar 17 17:27:31.999491 systemd[1]: var-lib-kubelet-pods-5bcd66f9\x2dd41f\x2d434f\x2d8235\x2d86715e4fcd1c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:27:31.999628 systemd[1]: var-lib-kubelet-pods-5bcd66f9\x2dd41f\x2d434f\x2d8235\x2d86715e4fcd1c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:27:31.999759 systemd[1]: var-lib-kubelet-pods-3236e17d\x2dea36\x2d411f\x2dad17\x2d3e48a99fb57b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6268.mount: Deactivated successfully. Mar 17 17:27:32.336019 kubelet[3183]: I0317 17:27:32.335931 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3236e17d-ea36-411f-ad17-3e48a99fb57b" path="/var/lib/kubelet/pods/3236e17d-ea36-411f-ad17-3e48a99fb57b/volumes" Mar 17 17:27:32.337038 kubelet[3183]: I0317 17:27:32.336989 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bcd66f9-d41f-434f-8235-86715e4fcd1c" path="/var/lib/kubelet/pods/5bcd66f9-d41f-434f-8235-86715e4fcd1c/volumes" Mar 17 17:27:32.918634 sshd[5091]: Connection closed by 139.178.68.195 port 42166 Mar 17 17:27:32.919797 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:32.925730 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:27:32.929330 systemd[1]: sshd@25-172.31.21.92:22-139.178.68.195:42166.service: Deactivated successfully. Mar 17 17:27:32.935614 systemd-logind[1909]: Session 26 logged out. Waiting for processes to exit. 
Mar 17 17:27:32.938729 systemd-logind[1909]: Removed session 26. Mar 17 17:27:32.960566 systemd[1]: Started sshd@26-172.31.21.92:22-139.178.68.195:42174.service - OpenSSH per-connection server daemon (139.178.68.195:42174). Mar 17 17:27:33.155841 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 42174 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:33.158584 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:33.171244 systemd-logind[1909]: New session 27 of user core. Mar 17 17:27:33.180233 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:27:33.621213 ntpd[1903]: Deleting interface #11 lxc_health, fe80::7c4e:bbff:fef3:be8d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Mar 17 17:27:33.621682 ntpd[1903]: 17 Mar 17:27:33 ntpd[1903]: Deleting interface #11 lxc_health, fe80::7c4e:bbff:fef3:be8d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Mar 17 17:27:34.553126 kubelet[3183]: E0317 17:27:34.553018 3183 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:27:34.700963 sshd[5252]: Connection closed by 139.178.68.195 port 42174 Mar 17 17:27:34.701818 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:34.715454 systemd[1]: sshd@26-172.31.21.92:22-139.178.68.195:42174.service: Deactivated successfully. Mar 17 17:27:34.724393 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:27:34.727261 systemd[1]: session-27.scope: Consumed 1.324s CPU time. Mar 17 17:27:34.730403 systemd-logind[1909]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:27:34.759502 systemd[1]: Started sshd@27-172.31.21.92:22-139.178.68.195:42182.service - OpenSSH per-connection server daemon (139.178.68.195:42182). 
Mar 17 17:27:34.761919 systemd-logind[1909]: Removed session 27. Mar 17 17:27:34.820090 kubelet[3183]: I0317 17:27:34.817450 3183 memory_manager.go:355] "RemoveStaleState removing state" podUID="3236e17d-ea36-411f-ad17-3e48a99fb57b" containerName="cilium-operator" Mar 17 17:27:34.820090 kubelet[3183]: I0317 17:27:34.817495 3183 memory_manager.go:355] "RemoveStaleState removing state" podUID="5bcd66f9-d41f-434f-8235-86715e4fcd1c" containerName="cilium-agent" Mar 17 17:27:34.840297 systemd[1]: Created slice kubepods-burstable-poda1f7e974_fbaf_460a_ad27_77b614393699.slice - libcontainer container kubepods-burstable-poda1f7e974_fbaf_460a_ad27_77b614393699.slice. Mar 17 17:27:34.970353 sshd[5261]: Accepted publickey for core from 139.178.68.195 port 42182 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:34.972741 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:34.984046 systemd-logind[1909]: New session 28 of user core. 
Mar 17 17:27:34.986199 kubelet[3183]: I0317 17:27:34.986126 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-cilium-run\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986351 kubelet[3183]: I0317 17:27:34.986248 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1f7e974-fbaf-460a-ad27-77b614393699-cilium-ipsec-secrets\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986351 kubelet[3183]: I0317 17:27:34.986292 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-host-proc-sys-net\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986351 kubelet[3183]: I0317 17:27:34.986329 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jvx\" (UniqueName: \"kubernetes.io/projected/a1f7e974-fbaf-460a-ad27-77b614393699-kube-api-access-84jvx\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986507 kubelet[3183]: I0317 17:27:34.986375 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-xtables-lock\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986507 kubelet[3183]: I0317 17:27:34.986416 3183 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1f7e974-fbaf-460a-ad27-77b614393699-cilium-config-path\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986507 kubelet[3183]: I0317 17:27:34.986457 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-host-proc-sys-kernel\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986507 kubelet[3183]: I0317 17:27:34.986494 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-etc-cni-netd\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986702 kubelet[3183]: I0317 17:27:34.986534 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1f7e974-fbaf-460a-ad27-77b614393699-hubble-tls\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986702 kubelet[3183]: I0317 17:27:34.986577 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-bpf-maps\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986702 kubelet[3183]: I0317 17:27:34.986612 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a1f7e974-fbaf-460a-ad27-77b614393699-clustermesh-secrets\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986702 kubelet[3183]: I0317 17:27:34.986650 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-cilium-cgroup\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986702 kubelet[3183]: I0317 17:27:34.986688 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-cni-path\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986925 kubelet[3183]: I0317 17:27:34.986730 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-hostproc\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.986925 kubelet[3183]: I0317 17:27:34.986765 3183 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1f7e974-fbaf-460a-ad27-77b614393699-lib-modules\") pod \"cilium-9cfc2\" (UID: \"a1f7e974-fbaf-460a-ad27-77b614393699\") " pod="kube-system/cilium-9cfc2" Mar 17 17:27:34.989206 systemd[1]: Started session-28.scope - Session 28 of User core. 
Mar 17 17:27:35.110327 sshd[5263]: Connection closed by 139.178.68.195 port 42182 Mar 17 17:27:35.112248 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:35.134508 systemd[1]: sshd@27-172.31.21.92:22-139.178.68.195:42182.service: Deactivated successfully. Mar 17 17:27:35.146850 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:27:35.152261 containerd[1930]: time="2025-03-17T17:27:35.152172267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cfc2,Uid:a1f7e974-fbaf-460a-ad27-77b614393699,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:35.161022 systemd-logind[1909]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:27:35.170606 systemd[1]: Started sshd@28-172.31.21.92:22-139.178.68.195:42188.service - OpenSSH per-connection server daemon (139.178.68.195:42188). Mar 17 17:27:35.187647 systemd-logind[1909]: Removed session 28. Mar 17 17:27:35.214034 containerd[1930]: time="2025-03-17T17:27:35.212676016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:35.214034 containerd[1930]: time="2025-03-17T17:27:35.212774272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:35.214034 containerd[1930]: time="2025-03-17T17:27:35.212849632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:35.214497 containerd[1930]: time="2025-03-17T17:27:35.213097276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:35.247288 systemd[1]: Started cri-containerd-5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72.scope - libcontainer container 5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72. 
Mar 17 17:27:35.289849 containerd[1930]: time="2025-03-17T17:27:35.289792672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9cfc2,Uid:a1f7e974-fbaf-460a-ad27-77b614393699,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\"" Mar 17 17:27:35.296535 containerd[1930]: time="2025-03-17T17:27:35.296187532Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:27:35.320971 containerd[1930]: time="2025-03-17T17:27:35.320886448Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f\"" Mar 17 17:27:35.323047 containerd[1930]: time="2025-03-17T17:27:35.321742444Z" level=info msg="StartContainer for \"f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f\"" Mar 17 17:27:35.367264 systemd[1]: Started cri-containerd-f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f.scope - libcontainer container f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f. Mar 17 17:27:35.390519 sshd[5273]: Accepted publickey for core from 139.178.68.195 port 42188 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:35.395684 sshd-session[5273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:35.411158 systemd-logind[1909]: New session 29 of user core. Mar 17 17:27:35.417222 systemd[1]: Started session-29.scope - Session 29 of User core. 
Mar 17 17:27:35.428118 containerd[1930]: time="2025-03-17T17:27:35.427884833Z" level=info msg="StartContainer for \"f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f\" returns successfully"
Mar 17 17:27:35.444797 systemd[1]: cri-containerd-f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f.scope: Deactivated successfully.
Mar 17 17:27:35.495213 containerd[1930]: time="2025-03-17T17:27:35.495136829Z" level=info msg="shim disconnected" id=f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f namespace=k8s.io
Mar 17 17:27:35.495760 containerd[1930]: time="2025-03-17T17:27:35.495488633Z" level=warning msg="cleaning up after shim disconnected" id=f375e190f68fcf95d76788de2c121a7745f744a309c6a619c7ff0f120d9c5d7f namespace=k8s.io
Mar 17 17:27:35.495760 containerd[1930]: time="2025-03-17T17:27:35.495517673Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:35.807340 containerd[1930]: time="2025-03-17T17:27:35.807269827Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 17:27:35.836089 containerd[1930]: time="2025-03-17T17:27:35.835691707Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8\""
Mar 17 17:27:35.839544 containerd[1930]: time="2025-03-17T17:27:35.837467383Z" level=info msg="StartContainer for \"c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8\""
Mar 17 17:27:35.879274 systemd[1]: Started cri-containerd-c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8.scope - libcontainer container c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8.
Mar 17 17:27:35.932380 containerd[1930]: time="2025-03-17T17:27:35.932323231Z" level=info msg="StartContainer for \"c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8\" returns successfully"
Mar 17 17:27:35.946443 systemd[1]: cri-containerd-c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8.scope: Deactivated successfully.
Mar 17 17:27:35.983783 containerd[1930]: time="2025-03-17T17:27:35.983687683Z" level=info msg="shim disconnected" id=c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8 namespace=k8s.io
Mar 17 17:27:35.984356 containerd[1930]: time="2025-03-17T17:27:35.984072391Z" level=warning msg="cleaning up after shim disconnected" id=c24d71466627079a3ae4824b516da9a089ac1d6df426c2b8d43d60851ca9d4e8 namespace=k8s.io
Mar 17 17:27:35.984356 containerd[1930]: time="2025-03-17T17:27:35.984100471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:36.003457 containerd[1930]: time="2025-03-17T17:27:36.003377656Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:27:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 17 17:27:36.769664 kubelet[3183]: I0317 17:27:36.769594 3183 setters.go:602] "Node became not ready" node="ip-172-31-21-92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:27:36Z","lastTransitionTime":"2025-03-17T17:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:27:36.814215 containerd[1930]: time="2025-03-17T17:27:36.814150220Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 17:27:36.874368 containerd[1930]: time="2025-03-17T17:27:36.874275632Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926\""
Mar 17 17:27:36.878047 containerd[1930]: time="2025-03-17T17:27:36.875321708Z" level=info msg="StartContainer for \"f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926\""
Mar 17 17:27:36.994211 systemd[1]: Started cri-containerd-f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926.scope - libcontainer container f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926.
Mar 17 17:27:37.062501 containerd[1930]: time="2025-03-17T17:27:37.062433209Z" level=info msg="StartContainer for \"f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926\" returns successfully"
Mar 17 17:27:37.069291 systemd[1]: cri-containerd-f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926.scope: Deactivated successfully.
Mar 17 17:27:37.109554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926-rootfs.mount: Deactivated successfully.
Mar 17 17:27:37.117667 containerd[1930]: time="2025-03-17T17:27:37.117540749Z" level=info msg="shim disconnected" id=f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926 namespace=k8s.io
Mar 17 17:27:37.117667 containerd[1930]: time="2025-03-17T17:27:37.117652817Z" level=warning msg="cleaning up after shim disconnected" id=f3040b140daa4c3db95805db329f8d896fda92d41544fc2efb93e491edeff926 namespace=k8s.io
Mar 17 17:27:37.117979 containerd[1930]: time="2025-03-17T17:27:37.117673853Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:37.821090 containerd[1930]: time="2025-03-17T17:27:37.820603881Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 17:27:37.853985 containerd[1930]: time="2025-03-17T17:27:37.852292557Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab\""
Mar 17 17:27:37.855266 containerd[1930]: time="2025-03-17T17:27:37.855189201Z" level=info msg="StartContainer for \"3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab\""
Mar 17 17:27:37.908276 systemd[1]: Started cri-containerd-3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab.scope - libcontainer container 3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab.
Mar 17 17:27:37.955558 systemd[1]: cri-containerd-3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab.scope: Deactivated successfully.
Mar 17 17:27:37.960999 containerd[1930]: time="2025-03-17T17:27:37.958994385Z" level=info msg="StartContainer for \"3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab\" returns successfully"
Mar 17 17:27:37.998235 containerd[1930]: time="2025-03-17T17:27:37.998159733Z" level=info msg="shim disconnected" id=3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab namespace=k8s.io
Mar 17 17:27:37.998751 containerd[1930]: time="2025-03-17T17:27:37.998491137Z" level=warning msg="cleaning up after shim disconnected" id=3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab namespace=k8s.io
Mar 17 17:27:37.998751 containerd[1930]: time="2025-03-17T17:27:37.998522277Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:38.110525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6ab9067cb5fb3ca0c5abb6e9cb6897fb6f66365796baad09c78e0d157fbcab-rootfs.mount: Deactivated successfully.
Mar 17 17:27:38.833176 containerd[1930]: time="2025-03-17T17:27:38.832740778Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:27:38.867167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553843291.mount: Deactivated successfully.
Mar 17 17:27:38.872255 containerd[1930]: time="2025-03-17T17:27:38.872180650Z" level=info msg="CreateContainer within sandbox \"5d3d591a4ed81220a412c7954f894dae3d08af8b7d6f05decedd827361bfea72\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe\""
Mar 17 17:27:38.873156 containerd[1930]: time="2025-03-17T17:27:38.873085894Z" level=info msg="StartContainer for \"6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe\""
Mar 17 17:27:38.927444 systemd[1]: Started cri-containerd-6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe.scope - libcontainer container 6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe.
Mar 17 17:27:38.983394 containerd[1930]: time="2025-03-17T17:27:38.983138314Z" level=info msg="StartContainer for \"6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe\" returns successfully"
Mar 17 17:27:39.769976 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:27:39.869495 kubelet[3183]: I0317 17:27:39.869368 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9cfc2" podStartSLOduration=5.869343443 podStartE2EDuration="5.869343443s" podCreationTimestamp="2025-03-17 17:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:39.868836023 +0000 UTC m=+115.805192700" watchObservedRunningTime="2025-03-17 17:27:39.869343443 +0000 UTC m=+115.805700096"
Mar 17 17:27:43.935501 systemd-networkd[1843]: lxc_health: Link UP
Mar 17 17:27:43.949826 (udev-worker)[6092]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:43.957271 systemd-networkd[1843]: lxc_health: Gained carrier
Mar 17 17:27:44.369805 containerd[1930]: time="2025-03-17T17:27:44.369718201Z" level=info msg="StopPodSandbox for \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\""
Mar 17 17:27:44.370516 containerd[1930]: time="2025-03-17T17:27:44.369953005Z" level=info msg="TearDown network for sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" successfully"
Mar 17 17:27:44.370516 containerd[1930]: time="2025-03-17T17:27:44.370025137Z" level=info msg="StopPodSandbox for \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" returns successfully"
Mar 17 17:27:44.371840 containerd[1930]: time="2025-03-17T17:27:44.371768233Z" level=info msg="RemovePodSandbox for \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\""
Mar 17 17:27:44.371840 containerd[1930]: time="2025-03-17T17:27:44.371828953Z" level=info msg="Forcibly stopping sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\""
Mar 17 17:27:44.372058 containerd[1930]: time="2025-03-17T17:27:44.371955169Z" level=info msg="TearDown network for sandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" successfully"
Mar 17 17:27:44.381123 containerd[1930]: time="2025-03-17T17:27:44.380988985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:27:44.381303 containerd[1930]: time="2025-03-17T17:27:44.381152713Z" level=info msg="RemovePodSandbox \"65777d319971d399157541f53b9eb0d3b1681f0fcbb971084f4818e2fcceb656\" returns successfully"
Mar 17 17:27:44.382180 containerd[1930]: time="2025-03-17T17:27:44.382120501Z" level=info msg="StopPodSandbox for \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\""
Mar 17 17:27:44.382314 containerd[1930]: time="2025-03-17T17:27:44.382264789Z" level=info msg="TearDown network for sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" successfully"
Mar 17 17:27:44.382314 containerd[1930]: time="2025-03-17T17:27:44.382288789Z" level=info msg="StopPodSandbox for \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" returns successfully"
Mar 17 17:27:44.383828 containerd[1930]: time="2025-03-17T17:27:44.383766469Z" level=info msg="RemovePodSandbox for \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\""
Mar 17 17:27:44.383828 containerd[1930]: time="2025-03-17T17:27:44.383823181Z" level=info msg="Forcibly stopping sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\""
Mar 17 17:27:44.384058 containerd[1930]: time="2025-03-17T17:27:44.383965885Z" level=info msg="TearDown network for sandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" successfully"
Mar 17 17:27:44.392149 containerd[1930]: time="2025-03-17T17:27:44.392013049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:27:44.392322 containerd[1930]: time="2025-03-17T17:27:44.392172289Z" level=info msg="RemovePodSandbox \"2a2ed7ed4a9b9deec3c3d69328f218bd79c000aad6415ced925c42d0f6e653dc\" returns successfully"
Mar 17 17:27:45.272135 systemd-networkd[1843]: lxc_health: Gained IPv6LL
Mar 17 17:27:47.621276 ntpd[1903]: Listen normally on 14 lxc_health [fe80::64c9:beff:fe4b:bc81%14]:123
Mar 17 17:27:47.621962 ntpd[1903]: 17 Mar 17:27:47 ntpd[1903]: Listen normally on 14 lxc_health [fe80::64c9:beff:fe4b:bc81%14]:123
Mar 17 17:27:49.051130 systemd[1]: run-containerd-runc-k8s.io-6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe-runc.dNZxu9.mount: Deactivated successfully.
Mar 17 17:27:51.309437 systemd[1]: run-containerd-runc-k8s.io-6cd626699c39c56d375419fa041e61cb6885b7f04d1173e327d5e28e808cf7fe-runc.hGGsAM.mount: Deactivated successfully.
Mar 17 17:27:51.448254 sshd[5346]: Connection closed by 139.178.68.195 port 42188
Mar 17 17:27:51.449204 sshd-session[5273]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:51.455584 systemd[1]: sshd@28-172.31.21.92:22-139.178.68.195:42188.service: Deactivated successfully.
Mar 17 17:27:51.464122 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:27:51.469835 systemd-logind[1909]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:27:51.474242 systemd-logind[1909]: Removed session 29.
Mar 17 17:28:05.384544 systemd[1]: cri-containerd-d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d.scope: Deactivated successfully.
Mar 17 17:28:05.385593 systemd[1]: cri-containerd-d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d.scope: Consumed 4.028s CPU time, 18.2M memory peak, 0B memory swap peak.
Mar 17 17:28:05.425825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d-rootfs.mount: Deactivated successfully.
Mar 17 17:28:05.445316 containerd[1930]: time="2025-03-17T17:28:05.444901306Z" level=info msg="shim disconnected" id=d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d namespace=k8s.io
Mar 17 17:28:05.445316 containerd[1930]: time="2025-03-17T17:28:05.445069810Z" level=warning msg="cleaning up after shim disconnected" id=d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d namespace=k8s.io
Mar 17 17:28:05.445316 containerd[1930]: time="2025-03-17T17:28:05.445122862Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:05.912800 kubelet[3183]: I0317 17:28:05.912447 3183 scope.go:117] "RemoveContainer" containerID="d7cabe50add3368a3f45abfc43a2fce69ee08ac0ea0c476162bc4376e6c1035d"
Mar 17 17:28:05.916350 containerd[1930]: time="2025-03-17T17:28:05.916258368Z" level=info msg="CreateContainer within sandbox \"12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:28:05.940478 containerd[1930]: time="2025-03-17T17:28:05.940349640Z" level=info msg="CreateContainer within sandbox \"12c6d2864aab473daa7e12dcd6cb854740196cd8aa041e0945e3ccf5ba588338\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f8bb028969d7bbf782b34950d81df2e81615a3c67b938f13623a65093faefe09\""
Mar 17 17:28:05.941540 containerd[1930]: time="2025-03-17T17:28:05.941483652Z" level=info msg="StartContainer for \"f8bb028969d7bbf782b34950d81df2e81615a3c67b938f13623a65093faefe09\""
Mar 17 17:28:05.993340 systemd[1]: Started cri-containerd-f8bb028969d7bbf782b34950d81df2e81615a3c67b938f13623a65093faefe09.scope - libcontainer container f8bb028969d7bbf782b34950d81df2e81615a3c67b938f13623a65093faefe09.
Mar 17 17:28:06.064537 containerd[1930]: time="2025-03-17T17:28:06.064390629Z" level=info msg="StartContainer for \"f8bb028969d7bbf782b34950d81df2e81615a3c67b938f13623a65093faefe09\" returns successfully"
Mar 17 17:28:07.789548 kubelet[3183]: E0317 17:28:07.788479 3183 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": context deadline exceeded"
Mar 17 17:28:10.080251 systemd[1]: cri-containerd-ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c.scope: Deactivated successfully.
Mar 17 17:28:10.081260 systemd[1]: cri-containerd-ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c.scope: Consumed 4.405s CPU time, 16.2M memory peak, 0B memory swap peak.
Mar 17 17:28:10.120061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c-rootfs.mount: Deactivated successfully.
Mar 17 17:28:10.130081 containerd[1930]: time="2025-03-17T17:28:10.129980497Z" level=info msg="shim disconnected" id=ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c namespace=k8s.io
Mar 17 17:28:10.130081 containerd[1930]: time="2025-03-17T17:28:10.130057369Z" level=warning msg="cleaning up after shim disconnected" id=ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c namespace=k8s.io
Mar 17 17:28:10.130081 containerd[1930]: time="2025-03-17T17:28:10.130078045Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:28:10.932097 kubelet[3183]: I0317 17:28:10.931974 3183 scope.go:117] "RemoveContainer" containerID="ed890080fe009b5b69b08fa15769d27329a66da181aae4d6e6daa0ff865a836c"
Mar 17 17:28:10.935032 containerd[1930]: time="2025-03-17T17:28:10.934832345Z" level=info msg="CreateContainer within sandbox \"d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 17:28:10.966921 containerd[1930]: time="2025-03-17T17:28:10.966792593Z" level=info msg="CreateContainer within sandbox \"d4623cc6bb3e35495378c07d8127a57bca7037747ffeeee7fc3bb90c5c795424\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b8946beb039de4f3457be3940260a1a52ad7d0891a648784bd424c529c9b1301\""
Mar 17 17:28:10.967577 containerd[1930]: time="2025-03-17T17:28:10.967498685Z" level=info msg="StartContainer for \"b8946beb039de4f3457be3940260a1a52ad7d0891a648784bd424c529c9b1301\""
Mar 17 17:28:11.020253 systemd[1]: Started cri-containerd-b8946beb039de4f3457be3940260a1a52ad7d0891a648784bd424c529c9b1301.scope - libcontainer container b8946beb039de4f3457be3940260a1a52ad7d0891a648784bd424c529c9b1301.
Mar 17 17:28:11.083978 containerd[1930]: time="2025-03-17T17:28:11.083432798Z" level=info msg="StartContainer for \"b8946beb039de4f3457be3940260a1a52ad7d0891a648784bd424c529c9b1301\" returns successfully"
Mar 17 17:28:17.789628 kubelet[3183]: E0317 17:28:17.789272 3183 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-92?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"