Feb 13 15:20:43.177216 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:20:43.177263 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:20:43.177288 kernel: KASLR disabled due to lack of seed
Feb 13 15:20:43.177306 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:20:43.177323 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:20:43.177339 kernel: secureboot: Secure boot disabled
Feb 13 15:20:43.177357 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:20:43.177373 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:20:43.177390 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:20:43.177406 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:20:43.177428 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:20:43.177445 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:20:43.177461 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:20:43.177479 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:20:43.177528 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:20:43.177556 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:20:43.177575 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:20:43.177591 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:20:43.177608 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:20:43.177625 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:20:43.177641 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:20:43.177657 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:20:43.177674 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:20:43.177690 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:20:43.177707 kernel: Zone ranges:
Feb 13 15:20:43.177723 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:20:43.177744 kernel: DMA32 empty
Feb 13 15:20:43.177760 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:20:43.177777 kernel: Movable zone start for each node
Feb 13 15:20:43.177793 kernel: Early memory node ranges
Feb 13 15:20:43.177810 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:20:43.177827 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:20:43.177843 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:20:43.177859 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:20:43.177875 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:20:43.177891 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:20:43.177907 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:20:43.177923 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:20:43.177944 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:20:43.177961 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:20:43.177984 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:20:43.178002 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:20:43.178020 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:20:43.178041 kernel: psci: Trusted OS migration not required
Feb 13 15:20:43.178058 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:20:43.178075 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:20:43.178092 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:20:43.178109 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:20:43.178126 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:20:43.178143 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:20:43.178160 kernel: CPU features: detected: Spectre-v2
Feb 13 15:20:43.178177 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:20:43.178194 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:20:43.178231 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:20:43.178251 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:20:43.178275 kernel: alternatives: applying boot alternatives
Feb 13 15:20:43.178295 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:20:43.178320 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:20:43.178338 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:20:43.178355 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:20:43.178372 kernel: Fallback order for Node 0: 0
Feb 13 15:20:43.178389 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:20:43.178405 kernel: Policy zone: Normal
Feb 13 15:20:43.178422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:20:43.178439 kernel: software IO TLB: area num 2.
Feb 13 15:20:43.178461 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:20:43.178479 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Feb 13 15:20:43.179927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:20:43.179972 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:20:43.179991 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:20:43.180010 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:20:43.180028 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:20:43.180045 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:20:43.180062 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:20:43.180079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:20:43.180096 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:20:43.180123 kernel: GICv3: 96 SPIs implemented
Feb 13 15:20:43.180141 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:20:43.180157 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:20:43.180174 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:20:43.180191 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:20:43.180208 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:20:43.180225 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:20:43.180242 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:20:43.180260 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:20:43.180276 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:20:43.180293 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:20:43.180311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:20:43.180332 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:20:43.180349 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:20:43.180366 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:20:43.180383 kernel: Console: colour dummy device 80x25
Feb 13 15:20:43.180401 kernel: printk: console [tty1] enabled
Feb 13 15:20:43.180419 kernel: ACPI: Core revision 20230628
Feb 13 15:20:43.180436 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:20:43.180454 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:20:43.180472 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:20:43.180489 kernel: landlock: Up and running.
Feb 13 15:20:43.180536 kernel: SELinux: Initializing.
Feb 13 15:20:43.180554 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:20:43.180572 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:20:43.180590 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:20:43.180608 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:20:43.180625 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:20:43.180643 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:20:43.180660 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:20:43.180683 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:20:43.180701 kernel: Remapping and enabling EFI services.
Feb 13 15:20:43.180718 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:20:43.180735 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:20:43.180753 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:20:43.180770 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:20:43.180788 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:20:43.180805 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:20:43.180822 kernel: SMP: Total of 2 processors activated.
Feb 13 15:20:43.180839 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:20:43.180861 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:20:43.180878 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:20:43.180906 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:20:43.180929 kernel: alternatives: applying system-wide alternatives
Feb 13 15:20:43.180947 kernel: devtmpfs: initialized
Feb 13 15:20:43.180965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:20:43.180983 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:20:43.181001 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:20:43.181019 kernel: SMBIOS 3.0.0 present.
Feb 13 15:20:43.181042 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:20:43.181060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:20:43.181078 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:20:43.181096 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:20:43.181114 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:20:43.181132 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:20:43.181150 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Feb 13 15:20:43.181173 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:20:43.181191 kernel: cpuidle: using governor menu
Feb 13 15:20:43.181209 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:20:43.181227 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:20:43.181245 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:20:43.181263 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:20:43.181281 kernel: Modules: 17360 pages in range for non-PLT usage
Feb 13 15:20:43.181298 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:20:43.181317 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:20:43.181339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:20:43.181357 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:20:43.181376 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:20:43.181393 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:20:43.181412 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:20:43.181430 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:20:43.181447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:20:43.181465 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:20:43.181483 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:20:43.181527 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:20:43.181550 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:20:43.181570 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:20:43.181589 kernel: ACPI: Interpreter enabled
Feb 13 15:20:43.181608 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:20:43.181626 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:20:43.181645 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:20:43.181964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:20:43.182192 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:20:43.182420 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:20:43.182656 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:20:43.182895 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:20:43.182923 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:20:43.182942 kernel: acpiphp: Slot [1] registered
Feb 13 15:20:43.182960 kernel: acpiphp: Slot [2] registered
Feb 13 15:20:43.182980 kernel: acpiphp: Slot [3] registered
Feb 13 15:20:43.183007 kernel: acpiphp: Slot [4] registered
Feb 13 15:20:43.183025 kernel: acpiphp: Slot [5] registered
Feb 13 15:20:43.183043 kernel: acpiphp: Slot [6] registered
Feb 13 15:20:43.183062 kernel: acpiphp: Slot [7] registered
Feb 13 15:20:43.183080 kernel: acpiphp: Slot [8] registered
Feb 13 15:20:43.183097 kernel: acpiphp: Slot [9] registered
Feb 13 15:20:43.183115 kernel: acpiphp: Slot [10] registered
Feb 13 15:20:43.183133 kernel: acpiphp: Slot [11] registered
Feb 13 15:20:43.183151 kernel: acpiphp: Slot [12] registered
Feb 13 15:20:43.183169 kernel: acpiphp: Slot [13] registered
Feb 13 15:20:43.183194 kernel: acpiphp: Slot [14] registered
Feb 13 15:20:43.183212 kernel: acpiphp: Slot [15] registered
Feb 13 15:20:43.183231 kernel: acpiphp: Slot [16] registered
Feb 13 15:20:43.183249 kernel: acpiphp: Slot [17] registered
Feb 13 15:20:43.183267 kernel: acpiphp: Slot [18] registered
Feb 13 15:20:43.183285 kernel: acpiphp: Slot [19] registered
Feb 13 15:20:43.183303 kernel: acpiphp: Slot [20] registered
Feb 13 15:20:43.183321 kernel: acpiphp: Slot [21] registered
Feb 13 15:20:43.183339 kernel: acpiphp: Slot [22] registered
Feb 13 15:20:43.183362 kernel: acpiphp: Slot [23] registered
Feb 13 15:20:43.183380 kernel: acpiphp: Slot [24] registered
Feb 13 15:20:43.183399 kernel: acpiphp: Slot [25] registered
Feb 13 15:20:43.183417 kernel: acpiphp: Slot [26] registered
Feb 13 15:20:43.183435 kernel: acpiphp: Slot [27] registered
Feb 13 15:20:43.183453 kernel: acpiphp: Slot [28] registered
Feb 13 15:20:43.183471 kernel: acpiphp: Slot [29] registered
Feb 13 15:20:43.183489 kernel: acpiphp: Slot [30] registered
Feb 13 15:20:43.183535 kernel: acpiphp: Slot [31] registered
Feb 13 15:20:43.183555 kernel: PCI host bridge to bus 0000:00
Feb 13 15:20:43.183798 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:20:43.183986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:20:43.184167 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:20:43.184346 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:20:43.184647 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:20:43.184871 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:20:43.185083 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:20:43.185312 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:20:43.185540 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:20:43.185752 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:20:43.185969 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:20:43.186174 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:20:43.186408 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:20:43.186667 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:20:43.186873 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:20:43.187080 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:20:43.187303 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:20:43.189629 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:20:43.189922 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:20:43.190138 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:20:43.190365 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:20:43.190570 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:20:43.190754 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:20:43.190779 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:20:43.190815 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:20:43.190835 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:20:43.190854 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:20:43.190872 kernel: iommu: Default domain type: Translated
Feb 13 15:20:43.190898 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:20:43.190917 kernel: efivars: Registered efivars operations
Feb 13 15:20:43.190935 kernel: vgaarb: loaded
Feb 13 15:20:43.190953 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:20:43.190971 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:20:43.190989 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:20:43.191007 kernel: pnp: PnP ACPI init
Feb 13 15:20:43.191215 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:20:43.191247 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:20:43.191266 kernel: NET: Registered PF_INET protocol family
Feb 13 15:20:43.191284 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:20:43.191303 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:20:43.191321 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:20:43.191339 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:20:43.191357 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:20:43.191375 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:20:43.191393 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:20:43.191416 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:20:43.191435 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:20:43.191453 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:20:43.191471 kernel: kvm [1]: HYP mode not available
Feb 13 15:20:43.191489 kernel: Initialise system trusted keyrings
Feb 13 15:20:43.192650 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:20:43.192674 kernel: Key type asymmetric registered
Feb 13 15:20:43.192692 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:20:43.192710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:20:43.192737 kernel: io scheduler mq-deadline registered
Feb 13 15:20:43.192756 kernel: io scheduler kyber registered
Feb 13 15:20:43.192775 kernel: io scheduler bfq registered
Feb 13 15:20:43.193029 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:20:43.193057 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:20:43.193076 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:20:43.193094 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:20:43.193112 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:20:43.193136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:20:43.193155 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:20:43.193355 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:20:43.193381 kernel: printk: console [ttyS0] disabled
Feb 13 15:20:43.193400 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:20:43.193418 kernel: printk: console [ttyS0] enabled
Feb 13 15:20:43.193436 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:20:43.193454 kernel: thunder_xcv, ver 1.0
Feb 13 15:20:43.193472 kernel: thunder_bgx, ver 1.0
Feb 13 15:20:43.193490 kernel: nicpf, ver 1.0
Feb 13 15:20:43.193535 kernel: nicvf, ver 1.0
Feb 13 15:20:43.193757 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:20:43.193948 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:20:42 UTC (1739460042)
Feb 13 15:20:43.193973 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:20:43.193992 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:20:43.194011 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:20:43.194029 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:20:43.194053 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:20:43.194071 kernel: Segment Routing with IPv6
Feb 13 15:20:43.194089 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:20:43.194107 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:20:43.194125 kernel: Key type dns_resolver registered
Feb 13 15:20:43.194142 kernel: registered taskstats version 1
Feb 13 15:20:43.194161 kernel: Loading compiled-in X.509 certificates
Feb 13 15:20:43.194179 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:20:43.194197 kernel: Key type .fscrypt registered
Feb 13 15:20:43.194234 kernel: Key type fscrypt-provisioning registered
Feb 13 15:20:43.194261 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:20:43.194279 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:20:43.194297 kernel: ima: No architecture policies found
Feb 13 15:20:43.194315 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:20:43.194333 kernel: clk: Disabling unused clocks
Feb 13 15:20:43.194351 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:20:43.194369 kernel: Run /init as init process
Feb 13 15:20:43.194386 kernel: with arguments:
Feb 13 15:20:43.194404 kernel: /init
Feb 13 15:20:43.194426 kernel: with environment:
Feb 13 15:20:43.194444 kernel: HOME=/
Feb 13 15:20:43.194462 kernel: TERM=linux
Feb 13 15:20:43.194479 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:20:43.197576 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:20:43.197624 systemd[1]: Detected virtualization amazon.
Feb 13 15:20:43.197645 systemd[1]: Detected architecture arm64.
Feb 13 15:20:43.197675 systemd[1]: Running in initrd.
Feb 13 15:20:43.197695 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:20:43.197714 systemd[1]: Hostname set to .
Feb 13 15:20:43.197734 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:20:43.197753 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:20:43.197773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:20:43.197793 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:20:43.197814 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:20:43.197838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:20:43.197858 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:20:43.197878 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:20:43.197901 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:20:43.197921 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:20:43.197941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:20:43.197961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:20:43.197985 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:20:43.198005 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:20:43.198025 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:20:43.198045 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:20:43.198065 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:20:43.198085 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:20:43.198105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:20:43.198124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:20:43.198144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:20:43.198168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:20:43.198188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:20:43.198225 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:20:43.198249 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:20:43.198269 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:20:43.198289 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:20:43.198309 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:20:43.198328 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:20:43.198353 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:20:43.198373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:43.198393 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:20:43.198413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:20:43.198433 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:20:43.198520 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 15:20:43.198571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:20:43.198591 systemd-journald[252]: Journal started
Feb 13 15:20:43.198640 systemd-journald[252]: Runtime Journal (/run/log/journal/ec289947fa4683d337fda825d5e1b509) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:20:43.161197 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 15:20:43.205638 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:20:43.205682 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:20:43.211742 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 15:20:43.213593 kernel: Bridge firewalling registered
Feb 13 15:20:43.218818 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:20:43.224823 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:20:43.230018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:43.235549 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:20:43.260872 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:20:43.268244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:20:43.283300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:20:43.286609 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:20:43.314639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:43.326812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:20:43.332208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:20:43.350801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:20:43.356884 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:20:43.373770 dracut-cmdline[286]: dracut-dracut-053
Feb 13 15:20:43.380695 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:20:43.451285 systemd-resolved[288]: Positive Trust Anchors:
Feb 13 15:20:43.451314 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:20:43.451376 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:20:43.555685 kernel: SCSI subsystem initialized
Feb 13 15:20:43.563624 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:20:43.575614 kernel: iscsi: registered transport (tcp)
Feb 13 15:20:43.598015 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:20:43.598088 kernel: QLogic iSCSI HBA Driver
Feb 13 15:20:43.677531 kernel: random: crng init done
Feb 13 15:20:43.676775 systemd-resolved[288]: Defaulting to hostname 'linux'.
Feb 13 15:20:43.681151 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:20:43.691319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:43.704202 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:20:43.713794 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:20:43.750576 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:20:43.750653 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:20:43.752485 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:20:43.821553 kernel: raid6: neonx8 gen() 6615 MB/s
Feb 13 15:20:43.835546 kernel: raid6: neonx4 gen() 6550 MB/s
Feb 13 15:20:43.853547 kernel: raid6: neonx2 gen() 5414 MB/s
Feb 13 15:20:43.869549 kernel: raid6: neonx1 gen() 3959 MB/s
Feb 13 15:20:43.886556 kernel: raid6: int64x8 gen() 3624 MB/s
Feb 13 15:20:43.903563 kernel: raid6: int64x4 gen() 3673 MB/s
Feb 13 15:20:43.920561 kernel: raid6: int64x2 gen() 3588 MB/s
Feb 13 15:20:43.938396 kernel: raid6: int64x1 gen() 2743 MB/s
Feb 13 15:20:43.938470 kernel: raid6: using algorithm neonx8 gen() 6615 MB/s
Feb 13 15:20:43.956384 kernel: raid6: .... xor() 4716 MB/s, rmw enabled
Feb 13 15:20:43.956467 kernel: raid6: using neon recovery algorithm
Feb 13 15:20:43.965169 kernel: xor: measuring software checksum speed
Feb 13 15:20:43.965251 kernel: 8regs : 12757 MB/sec
Feb 13 15:20:43.966311 kernel: 32regs : 12995 MB/sec
Feb 13 15:20:43.968537 kernel: arm64_neon : 8993 MB/sec
Feb 13 15:20:43.968596 kernel: xor: using function: 32regs (12995 MB/sec)
Feb 13 15:20:44.056578 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:20:44.077739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:20:44.088894 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:20:44.130061 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 15:20:44.140842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:20:44.167845 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:20:44.200329 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Feb 13 15:20:44.263238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:20:44.273831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:20:44.407084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:20:44.423743 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:20:44.476593 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:20:44.483327 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:20:44.489693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:44.494539 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:20:44.508445 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:20:44.553145 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:20:44.614221 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:20:44.614289 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:20:44.648938 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:20:44.649241 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:20:44.649529 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:5d:f5:27:fc:6b
Feb 13 15:20:44.650183 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:20:44.650574 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:44.660217 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:20:44.666369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:20:44.667725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:44.686574 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:44.695188 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:20:44.699735 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:20:44.714431 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:20:44.714524 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:20:44.722561 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:20:44.735438 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:20:44.735537 kernel: GPT:9289727 != 16777215
Feb 13 15:20:44.735567 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:20:44.735592 kernel: GPT:9289727 != 16777215
Feb 13 15:20:44.737200 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:20:44.737267 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:20:44.741438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:20:44.751866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:20:44.808382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:20:44.839727 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 15:20:44.870534 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (532)
Feb 13 15:20:44.926487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:20:44.965443 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:20:44.994486 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:20:44.997255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:20:45.027473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:20:45.040854 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:20:45.056769 disk-uuid[662]: Primary Header is updated.
Feb 13 15:20:45.056769 disk-uuid[662]: Secondary Entries is updated.
Feb 13 15:20:45.056769 disk-uuid[662]: Secondary Header is updated.
Feb 13 15:20:45.065585 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:20:46.082615 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:20:46.084915 disk-uuid[663]: The operation has completed successfully.
Feb 13 15:20:46.297233 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:20:46.298008 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:20:46.350844 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:20:46.361571 sh[923]: Success
Feb 13 15:20:46.387671 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:20:46.526824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:20:46.540728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:20:46.552019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:20:46.588484 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:20:46.588601 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:46.590402 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:20:46.591897 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:20:46.593014 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:20:46.619550 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:20:46.622948 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:20:46.627217 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:20:46.638838 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:20:46.646833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:20:46.679288 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:46.679368 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:46.680661 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:20:46.689091 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:20:46.708598 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:20:46.711096 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:46.722529 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:20:46.736857 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:20:46.898947 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:20:46.915190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:20:46.938401 ignition[1024]: Ignition 2.20.0
Feb 13 15:20:46.938430 ignition[1024]: Stage: fetch-offline
Feb 13 15:20:46.938954 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:46.939070 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:46.940808 ignition[1024]: Ignition finished successfully
Feb 13 15:20:46.950365 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:20:46.990381 systemd-networkd[1121]: lo: Link UP
Feb 13 15:20:46.990401 systemd-networkd[1121]: lo: Gained carrier
Feb 13 15:20:46.995777 systemd-networkd[1121]: Enumeration completed
Feb 13 15:20:46.997468 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:46.997475 systemd-networkd[1121]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:20:46.997675 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:20:47.001610 systemd[1]: Reached target network.target - Network.
Feb 13 15:20:47.009554 systemd-networkd[1121]: eth0: Link UP
Feb 13 15:20:47.009563 systemd-networkd[1121]: eth0: Gained carrier
Feb 13 15:20:47.009583 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:20:47.034696 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:20:47.056799 systemd-networkd[1121]: eth0: DHCPv4 address 172.31.28.93/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:20:47.079968 ignition[1124]: Ignition 2.20.0
Feb 13 15:20:47.080623 ignition[1124]: Stage: fetch
Feb 13 15:20:47.082834 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:47.082874 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:47.083134 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:47.103887 ignition[1124]: PUT result: OK
Feb 13 15:20:47.108132 ignition[1124]: parsed url from cmdline: ""
Feb 13 15:20:47.108314 ignition[1124]: no config URL provided
Feb 13 15:20:47.108335 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:20:47.108405 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:20:47.109859 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:47.116830 ignition[1124]: PUT result: OK
Feb 13 15:20:47.117017 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:20:47.121243 ignition[1124]: GET result: OK
Feb 13 15:20:47.121441 ignition[1124]: parsing config with SHA512: 5d290be85dfa9a486df17c57affa2e63021718e23b3df2c5f5b494316605547614b7d54156ce1a5b96d70d91467fdfa73d40a24ef52e7a9dac2112cc9125c91c
Feb 13 15:20:47.134783 unknown[1124]: fetched base config from "system"
Feb 13 15:20:47.136243 unknown[1124]: fetched base config from "system"
Feb 13 15:20:47.136449 unknown[1124]: fetched user config from "aws"
Feb 13 15:20:47.139164 ignition[1124]: fetch: fetch complete
Feb 13 15:20:47.139182 ignition[1124]: fetch: fetch passed
Feb 13 15:20:47.139307 ignition[1124]: Ignition finished successfully
Feb 13 15:20:47.147435 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:20:47.166954 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:20:47.192238 ignition[1132]: Ignition 2.20.0
Feb 13 15:20:47.192830 ignition[1132]: Stage: kargs
Feb 13 15:20:47.193442 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:47.193469 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:47.194359 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:47.200439 ignition[1132]: PUT result: OK
Feb 13 15:20:47.207327 ignition[1132]: kargs: kargs passed
Feb 13 15:20:47.208950 ignition[1132]: Ignition finished successfully
Feb 13 15:20:47.213485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:20:47.233969 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:20:47.258609 ignition[1138]: Ignition 2.20.0
Feb 13 15:20:47.259174 ignition[1138]: Stage: disks
Feb 13 15:20:47.259988 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:47.260035 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:47.260215 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:47.262688 ignition[1138]: PUT result: OK
Feb 13 15:20:47.272910 ignition[1138]: disks: disks passed
Feb 13 15:20:47.273111 ignition[1138]: Ignition finished successfully
Feb 13 15:20:47.276535 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:20:47.278999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:20:47.281488 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:20:47.283991 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:20:47.286039 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:20:47.288961 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:20:47.306042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:20:47.359291 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:20:47.365352 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:20:47.381905 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:20:47.481544 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:20:47.482660 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:20:47.486679 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:20:47.512774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:20:47.518607 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:20:47.521946 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:20:47.522049 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:20:47.522107 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:20:47.558556 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Feb 13 15:20:47.562994 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:20:47.567920 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:47.567970 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:47.567996 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:20:47.580805 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:20:47.589556 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:20:47.592730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:20:47.693999 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:20:47.703874 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:20:47.713110 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:20:47.722894 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:20:47.916144 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:20:47.938876 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:20:47.956794 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:47.955003 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:20:47.958756 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:20:48.010811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:20:48.012761 ignition[1283]: INFO : Ignition 2.20.0
Feb 13 15:20:48.012761 ignition[1283]: INFO : Stage: mount
Feb 13 15:20:48.019341 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:48.019341 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:48.019341 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:48.019341 ignition[1283]: INFO : PUT result: OK
Feb 13 15:20:48.029973 ignition[1283]: INFO : mount: mount passed
Feb 13 15:20:48.033239 ignition[1283]: INFO : Ignition finished successfully
Feb 13 15:20:48.034926 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:20:48.043713 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:20:48.071863 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:20:48.110606 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1294)
Feb 13 15:20:48.114316 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:20:48.114392 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:20:48.114433 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:20:48.120545 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:20:48.125587 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:20:48.166609 ignition[1311]: INFO : Ignition 2.20.0
Feb 13 15:20:48.169684 ignition[1311]: INFO : Stage: files
Feb 13 15:20:48.169684 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:20:48.169684 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:20:48.169684 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:20:48.179090 ignition[1311]: INFO : PUT result: OK
Feb 13 15:20:48.183189 ignition[1311]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:20:48.187194 ignition[1311]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:20:48.187194 ignition[1311]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:20:48.200005 ignition[1311]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:20:48.203460 ignition[1311]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:20:48.207435 unknown[1311]: wrote ssh authorized keys file for user: core
Feb 13 15:20:48.210553 ignition[1311]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:20:48.222958 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:48.222958 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:20:48.284183 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:20:48.903679 systemd-networkd[1121]: eth0: Gained IPv6LL
Feb 13 15:20:48.932965 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:20:48.937804 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:20:48.937804 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:20:49.397726 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:20:49.549209 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:20:49.553418 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:20:49.957590 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:20:50.300900 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:20:50.304960 ignition[1311]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:20:50.304960 ignition[1311]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:50.311363 ignition[1311]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:20:50.311363 ignition[1311]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:20:50.311363 ignition[1311]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:50.320360 ignition[1311]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:20:50.323662 ignition[1311]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:50.323662 ignition[1311]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:20:50.323662 ignition[1311]: INFO : files: files passed
Feb 13 15:20:50.323662 ignition[1311]: INFO : Ignition finished successfully
Feb 13 15:20:50.332608 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:20:50.349954 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:20:50.353861 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:20:50.362474 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:20:50.362803 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:20:50.397853 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:50.397853 initrd-setup-root-after-ignition[1340]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:50.406351 initrd-setup-root-after-ignition[1344]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:20:50.412622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:20:50.416023 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:20:50.435943 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:20:50.497849 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:20:50.498328 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:20:50.505519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:20:50.508148 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:20:50.508578 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:20:50.530945 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:20:50.561616 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:50.572897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:20:50.598273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:20:50.598701 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:20:50.610000 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:20:50.614018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:20:50.614340 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:20:50.620635 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:20:50.622917 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:20:50.626434 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:20:50.635278 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:20:50.638418 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:20:50.645760 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:20:50.648243 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:20:50.652310 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:20:50.654709 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:20:50.657068 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:20:50.659162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:20:50.659442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:20:50.674241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:20:50.677537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:20:50.681295 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:20:50.684310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:20:50.688306 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:20:50.688722 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:20:50.693346 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:20:50.693957 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:20:50.703587 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:20:50.703853 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:20:50.719894 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:20:50.728083 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:20:50.737783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:20:50.738330 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:20:50.752980 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:20:50.753239 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:20:50.774887 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:20:50.780096 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:20:50.791107 ignition[1364]: INFO : Ignition 2.20.0 Feb 13 15:20:50.791107 ignition[1364]: INFO : Stage: umount Feb 13 15:20:50.796091 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:20:50.796091 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:20:50.796091 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:20:50.804212 ignition[1364]: INFO : PUT result: OK Feb 13 15:20:50.820618 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:20:50.825331 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:20:50.825768 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 13 15:20:50.835727 ignition[1364]: INFO : umount: umount passed Feb 13 15:20:50.835727 ignition[1364]: INFO : Ignition finished successfully Feb 13 15:20:50.834091 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:20:50.836733 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:20:50.845961 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:20:50.846257 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:20:50.850122 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:20:50.850264 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:20:50.852739 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:20:50.852854 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:20:50.861758 systemd[1]: Stopped target network.target - Network. Feb 13 15:20:50.863785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:20:50.863919 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:20:50.866977 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:20:50.868808 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:20:50.872551 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:20:50.875139 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:20:50.876934 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:20:50.878929 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:20:50.879025 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:20:50.881524 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:20:50.881622 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:20:50.883740 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:20:50.883856 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:20:50.885907 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:20:50.886011 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:20:50.888723 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:20:50.888850 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:20:50.892644 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:20:50.899904 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:20:50.900983 systemd-networkd[1121]: eth0: DHCPv6 lease lost Feb 13 15:20:50.905104 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:20:50.905320 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:20:50.908558 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:20:50.908807 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:20:50.917035 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:20:50.917147 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:20:50.936121 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:20:50.972621 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:20:50.972756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 15:20:50.975133 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:20:50.975246 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:20:50.978164 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:20:50.978293 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:20:50.994648 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:20:50.994770 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:20:51.007365 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:20:51.028991 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:20:51.031022 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:20:51.036968 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:20:51.037114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:20:51.041072 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:20:51.041156 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:20:51.043288 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:20:51.043397 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:20:51.045752 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:20:51.045859 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:20:51.055147 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:20:51.055263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:20:51.071957 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:20:51.082790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:20:51.082930 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:20:51.085702 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:20:51.085822 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:20:51.088875 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:20:51.088989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:20:51.092550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:20:51.092665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:20:51.096194 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:20:51.096915 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:20:51.146144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:20:51.146584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:20:51.153020 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:20:51.170249 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:20:51.187959 systemd[1]: Switching root. Feb 13 15:20:51.223687 systemd-journald[252]: Journal stopped Feb 13 15:20:53.087744 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:20:53.087874 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:20:53.087916 kernel: SELinux: policy capability open_perms=1 Feb 13 15:20:53.087947 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:20:53.087983 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:20:53.088019 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:20:53.088062 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:20:53.088089 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:20:53.088117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:20:53.088147 kernel: audit: type=1403 audit(1739460051.591:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:20:53.088179 systemd[1]: Successfully loaded SELinux policy in 48.947ms. Feb 13 15:20:53.088268 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.196ms. Feb 13 15:20:53.088306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:20:53.088351 systemd[1]: Detected virtualization amazon. Feb 13 15:20:53.088382 systemd[1]: Detected architecture arm64. Feb 13 15:20:53.088413 systemd[1]: Detected first boot. Feb 13 15:20:53.088441 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:20:53.088473 zram_generator::config[1409]: No configuration found. Feb 13 15:20:53.088525 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:20:53.088560 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:20:53.088601 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:20:53.088636 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:20:53.088668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:20:53.088700 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:20:53.088729 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:20:53.088757 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:20:53.088787 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:20:53.088821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:20:53.088853 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:20:53.088891 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:20:53.088930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:20:53.088959 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:20:53.088991 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:20:53.089021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:20:53.089051 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 15:20:53.089082 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:20:53.089113 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:20:53.089142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:20:53.089175 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:20:53.089206 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:20:53.089237 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:20:53.089267 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:20:53.089297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:20:53.089327 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:20:53.089356 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:20:53.089388 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:20:53.089423 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:20:53.089455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:20:53.089489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:20:53.089540 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:20:53.089573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:20:53.089602 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:20:53.089632 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:20:53.089663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:20:53.089692 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:20:53.089729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:20:53.089760 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:20:53.089790 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:20:53.089820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:20:53.089849 systemd[1]: Reached target machines.target - Containers. Feb 13 15:20:53.089936 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:20:53.089970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:20:53.090002 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:20:53.090030 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:20:53.090064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:20:53.090096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:20:53.090126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:20:53.090167 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:20:53.090205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:20:53.090240 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:20:53.090270 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:20:53.090299 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:20:53.090331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:20:53.090360 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:20:53.090390 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:20:53.090421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:20:53.090451 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:20:53.090479 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:20:53.090527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:20:53.090558 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:20:53.090586 systemd[1]: Stopped verity-setup.service. Feb 13 15:20:53.090619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:20:53.090650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:20:53.090678 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:20:53.090708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:20:53.090740 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:20:53.090768 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:20:53.090796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:20:53.090828 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:20:53.090857 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:20:53.090897 kernel: fuse: init (API version 7.39) Feb 13 15:20:53.090926 kernel: loop: module loaded Feb 13 15:20:53.090954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:20:53.090984 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:20:53.091012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:20:53.091044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:20:53.091115 systemd-journald[1493]: Collecting audit messages is disabled. Feb 13 15:20:53.091170 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:20:53.091200 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:20:53.091227 systemd-journald[1493]: Journal started Feb 13 15:20:53.091284 systemd-journald[1493]: Runtime Journal (/run/log/journal/ec289947fa4683d337fda825d5e1b509) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:20:52.565994 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:20:52.590754 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:20:52.591536 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:20:53.096535 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:20:53.102931 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:20:53.105676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 15:20:53.112601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:20:53.117997 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:20:53.123779 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:20:53.165206 kernel: ACPI: bus type drm_connector registered Feb 13 15:20:53.168090 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:20:53.171658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:20:53.174695 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:20:53.181048 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:20:53.191862 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:20:53.206762 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:20:53.211716 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:20:53.211791 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:20:53.216476 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:20:53.226781 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:20:53.233883 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:20:53.236047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:53.249855 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:20:53.254858 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:20:53.257154 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:20:53.261749 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:20:53.264270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:20:53.276840 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:20:53.285562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:20:53.298889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:20:53.305157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:20:53.307768 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:20:53.312585 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:20:53.376062 systemd-journald[1493]: Time spent on flushing to /var/log/journal/ec289947fa4683d337fda825d5e1b509 is 93.296ms for 911 entries. Feb 13 15:20:53.376062 systemd-journald[1493]: System Journal (/var/log/journal/ec289947fa4683d337fda825d5e1b509) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:20:53.494785 systemd-journald[1493]: Received client request to flush runtime journal. 
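[Editor's note] Two quick sanity checks on the journald figures above: the system journal reports 8.0M used against a 75.3M cap with 67.3M free, and flushing 911 entries took 93.296 ms, i.e. roughly 0.1 ms per entry.

    # Numbers taken from the systemd-journald lines above
    print(f"{75.3 - 8.0:.1f}M free")                 # 67.3M, matching the log
    print(f"{93.296 / 911 * 1000:.1f} us per entry") # ~102.4 us average flush cost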
Feb 13 15:20:53.494931 kernel: loop0: detected capacity change from 0 to 53784 Feb 13 15:20:53.384236 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:20:53.388156 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:20:53.407840 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:20:53.445599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:20:53.482088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:20:53.489231 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Feb 13 15:20:53.489256 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Feb 13 15:20:53.498902 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:20:53.503552 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:20:53.528697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:20:53.531109 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:20:53.534778 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:20:53.549905 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:20:53.575529 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:20:53.584344 udevadm[1549]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:20:53.622572 kernel: loop1: detected capacity change from 0 to 116784 Feb 13 15:20:53.658882 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:20:53.671804 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:20:53.677568 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 15:20:53.729118 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Feb 13 15:20:53.730232 systemd-tmpfiles[1561]: ACLs are not supported, ignoring. Feb 13 15:20:53.742312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:20:53.750014 kernel: loop3: detected capacity change from 0 to 113552 Feb 13 15:20:53.824223 kernel: loop4: detected capacity change from 0 to 53784 Feb 13 15:20:53.850586 kernel: loop5: detected capacity change from 0 to 116784 Feb 13 15:20:53.882560 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 15:20:53.924545 kernel: loop7: detected capacity change from 0 to 113552 Feb 13 15:20:53.950670 (sd-merge)[1566]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:20:53.951639 (sd-merge)[1566]: Merged extensions into '/usr'. Feb 13 15:20:53.966574 systemd[1]: Reloading requested from client PID 1537 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:20:53.967416 systemd[1]: Reloading... Feb 13 15:20:54.168682 zram_generator::config[1593]: No configuration found. Feb 13 15:20:54.219105 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:20:54.440220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
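[Editor's note] The "(sd-merge)" lines above show systemd-sysext activating the four extension images and merging them into /usr; the kubernetes image is the one the Ignition files stage linked into /etc/extensions earlier. The following Python sketch only enumerates some of the hierarchies systemd-sysext scans for extension images (listing, not merging), under the assumption that the standard search directories are in use.

    import pathlib

    # Assumed subset of the hierarchies systemd-sysext scans for *.raw
    # images or extension directories; /etc/extensions/kubernetes.raw was
    # symlinked there by the Ignition files stage logged earlier.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(pathlib.Path, SEARCH_DIRS):
        if d.is_dir():
            for entry in sorted(d.iterdir()):
                print(f"{d}: {entry.name}")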
Feb 13 15:20:54.560045 systemd[1]: Reloading finished in 591 ms. Feb 13 15:20:54.604606 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:20:54.608217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:20:54.627948 systemd[1]: Starting ensure-sysext.service... Feb 13 15:20:54.635829 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:20:54.640659 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:20:54.654882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:20:54.664717 systemd[1]: Reloading requested from client PID 1645 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:20:54.664751 systemd[1]: Reloading... Feb 13 15:20:54.735146 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:20:54.739492 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:20:54.742407 systemd-tmpfiles[1646]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:20:54.745850 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Feb 13 15:20:54.746014 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Feb 13 15:20:54.763843 systemd-udevd[1648]: Using default interface naming scheme 'v255'. Feb 13 15:20:54.767626 systemd-tmpfiles[1646]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:20:54.767647 systemd-tmpfiles[1646]: Skipping /boot Feb 13 15:20:54.822220 systemd-tmpfiles[1646]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:20:54.827060 zram_generator::config[1675]: No configuration found. Feb 13 15:20:54.824310 systemd-tmpfiles[1646]: Skipping /boot Feb 13 15:20:55.014040 (udev-worker)[1679]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:20:55.198599 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1688) Feb 13 15:20:55.242125 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:20:55.412370 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:20:55.413646 systemd[1]: Reloading finished in 748 ms. Feb 13 15:20:55.447221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:20:55.472779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:20:55.559444 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:20:55.573465 systemd[1]: Finished ensure-sysext.service. Feb 13 15:20:55.580155 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:20:55.600811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:20:55.611854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:20:55.614625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 15:20:55.620842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:20:55.625032 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:20:55.630863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:20:55.644244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:20:55.650847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:20:55.653054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:20:55.665184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:20:55.671826 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:20:55.681918 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:20:55.691378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:20:55.693387 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:20:55.699961 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:20:55.708844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:20:55.714284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:20:55.715150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:20:55.717918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:20:55.719636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:20:55.733003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:20:55.745856 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:20:55.761455 lvm[1844]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:20:55.775681 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:20:55.776007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:20:55.785374 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:20:55.785687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:20:55.788266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:20:55.796662 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:20:55.799412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:20:55.819949 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:20:55.864747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:20:55.873245 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:20:55.877545 lvm[1876]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:20:55.915129 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:20:55.927064 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 15:20:55.931291 augenrules[1887]: No rules Feb 13 15:20:55.931348 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:20:55.931795 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:20:55.944926 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:20:55.968723 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:20:55.976686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:20:55.978806 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:20:56.000235 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:20:56.032642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:20:56.116674 systemd-networkd[1852]: lo: Link UP Feb 13 15:20:56.116697 systemd-networkd[1852]: lo: Gained carrier Feb 13 15:20:56.119863 systemd-networkd[1852]: Enumeration completed Feb 13 15:20:56.120844 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:20:56.120869 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:20:56.121706 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:20:56.127559 systemd-networkd[1852]: eth0: Link UP Feb 13 15:20:56.127986 systemd-networkd[1852]: eth0: Gained carrier Feb 13 15:20:56.128023 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:20:56.130870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:20:56.137683 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.28.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:20:56.140134 systemd-resolved[1853]: Positive Trust Anchors: Feb 13 15:20:56.140180 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:20:56.140244 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:20:56.152223 systemd-resolved[1853]: Defaulting to hostname 'linux'. Feb 13 15:20:56.156827 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:20:56.159672 systemd[1]: Reached target network.target - Network. Feb 13 15:20:56.161747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:20:56.164347 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:20:56.167647 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
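[Editor's note] The networkd line above reports eth0 at 172.31.28.93/20 with gateway 172.31.16.1 acquired over DHCPv4. Python's ipaddress module confirms the arithmetic: a /20 mask places the address in 172.31.16.0/20, which contains the gateway.

    import ipaddress

    iface = ipaddress.ip_interface("172.31.28.93/20")  # from the networkd log line
    print(iface.network)                               # 172.31.16.0/20
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: gateway is in-subnet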
Feb 13 15:20:56.170036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:20:56.172777 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:20:56.175356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:20:56.177698 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:20:56.180057 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:20:56.180128 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:20:56.181926 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:20:56.185227 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:20:56.189886 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:20:56.200765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:20:56.203933 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:20:56.206288 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:20:56.208247 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:20:56.210014 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:20:56.210068 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:20:56.217746 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:20:56.222799 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:20:56.235831 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:20:56.242563 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:20:56.253774 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:20:56.255773 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:20:56.260164 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:20:56.268944 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:20:56.275837 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:20:56.298550 jq[1914]: false Feb 13 15:20:56.281923 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:20:56.289032 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:20:56.301405 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:20:56.312040 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:20:56.316765 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:20:56.318202 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:20:56.322090 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:20:56.327756 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 15:20:56.335214 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:20:56.336690 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:20:56.374081 dbus-daemon[1913]: [system] SELinux support is enabled Feb 13 15:20:56.380248 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:20:56.386848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:20:56.386893 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:20:56.389426 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:20:56.389469 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:20:56.425687 extend-filesystems[1915]: Found loop4 Feb 13 15:20:56.425687 extend-filesystems[1915]: Found loop5 Feb 13 15:20:56.425687 extend-filesystems[1915]: Found loop6 Feb 13 15:20:56.425687 extend-filesystems[1915]: Found loop7 Feb 13 15:20:56.425687 extend-filesystems[1915]: Found nvme0n1 Feb 13 15:20:56.418110 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:20:56.397867 dbus-daemon[1913]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p1 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p2 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p3 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found usr Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p4 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p6 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p7 Feb 13 15:20:56.454417 extend-filesystems[1915]: Found nvme0n1p9 Feb 13 15:20:56.454417 extend-filesystems[1915]: Checking size of /dev/nvme0n1p9 Feb 13 15:20:56.533756 update_engine[1925]: I20250213 15:20:56.428995 1925 main.cc:92] Flatcar Update Engine starting Feb 13 15:20:56.533756 update_engine[1925]: I20250213 15:20:56.452450 1925 update_check_scheduler.cc:74] Next update check in 2m49s Feb 13 15:20:56.439420 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:20:56.400087 dbus-daemon[1913]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:20:56.441086 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:20:56.477910 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:20:56.485267 (ntainerd)[1932]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:20:56.551872 jq[1926]: true Feb 13 15:20:56.495883 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:20:56.573539 tar[1935]: linux-arm64/helm Feb 13 15:20:56.567285 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 15:20:56.574209 extend-filesystems[1915]: Resized partition /dev/nvme0n1p9 Feb 13 15:20:56.586758 extend-filesystems[1958]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:20:56.612475 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:31:02 UTC 2025 (1): Starting Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: ---------------------------------------------------- Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: corporation. Support and training for ntp-4 are Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 15:20:56.613993 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: ---------------------------------------------------- Feb 13 15:20:56.614598 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:20:56.612587 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:20:56.612608 ntpd[1917]: ---------------------------------------------------- Feb 13 15:20:56.612627 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:20:56.612646 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:20:56.612663 ntpd[1917]: corporation. Support and training for ntp-4 are Feb 13 15:20:56.612681 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 15:20:56.612702 ntpd[1917]: ---------------------------------------------------- Feb 13 15:20:56.617407 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 15:20:56.619627 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 15:20:56.620877 ntpd[1917]: basedate set to 2025-02-01 Feb 13 15:20:56.622697 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: basedate set to 2025-02-01 Feb 13 15:20:56.622697 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 15:20:56.620913 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 15:20:56.626800 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listen normally on 3 eth0 172.31.28.93:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: bind(21) AF_INET6 fe80::45d:f5ff:fe27:fc6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: unable to create socket on eth0 (5) for fe80::45d:f5ff:fe27:fc6b%2#123 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: failed to init interface for address fe80::45d:f5ff:fe27:fc6b%2 Feb 13 15:20:56.629566 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 15:20:56.628641 ntpd[1917]: Listen and drop on 
1 v4wildcard 0.0.0.0:123 Feb 13 15:20:56.628904 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:20:56.628965 ntpd[1917]: Listen normally on 3 eth0 172.31.28.93:123 Feb 13 15:20:56.629029 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 15:20:56.629105 ntpd[1917]: bind(21) AF_INET6 fe80::45d:f5ff:fe27:fc6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:56.629142 ntpd[1917]: unable to create socket on eth0 (5) for fe80::45d:f5ff:fe27:fc6b%2#123 Feb 13 15:20:56.629169 ntpd[1917]: failed to init interface for address fe80::45d:f5ff:fe27:fc6b%2 Feb 13 15:20:56.629219 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 15:20:56.640770 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:56.641665 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:56.641665 ntpd[1917]: 13 Feb 15:20:56 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:56.640834 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:20:56.660665 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:20:56.661142 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:20:56.688879 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:20:56.695443 jq[1952]: true Feb 13 15:20:56.776595 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:20:56.810946 extend-filesystems[1958]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:20:56.810946 extend-filesystems[1958]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:20:56.810946 extend-filesystems[1958]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:20:56.804154 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:20:56.832368 coreos-metadata[1912]: Feb 13 15:20:56.818 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:20:56.832929 extend-filesystems[1915]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:20:56.804575 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
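[Editor's note] The EXT4 resize messages above translate as follows: at 4 KiB per block, the root filesystem on /dev/nvme0n1p9 grew online from 553472 blocks (about 2.1 GiB) to 1489915 blocks (about 5.7 GiB).

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output above

    for blocks in (553472, 1489915):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 blocks = 2.11 GiB   (before)
    # 1489915 blocks = 5.68 GiB  (after the online grow of /dev/nvme0n1p9)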
Feb 13 15:20:56.836812 coreos-metadata[1912]: Feb 13 15:20:56.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:20:56.838298 coreos-metadata[1912]: Feb 13 15:20:56.837 INFO Fetch successful Feb 13 15:20:56.838298 coreos-metadata[1912]: Feb 13 15:20:56.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:20:56.843744 coreos-metadata[1912]: Feb 13 15:20:56.840 INFO Fetch successful Feb 13 15:20:56.843744 coreos-metadata[1912]: Feb 13 15:20:56.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:20:56.849411 coreos-metadata[1912]: Feb 13 15:20:56.847 INFO Fetch successful Feb 13 15:20:56.849411 coreos-metadata[1912]: Feb 13 15:20:56.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:20:56.853700 coreos-metadata[1912]: Feb 13 15:20:56.853 INFO Fetch successful Feb 13 15:20:56.853700 coreos-metadata[1912]: Feb 13 15:20:56.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:20:56.860589 coreos-metadata[1912]: Feb 13 15:20:56.859 INFO Fetch failed with 404: resource not found Feb 13 15:20:56.860589 coreos-metadata[1912]: Feb 13 15:20:56.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:20:56.862753 coreos-metadata[1912]: Feb 13 15:20:56.862 INFO Fetch successful Feb 13 15:20:56.862753 coreos-metadata[1912]: Feb 13 15:20:56.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:20:56.868981 coreos-metadata[1912]: Feb 13 15:20:56.868 INFO Fetch successful Feb 13 15:20:56.868981 coreos-metadata[1912]: Feb 13 15:20:56.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:20:56.874678 coreos-metadata[1912]: Feb 13 15:20:56.874 INFO Fetch successful Feb 13 15:20:56.874678 coreos-metadata[1912]: Feb 13 15:20:56.874 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:20:56.880811 coreos-metadata[1912]: Feb 13 15:20:56.880 INFO Fetch successful Feb 13 15:20:56.880811 coreos-metadata[1912]: Feb 13 15:20:56.880 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:20:56.888538 coreos-metadata[1912]: Feb 13 15:20:56.885 INFO Fetch successful Feb 13 15:20:56.941583 bash[1991]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:20:56.945921 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:20:56.941850 dbus-daemon[1913]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:20:56.942992 dbus-daemon[1913]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1933 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:20:56.958633 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:20:56.969733 systemd-logind[1923]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:20:56.969789 systemd-logind[1923]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:20:56.971621 systemd-logind[1923]: New seat seat0. Feb 13 15:20:56.977252 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 15:20:56.988467 locksmithd[1942]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:20:57.069702 systemd[1]: Starting sshkeys.service... Feb 13 15:20:57.071374 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:20:57.094238 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:20:57.097391 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:20:57.132173 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1688) Feb 13 15:20:57.143817 polkitd[2002]: Started polkitd version 121 Feb 13 15:20:57.181736 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:20:57.194673 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:20:57.225108 polkitd[2002]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:20:57.225237 polkitd[2002]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:20:57.240320 polkitd[2002]: Finished loading, compiling and executing 2 rules Feb 13 15:20:57.241759 dbus-daemon[1913]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:20:57.242030 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:20:57.245343 polkitd[2002]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:20:57.343576 systemd-resolved[1853]: System hostname changed to 'ip-172-31-28-93'. Feb 13 15:20:57.344243 systemd-hostnamed[1933]: Hostname set to (transient) Feb 13 15:20:57.390690 coreos-metadata[2040]: Feb 13 15:20:57.388 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:20:57.390690 coreos-metadata[2040]: Feb 13 15:20:57.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:20:57.394474 coreos-metadata[2040]: Feb 13 15:20:57.393 INFO Fetch successful Feb 13 15:20:57.394474 coreos-metadata[2040]: Feb 13 15:20:57.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:20:57.400727 coreos-metadata[2040]: Feb 13 15:20:57.400 INFO Fetch successful Feb 13 15:20:57.404792 unknown[2040]: wrote ssh authorized keys file for user: core Feb 13 15:20:57.436338 containerd[1932]: time="2025-02-13T15:20:57.436186642Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:20:57.491539 update-ssh-keys[2085]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:20:57.506083 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:20:57.515776 systemd[1]: Finished sshkeys.service. Feb 13 15:20:57.615054 containerd[1932]: time="2025-02-13T15:20:57.612749843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:20:57.615169 ntpd[1917]: 13 Feb 15:20:57 ntpd[1917]: bind(24) AF_INET6 fe80::45d:f5ff:fe27:fc6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:57.615169 ntpd[1917]: 13 Feb 15:20:57 ntpd[1917]: unable to create socket on eth0 (6) for fe80::45d:f5ff:fe27:fc6b%2#123 Feb 13 15:20:57.615169 ntpd[1917]: 13 Feb 15:20:57 ntpd[1917]: failed to init interface for address fe80::45d:f5ff:fe27:fc6b%2 Feb 13 15:20:57.614760 ntpd[1917]: bind(24) AF_INET6 fe80::45d:f5ff:fe27:fc6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:20:57.614820 ntpd[1917]: unable to create socket on eth0 (6) for fe80::45d:f5ff:fe27:fc6b%2#123 Feb 13 15:20:57.614849 ntpd[1917]: failed to init interface for address fe80::45d:f5ff:fe27:fc6b%2 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.621962519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.622038419Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.622081667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.622784579Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.622837163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.622983371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.623018879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.623344571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.623380595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.623414723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:57.625545 containerd[1932]: time="2025-02-13T15:20:57.623439023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:57.626061 containerd[1932]: time="2025-02-13T15:20:57.623687687Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:20:57.626061 containerd[1932]: time="2025-02-13T15:20:57.624164231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:20:57.626061 containerd[1932]: time="2025-02-13T15:20:57.624399611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:20:57.626061 containerd[1932]: time="2025-02-13T15:20:57.624436187Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:20:57.628973 containerd[1932]: time="2025-02-13T15:20:57.627756839Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:20:57.628973 containerd[1932]: time="2025-02-13T15:20:57.627920831Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:20:57.638093 containerd[1932]: time="2025-02-13T15:20:57.638005835Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:20:57.638273 containerd[1932]: time="2025-02-13T15:20:57.638124971Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:20:57.638273 containerd[1932]: time="2025-02-13T15:20:57.638180795Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:20:57.638273 containerd[1932]: time="2025-02-13T15:20:57.638219267Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:20:57.638273 containerd[1932]: time="2025-02-13T15:20:57.638259215Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:20:57.641305 containerd[1932]: time="2025-02-13T15:20:57.641231327Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.642428831Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.642943523Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.642990287Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643157771Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643198031Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643229039Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643269167Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643303319Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643336823Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643370867Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.643960091Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.644006591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.644051855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.644601 containerd[1932]: time="2025-02-13T15:20:57.644084207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644113847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644147807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644177735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644209547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644237579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644286575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644331215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644367251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644401931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644431883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644460419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.645241 containerd[1932]: time="2025-02-13T15:20:57.644492087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 15:20:57.646633 containerd[1932]: time="2025-02-13T15:20:57.646562447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.646755 containerd[1932]: time="2025-02-13T15:20:57.646638359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.646755 containerd[1932]: time="2025-02-13T15:20:57.646670663Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:20:57.646892 containerd[1932]: time="2025-02-13T15:20:57.646851551Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647011523Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647069291Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647111243Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647135339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647167607Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647191739Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:20:57.648541 containerd[1932]: time="2025-02-13T15:20:57.647221487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:20:57.648936 containerd[1932]: time="2025-02-13T15:20:57.647813471Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:20:57.648936 containerd[1932]: time="2025-02-13T15:20:57.647920895Z" level=info msg="Connect containerd service" Feb 13 15:20:57.648936 containerd[1932]: time="2025-02-13T15:20:57.648001271Z" level=info msg="using legacy CRI server" Feb 13 15:20:57.648936 containerd[1932]: time="2025-02-13T15:20:57.648024551Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:20:57.648936 containerd[1932]: time="2025-02-13T15:20:57.648280859Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:20:57.652592 containerd[1932]: time="2025-02-13T15:20:57.651917591Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:20:57.655651 
containerd[1932]: time="2025-02-13T15:20:57.652692083Z" level=info msg="Start subscribing containerd event" Feb 13 15:20:57.655651 containerd[1932]: time="2025-02-13T15:20:57.652806635Z" level=info msg="Start recovering state" Feb 13 15:20:57.655651 containerd[1932]: time="2025-02-13T15:20:57.652937159Z" level=info msg="Start event monitor" Feb 13 15:20:57.655651 containerd[1932]: time="2025-02-13T15:20:57.652967243Z" level=info msg="Start snapshots syncer" Feb 13 15:20:57.655651 containerd[1932]: time="2025-02-13T15:20:57.652990331Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:20:57.655651 containerd[1932]: time="2025-02-13T15:20:57.653008823Z" level=info msg="Start streaming server" Feb 13 15:20:57.656603 containerd[1932]: time="2025-02-13T15:20:57.655544063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:20:57.656783 containerd[1932]: time="2025-02-13T15:20:57.656727071Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:20:57.661073 containerd[1932]: time="2025-02-13T15:20:57.660693515Z" level=info msg="containerd successfully booted in 0.227149s" Feb 13 15:20:57.660844 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:20:57.799674 systemd-networkd[1852]: eth0: Gained IPv6LL Feb 13 15:20:57.810593 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:20:57.814344 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:20:57.828668 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:20:57.842850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:20:57.853211 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:20:57.950688 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:20:58.020568 amazon-ssm-agent[2120]: Initializing new seelog logger Feb 13 15:20:58.022009 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete Feb 13 15:20:58.024200 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.024200 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.025550 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 processing appconfig overrides Feb 13 15:20:58.027743 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.027743 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.027935 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 processing appconfig overrides Feb 13 15:20:58.028246 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.028246 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.028376 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 processing appconfig overrides Feb 13 15:20:58.029526 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO Proxy environment variables: Feb 13 15:20:58.033887 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:20:58.033887 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:20:58.034073 amazon-ssm-agent[2120]: 2025/02/13 15:20:58 processing appconfig overrides Feb 13 15:20:58.131394 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO https_proxy: Feb 13 15:20:58.200632 sshd_keygen[1937]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:20:58.231160 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO http_proxy: Feb 13 15:20:58.319711 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:20:58.331823 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO no_proxy: Feb 13 15:20:58.332121 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:20:58.339158 systemd[1]: Started sshd@0-172.31.28.93:22-147.75.109.163:47590.service - OpenSSH per-connection server daemon (147.75.109.163:47590). Feb 13 15:20:58.370994 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:20:58.371670 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:20:58.385351 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:20:58.433253 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:20:58.462487 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:20:58.473127 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:20:58.481117 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:20:58.483741 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:20:58.531329 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:20:58.595084 tar[1935]: linux-arm64/LICENSE Feb 13 15:20:58.595674 tar[1935]: linux-arm64/README.md Feb 13 15:20:58.632370 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO Agent will take identity from EC2 Feb 13 15:20:58.633656 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:20:58.662533 sshd[2145]: Accepted publickey for core from 147.75.109.163 port 47590 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:58.667779 sshd-session[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:58.695364 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:20:58.709004 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:20:58.718985 systemd-logind[1923]: New session 1 of user core. Feb 13 15:20:58.734687 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:58.762060 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:20:58.778164 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:20:58.800856 (systemd)[2161]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:20:58.834547 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [Registrar] Starting registrar module Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:20:58.893806 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:20:58.934093 amazon-ssm-agent[2120]: 2025-02-13 15:20:58 INFO [CredentialRefresher] Next credential rotation will be in 30.8499882593 minutes Feb 13 15:20:59.065779 systemd[2161]: Queued start job for default target default.target. Feb 13 15:20:59.079022 systemd[2161]: Created slice app.slice - User Application Slice. Feb 13 15:20:59.079266 systemd[2161]: Reached target paths.target - Paths. Feb 13 15:20:59.079448 systemd[2161]: Reached target timers.target - Timers. Feb 13 15:20:59.082758 systemd[2161]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:20:59.129347 systemd[2161]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:20:59.129721 systemd[2161]: Reached target sockets.target - Sockets. Feb 13 15:20:59.129760 systemd[2161]: Reached target basic.target - Basic System. Feb 13 15:20:59.129862 systemd[2161]: Reached target default.target - Main User Target. Feb 13 15:20:59.129931 systemd[2161]: Startup finished in 308ms. Feb 13 15:20:59.130246 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:20:59.141827 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:20:59.303131 systemd[1]: Started sshd@1-172.31.28.93:22-147.75.109.163:47594.service - OpenSSH per-connection server daemon (147.75.109.163:47594). Feb 13 15:20:59.499217 sshd[2172]: Accepted publickey for core from 147.75.109.163 port 47594 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:59.501742 sshd-session[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:59.511799 systemd-logind[1923]: New session 2 of user core. Feb 13 15:20:59.517791 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:20:59.636821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:20:59.640363 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:20:59.642741 systemd[1]: Startup finished in 1.100s (kernel) + 8.803s (initrd) + 8.098s (userspace) = 18.002s. Feb 13 15:20:59.651834 sshd-session[2172]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:59.652155 sshd[2174]: Connection closed by 147.75.109.163 port 47594 Feb 13 15:20:59.650227 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:20:59.663761 systemd[1]: sshd@1-172.31.28.93:22-147.75.109.163:47594.service: Deactivated successfully. Feb 13 15:20:59.670424 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:20:59.673490 systemd-logind[1923]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:20:59.682230 agetty[2155]: failed to open credentials directory Feb 13 15:20:59.684197 agetty[2154]: failed to open credentials directory Feb 13 15:20:59.698014 systemd[1]: Started sshd@2-172.31.28.93:22-147.75.109.163:34640.service - OpenSSH per-connection server daemon (147.75.109.163:34640). Feb 13 15:20:59.699707 systemd-logind[1923]: Removed session 2. Feb 13 15:20:59.901098 sshd[2189]: Accepted publickey for core from 147.75.109.163 port 34640 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:20:59.903728 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:20:59.917606 systemd-logind[1923]: New session 3 of user core. Feb 13 15:20:59.924040 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:20:59.929603 amazon-ssm-agent[2120]: 2025-02-13 15:20:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:21:00.033071 amazon-ssm-agent[2120]: 2025-02-13 15:20:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2197) started Feb 13 15:21:00.065544 sshd[2198]: Connection closed by 147.75.109.163 port 34640 Feb 13 15:21:00.066330 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:00.073575 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:21:00.075205 systemd[1]: sshd@2-172.31.28.93:22-147.75.109.163:34640.service: Deactivated successfully. Feb 13 15:21:00.075276 systemd-logind[1923]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:21:00.085652 systemd-logind[1923]: Removed session 3. Feb 13 15:21:00.133836 amazon-ssm-agent[2120]: 2025-02-13 15:20:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:21:00.583758 kubelet[2181]: E0213 15:21:00.583667 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:00.588236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:00.588611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:00.589265 systemd[1]: kubelet.service: Consumed 1.282s CPU time. 
Feb 13 15:21:00.613237 ntpd[1917]: Listen normally on 7 eth0 [fe80::45d:f5ff:fe27:fc6b%2]:123 Feb 13 15:21:00.614290 ntpd[1917]: 13 Feb 15:21:00 ntpd[1917]: Listen normally on 7 eth0 [fe80::45d:f5ff:fe27:fc6b%2]:123 Feb 13 15:21:10.099607 systemd[1]: Started sshd@3-172.31.28.93:22-147.75.109.163:60748.service - OpenSSH per-connection server daemon (147.75.109.163:60748). Feb 13 15:21:10.290340 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 60748 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:10.292789 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:10.301160 systemd-logind[1923]: New session 4 of user core. Feb 13 15:21:10.310760 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:21:10.435179 sshd[2216]: Connection closed by 147.75.109.163 port 60748 Feb 13 15:21:10.436314 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:10.442258 systemd[1]: sshd@3-172.31.28.93:22-147.75.109.163:60748.service: Deactivated successfully. Feb 13 15:21:10.445444 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:21:10.447746 systemd-logind[1923]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:21:10.449569 systemd-logind[1923]: Removed session 4. Feb 13 15:21:10.479220 systemd[1]: Started sshd@4-172.31.28.93:22-147.75.109.163:60752.service - OpenSSH per-connection server daemon (147.75.109.163:60752). Feb 13 15:21:10.590372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:21:10.605902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:10.658433 sshd[2221]: Accepted publickey for core from 147.75.109.163 port 60752 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:10.661643 sshd-session[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:10.670934 systemd-logind[1923]: New session 5 of user core. Feb 13 15:21:10.686872 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:21:10.808963 sshd[2226]: Connection closed by 147.75.109.163 port 60752 Feb 13 15:21:10.809777 sshd-session[2221]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:10.817272 systemd[1]: sshd@4-172.31.28.93:22-147.75.109.163:60752.service: Deactivated successfully. Feb 13 15:21:10.822205 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:21:10.824826 systemd-logind[1923]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:21:10.843132 systemd-logind[1923]: Removed session 5. Feb 13 15:21:10.851033 systemd[1]: Started sshd@5-172.31.28.93:22-147.75.109.163:60754.service - OpenSSH per-connection server daemon (147.75.109.163:60754). Feb 13 15:21:10.903373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:21:10.917977 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:21:10.998477 kubelet[2238]: E0213 15:21:10.996191 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:11.004258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:11.004826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:11.036404 sshd[2231]: Accepted publickey for core from 147.75.109.163 port 60754 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:11.038812 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:11.046039 systemd-logind[1923]: New session 6 of user core. Feb 13 15:21:11.062747 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:21:11.187962 sshd[2245]: Connection closed by 147.75.109.163 port 60754 Feb 13 15:21:11.187840 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:11.193770 systemd[1]: sshd@5-172.31.28.93:22-147.75.109.163:60754.service: Deactivated successfully. Feb 13 15:21:11.196564 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:21:11.197904 systemd-logind[1923]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:21:11.199752 systemd-logind[1923]: Removed session 6. Feb 13 15:21:11.224468 systemd[1]: Started sshd@6-172.31.28.93:22-147.75.109.163:60762.service - OpenSSH per-connection server daemon (147.75.109.163:60762). Feb 13 15:21:11.412757 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 60762 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:11.415101 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:11.423293 systemd-logind[1923]: New session 7 of user core. Feb 13 15:21:11.429777 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:21:11.547056 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:21:11.547691 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:11.562141 sudo[2253]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:11.586425 sshd[2252]: Connection closed by 147.75.109.163 port 60762 Feb 13 15:21:11.585321 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:11.591096 systemd-logind[1923]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:21:11.591727 systemd[1]: sshd@6-172.31.28.93:22-147.75.109.163:60762.service: Deactivated successfully. Feb 13 15:21:11.594990 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:21:11.598306 systemd-logind[1923]: Removed session 7. Feb 13 15:21:11.623995 systemd[1]: Started sshd@7-172.31.28.93:22-147.75.109.163:60776.service - OpenSSH per-connection server daemon (147.75.109.163:60776). 
Feb 13 15:21:11.802437 sshd[2258]: Accepted publickey for core from 147.75.109.163 port 60776 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:11.804943 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:11.812281 systemd-logind[1923]: New session 8 of user core. Feb 13 15:21:11.817788 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:21:11.920751 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:21:11.921865 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:11.928044 sudo[2262]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:11.937896 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:21:11.938538 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:11.961113 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:21:12.009081 augenrules[2284]: No rules Feb 13 15:21:12.011262 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:21:12.011838 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:21:12.015687 sudo[2261]: pam_unix(sudo:session): session closed for user root Feb 13 15:21:12.038012 sshd[2260]: Connection closed by 147.75.109.163 port 60776 Feb 13 15:21:12.038800 sshd-session[2258]: pam_unix(sshd:session): session closed for user core Feb 13 15:21:12.044395 systemd-logind[1923]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:21:12.046475 systemd[1]: sshd@7-172.31.28.93:22-147.75.109.163:60776.service: Deactivated successfully. Feb 13 15:21:12.050024 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:21:12.052771 systemd-logind[1923]: Removed session 8. Feb 13 15:21:12.073399 systemd[1]: Started sshd@8-172.31.28.93:22-147.75.109.163:60788.service - OpenSSH per-connection server daemon (147.75.109.163:60788). Feb 13 15:21:12.268364 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 60788 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:21:12.270728 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:21:12.278731 systemd-logind[1923]: New session 9 of user core. Feb 13 15:21:12.288755 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:21:12.393225 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:21:12.393893 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:21:12.910414 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:21:12.923024 (dockerd)[2314]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:21:13.260152 dockerd[2314]: time="2025-02-13T15:21:13.259620723Z" level=info msg="Starting up" Feb 13 15:21:13.372833 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2057178363-merged.mount: Deactivated successfully. Feb 13 15:21:13.390594 systemd[1]: var-lib-docker-metacopy\x2dcheck3387499262-merged.mount: Deactivated successfully. Feb 13 15:21:13.411159 dockerd[2314]: time="2025-02-13T15:21:13.411072760Z" level=info msg="Loading containers: start." 
Feb 13 15:21:13.653551 kernel: Initializing XFRM netlink socket Feb 13 15:21:13.686925 (udev-worker)[2337]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:21:13.777067 systemd-networkd[1852]: docker0: Link UP Feb 13 15:21:13.817825 dockerd[2314]: time="2025-02-13T15:21:13.817754478Z" level=info msg="Loading containers: done." Feb 13 15:21:13.845553 dockerd[2314]: time="2025-02-13T15:21:13.845207214Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:21:13.845553 dockerd[2314]: time="2025-02-13T15:21:13.845345970Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:21:13.845815 dockerd[2314]: time="2025-02-13T15:21:13.845581506Z" level=info msg="Daemon has completed initialization" Feb 13 15:21:13.900912 dockerd[2314]: time="2025-02-13T15:21:13.900816451Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:21:13.901311 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:21:14.366167 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3096696426-merged.mount: Deactivated successfully. Feb 13 15:21:14.969354 containerd[1932]: time="2025-02-13T15:21:14.968880092Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 15:21:15.654932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182219991.mount: Deactivated successfully. Feb 13 15:21:17.018555 containerd[1932]: time="2025-02-13T15:21:17.018263730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:17.021081 containerd[1932]: time="2025-02-13T15:21:17.020998458Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 15:21:17.022647 containerd[1932]: time="2025-02-13T15:21:17.022559994Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:17.028219 containerd[1932]: time="2025-02-13T15:21:17.028118838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:17.030744 containerd[1932]: time="2025-02-13T15:21:17.030441690Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.06150359s" Feb 13 15:21:17.030744 containerd[1932]: time="2025-02-13T15:21:17.030527658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 15:21:17.031450 containerd[1932]: time="2025-02-13T15:21:17.031389582Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 15:21:18.739100 containerd[1932]: time="2025-02-13T15:21:18.739021727Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.741105 containerd[1932]: time="2025-02-13T15:21:18.741020519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 15:21:18.742671 containerd[1932]: time="2025-02-13T15:21:18.742618895Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.748550 containerd[1932]: time="2025-02-13T15:21:18.748426103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:18.751288 containerd[1932]: time="2025-02-13T15:21:18.750847055Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.719184557s" Feb 13 15:21:18.751288 containerd[1932]: time="2025-02-13T15:21:18.750901979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 15:21:18.751969 containerd[1932]: time="2025-02-13T15:21:18.751716215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 15:21:20.103565 containerd[1932]: time="2025-02-13T15:21:20.102865449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:20.105079 containerd[1932]: time="2025-02-13T15:21:20.104990757Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 15:21:20.107485 containerd[1932]: time="2025-02-13T15:21:20.107415921Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:20.112880 containerd[1932]: time="2025-02-13T15:21:20.112785453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:20.115303 containerd[1932]: time="2025-02-13T15:21:20.115111305Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.363345374s" Feb 13 15:21:20.115303 containerd[1932]: time="2025-02-13T15:21:20.115164549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 15:21:20.115900 containerd[1932]: time="2025-02-13T15:21:20.115772673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 
13 15:21:21.091289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:21:21.099920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:21.445697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:21.455691 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:21:21.537940 kubelet[2577]: E0213 15:21:21.537879 2577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:21.542328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:21.542713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:21.691157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494138998.mount: Deactivated successfully. Feb 13 15:21:22.188093 containerd[1932]: time="2025-02-13T15:21:22.188029452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:22.190054 containerd[1932]: time="2025-02-13T15:21:22.189973680Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 15:21:22.191337 containerd[1932]: time="2025-02-13T15:21:22.191265564Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:22.194869 containerd[1932]: time="2025-02-13T15:21:22.194798412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:22.196482 containerd[1932]: time="2025-02-13T15:21:22.196266408Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.080366919s" Feb 13 15:21:22.196482 containerd[1932]: time="2025-02-13T15:21:22.196314792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 15:21:22.197448 containerd[1932]: time="2025-02-13T15:21:22.197336304Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:21:22.731477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065159307.mount: Deactivated successfully. 
Feb 13 15:21:23.849561 containerd[1932]: time="2025-02-13T15:21:23.849221248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:23.851573 containerd[1932]: time="2025-02-13T15:21:23.851449720Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:21:23.854066 containerd[1932]: time="2025-02-13T15:21:23.853998880Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:23.866238 containerd[1932]: time="2025-02-13T15:21:23.866128060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:23.868707 containerd[1932]: time="2025-02-13T15:21:23.868445536Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.671055292s" Feb 13 15:21:23.868707 containerd[1932]: time="2025-02-13T15:21:23.868524256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:21:23.869359 containerd[1932]: time="2025-02-13T15:21:23.869308312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 15:21:24.439990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246559084.mount: Deactivated successfully. 
Feb 13 15:21:24.453403 containerd[1932]: time="2025-02-13T15:21:24.453329103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:24.456554 containerd[1932]: time="2025-02-13T15:21:24.456456675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 15:21:24.458711 containerd[1932]: time="2025-02-13T15:21:24.458635767Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:24.463993 containerd[1932]: time="2025-02-13T15:21:24.463898691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:24.465696 containerd[1932]: time="2025-02-13T15:21:24.465474531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 596.112519ms" Feb 13 15:21:24.465696 containerd[1932]: time="2025-02-13T15:21:24.465559815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 15:21:24.466614 containerd[1932]: time="2025-02-13T15:21:24.466198851Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 15:21:25.038375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323283526.mount: Deactivated successfully. Feb 13 15:21:27.357732 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:21:27.550422 containerd[1932]: time="2025-02-13T15:21:27.550339674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:27.552740 containerd[1932]: time="2025-02-13T15:21:27.552649698Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 15:21:27.554947 containerd[1932]: time="2025-02-13T15:21:27.554878722Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:27.561469 containerd[1932]: time="2025-02-13T15:21:27.561390438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:27.564093 containerd[1932]: time="2025-02-13T15:21:27.563883162Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.097632219s" Feb 13 15:21:27.564093 containerd[1932]: time="2025-02-13T15:21:27.563944590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:21:31.590415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:21:31.599076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:31.918974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:31.922921 (kubelet)[2720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:21:32.002540 kubelet[2720]: E0213 15:21:32.000754 2720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:21:32.005549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:21:32.006041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:21:35.409094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:21:35.425960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:21:35.481962 systemd[1]: Reloading requested from client PID 2734 ('systemctl') (unit session-9.scope)... Feb 13 15:21:35.481996 systemd[1]: Reloading... Feb 13 15:21:35.711537 zram_generator::config[2777]: No configuration found. Feb 13 15:21:35.930035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:21:36.097894 systemd[1]: Reloading finished in 615 ms. Feb 13 15:21:36.187305 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:21:36.187840 systemd[1]: kubelet.service: Failed with result 'signal'. 
Feb 13 15:21:36.188449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:36.194371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:36.488814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:36.493858 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:21:36.570036 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:36.570036 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:21:36.570036 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:36.570642 kubelet[2837]: I0213 15:21:36.570172 2837 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:21:37.447339 kubelet[2837]: I0213 15:21:37.447269 2837 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:21:37.447339 kubelet[2837]: I0213 15:21:37.447322 2837 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:21:37.447819 kubelet[2837]: I0213 15:21:37.447776 2837 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:21:37.497024 kubelet[2837]: E0213 15:21:37.496974 2837 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:37.499487 kubelet[2837]: I0213 15:21:37.499249 2837 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:21:37.523760 kubelet[2837]: E0213 15:21:37.523692 2837 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:21:37.523760 kubelet[2837]: I0213 15:21:37.523749 2837 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:21:37.530580 kubelet[2837]: I0213 15:21:37.530465 2837 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:21:37.532041 kubelet[2837]: I0213 15:21:37.531994 2837 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:21:37.532459 kubelet[2837]: I0213 15:21:37.532399 2837 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:21:37.532772 kubelet[2837]: I0213 15:21:37.532452 2837 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:21:37.532969 kubelet[2837]: I0213 15:21:37.532817 2837 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:21:37.532969 kubelet[2837]: I0213 15:21:37.532837 2837 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:21:37.533080 kubelet[2837]: I0213 15:21:37.533034 2837 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:37.537041 kubelet[2837]: I0213 15:21:37.536990 2837 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:21:37.537041 kubelet[2837]: I0213 15:21:37.537039 2837 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:21:37.537208 kubelet[2837]: I0213 15:21:37.537092 2837 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:21:37.537208 kubelet[2837]: I0213 15:21:37.537114 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:21:37.542654 kubelet[2837]: W0213 15:21:37.542150 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-93&limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:37.542654 kubelet[2837]: E0213 15:21:37.542245 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-93&limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:37.544625 kubelet[2837]: W0213 15:21:37.544152 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:37.544625 kubelet[2837]: E0213 15:21:37.544245 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:37.544625 kubelet[2837]: I0213 15:21:37.544377 2837 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:21:37.547475 kubelet[2837]: I0213 15:21:37.547372 2837 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:21:37.549563 kubelet[2837]: W0213 15:21:37.548751 2837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:21:37.551927 kubelet[2837]: I0213 15:21:37.551892 2837 server.go:1269] "Started kubelet"
Feb 13 15:21:37.552772 kubelet[2837]: I0213 15:21:37.552698 2837 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:21:37.554455 kubelet[2837]: I0213 15:21:37.554391 2837 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:21:37.557626 kubelet[2837]: I0213 15:21:37.557095 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:21:37.557626 kubelet[2837]: I0213 15:21:37.557567 2837 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:21:37.559727 kubelet[2837]: I0213 15:21:37.559680 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:21:37.560991 kubelet[2837]: E0213 15:21:37.558602 2837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.93:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.93:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-93.1823cdbe1f8b6d70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-93,UID:ip-172-31-28-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-93,},FirstTimestamp:2025-02-13 15:21:37.55185496 +0000 UTC m=+1.051096278,LastTimestamp:2025-02-13 15:21:37.55185496 +0000 UTC m=+1.051096278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-93,}"
Feb 13 15:21:37.563033 kubelet[2837]: I0213 15:21:37.562105 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:21:37.566788 kubelet[2837]: I0213 15:21:37.566751 2837 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:21:37.569594 kubelet[2837]: I0213 15:21:37.567080 2837 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:21:37.569788 kubelet[2837]: E0213 15:21:37.568153 2837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-93\" not found"
Feb 13 15:21:37.570592 kubelet[2837]: I0213 15:21:37.569815 2837 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:21:37.572605 kubelet[2837]: E0213 15:21:37.570158 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": dial tcp 172.31.28.93:6443: connect: connection refused" interval="200ms"
Feb 13 15:21:37.572605 kubelet[2837]: I0213 15:21:37.572056 2837 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:21:37.572825 kubelet[2837]: I0213 15:21:37.572694 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:21:37.573776 kubelet[2837]: W0213 15:21:37.573189 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:37.573776 kubelet[2837]: E0213 15:21:37.573283 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:37.575167 kubelet[2837]: E0213 15:21:37.575117 2837 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:21:37.576919 kubelet[2837]: I0213 15:21:37.576882 2837 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:21:37.608291 kubelet[2837]: I0213 15:21:37.608010 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:21:37.613295 kubelet[2837]: I0213 15:21:37.613204 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:21:37.613522 kubelet[2837]: I0213 15:21:37.613481 2837 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:21:37.613649 kubelet[2837]: I0213 15:21:37.613630 2837 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:21:37.613841 kubelet[2837]: E0213 15:21:37.613809 2837 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:21:37.619349 kubelet[2837]: W0213 15:21:37.619231 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:37.620037 kubelet[2837]: E0213 15:21:37.619998 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:37.620335 kubelet[2837]: I0213 15:21:37.620309 2837 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:21:37.620472 kubelet[2837]: I0213 15:21:37.620450 2837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:21:37.620731 kubelet[2837]: I0213 15:21:37.620636 2837 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:37.628727 kubelet[2837]: I0213 15:21:37.628187 2837 policy_none.go:49] "None policy: Start"
Feb 13 15:21:37.630334 kubelet[2837]: I0213 15:21:37.629857 2837 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:21:37.630334 kubelet[2837]: I0213 15:21:37.629899 2837 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:21:37.641378 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:21:37.657663 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:21:37.663884 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:21:37.671047 kubelet[2837]: E0213 15:21:37.670998 2837 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-93\" not found"
Feb 13 15:21:37.673145 kubelet[2837]: I0213 15:21:37.673100 2837 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:21:37.673435 kubelet[2837]: I0213 15:21:37.673388 2837 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:21:37.673722 kubelet[2837]: I0213 15:21:37.673419 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:21:37.674601 kubelet[2837]: I0213 15:21:37.674492 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:21:37.677087 kubelet[2837]: E0213 15:21:37.677045 2837 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-93\" not found"
Feb 13 15:21:37.735713 systemd[1]: Created slice kubepods-burstable-pod30b85bb3dc5479efe2b063b0b5fb0ff2.slice - libcontainer container kubepods-burstable-pod30b85bb3dc5479efe2b063b0b5fb0ff2.slice.
Feb 13 15:21:37.753277 systemd[1]: Created slice kubepods-burstable-poda700e507724ef655799f80db851d64ad.slice - libcontainer container kubepods-burstable-poda700e507724ef655799f80db851d64ad.slice.
Feb 13 15:21:37.768380 systemd[1]: Created slice kubepods-burstable-pod69228f1e7a43fd05e010d42a9b92fad6.slice - libcontainer container kubepods-burstable-pod69228f1e7a43fd05e010d42a9b92fad6.slice.
Feb 13 15:21:37.773126 kubelet[2837]: I0213 15:21:37.773084 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:37.773900 kubelet[2837]: I0213 15:21:37.773403 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69228f1e7a43fd05e010d42a9b92fad6-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-93\" (UID: \"69228f1e7a43fd05e010d42a9b92fad6\") " pod="kube-system/kube-scheduler-ip-172-31-28-93"
Feb 13 15:21:37.773900 kubelet[2837]: E0213 15:21:37.773446 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": dial tcp 172.31.28.93:6443: connect: connection refused" interval="400ms"
Feb 13 15:21:37.773900 kubelet[2837]: I0213 15:21:37.773464 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:37.773900 kubelet[2837]: I0213 15:21:37.773577 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:37.773900 kubelet[2837]: I0213 15:21:37.773618 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:37.774187 kubelet[2837]: I0213 15:21:37.773655 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:37.774187 kubelet[2837]: I0213 15:21:37.773693 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-ca-certs\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:37.774187 kubelet[2837]: I0213 15:21:37.773727 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:37.774187 kubelet[2837]: I0213 15:21:37.773764 2837 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:37.775783 kubelet[2837]: I0213 15:21:37.775662 2837 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:37.776655 kubelet[2837]: E0213 15:21:37.776550 2837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.93:6443/api/v1/nodes\": dial tcp 172.31.28.93:6443: connect: connection refused" node="ip-172-31-28-93"
Feb 13 15:21:37.979815 kubelet[2837]: I0213 15:21:37.979727 2837 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:37.980272 kubelet[2837]: E0213 15:21:37.980226 2837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.93:6443/api/v1/nodes\": dial tcp 172.31.28.93:6443: connect: connection refused" node="ip-172-31-28-93"
Feb 13 15:21:38.049578 containerd[1932]: time="2025-02-13T15:21:38.049388846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-93,Uid:30b85bb3dc5479efe2b063b0b5fb0ff2,Namespace:kube-system,Attempt:0,}"
Feb 13 15:21:38.064624 containerd[1932]: time="2025-02-13T15:21:38.064529427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-93,Uid:a700e507724ef655799f80db851d64ad,Namespace:kube-system,Attempt:0,}"
Feb 13 15:21:38.074219 containerd[1932]: time="2025-02-13T15:21:38.074133615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-93,Uid:69228f1e7a43fd05e010d42a9b92fad6,Namespace:kube-system,Attempt:0,}"
Feb 13 15:21:38.174139 kubelet[2837]: E0213 15:21:38.174069 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": dial tcp 172.31.28.93:6443: connect: connection refused" interval="800ms"
Feb 13 15:21:38.382231 kubelet[2837]: I0213 15:21:38.382195 2837 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:38.382960 kubelet[2837]: E0213 15:21:38.382892 2837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.93:6443/api/v1/nodes\": dial tcp 172.31.28.93:6443: connect: connection refused" node="ip-172-31-28-93"
Feb 13 15:21:38.631296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137277110.mount: Deactivated successfully.
Feb 13 15:21:38.646030 containerd[1932]: time="2025-02-13T15:21:38.645873461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:21:38.650132 containerd[1932]: time="2025-02-13T15:21:38.650042213Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 15:21:38.657881 containerd[1932]: time="2025-02-13T15:21:38.657573701Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:21:38.659891 containerd[1932]: time="2025-02-13T15:21:38.659805797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:21:38.661649 containerd[1932]: time="2025-02-13T15:21:38.661570446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:21:38.666112 containerd[1932]: time="2025-02-13T15:21:38.666037998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:21:38.668000 containerd[1932]: time="2025-02-13T15:21:38.667934154Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 618.386152ms"
Feb 13 15:21:38.669384 containerd[1932]: time="2025-02-13T15:21:38.669320514Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:21:38.670119 containerd[1932]: time="2025-02-13T15:21:38.669928914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:21:38.677490 containerd[1932]: time="2025-02-13T15:21:38.677415738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.156987ms"
Feb 13 15:21:38.698167 containerd[1932]: time="2025-02-13T15:21:38.697620294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 632.980647ms"
Feb 13 15:21:38.810914 kubelet[2837]: W0213 15:21:38.810422 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:38.810914 kubelet[2837]: E0213 15:21:38.810862 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:38.868065 kubelet[2837]: W0213 15:21:38.867491 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:38.868065 kubelet[2837]: E0213 15:21:38.867639 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:38.870359 containerd[1932]: time="2025-02-13T15:21:38.870136495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:21:38.870359 containerd[1932]: time="2025-02-13T15:21:38.870269503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:21:38.873827 containerd[1932]: time="2025-02-13T15:21:38.873671491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:21:38.874222 containerd[1932]: time="2025-02-13T15:21:38.874138279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:21:38.874756 containerd[1932]: time="2025-02-13T15:21:38.874568779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.876266 containerd[1932]: time="2025-02-13T15:21:38.875730751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.884547 containerd[1932]: time="2025-02-13T15:21:38.881683255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.884547 containerd[1932]: time="2025-02-13T15:21:38.884014219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.891687 containerd[1932]: time="2025-02-13T15:21:38.891167947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:21:38.891687 containerd[1932]: time="2025-02-13T15:21:38.891263839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:21:38.891687 containerd[1932]: time="2025-02-13T15:21:38.891321607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.891687 containerd[1932]: time="2025-02-13T15:21:38.891478219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:21:38.929447 systemd[1]: Started cri-containerd-eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66.scope - libcontainer container eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66.
Feb 13 15:21:38.950863 systemd[1]: Started cri-containerd-a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77.scope - libcontainer container a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77.
Feb 13 15:21:38.967139 kubelet[2837]: W0213 15:21:38.967070 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:38.967261 kubelet[2837]: E0213 15:21:38.967146 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:38.974197 systemd[1]: Started cri-containerd-aed232a13856917a53edf613422e9c34538a78991ba46bd5d8a6d3997fd4fde5.scope - libcontainer container aed232a13856917a53edf613422e9c34538a78991ba46bd5d8a6d3997fd4fde5.
Feb 13 15:21:38.975425 kubelet[2837]: E0213 15:21:38.975351 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": dial tcp 172.31.28.93:6443: connect: connection refused" interval="1.6s"
Feb 13 15:21:39.001530 kubelet[2837]: W0213 15:21:39.001399 2837 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-93&limit=500&resourceVersion=0": dial tcp 172.31.28.93:6443: connect: connection refused
Feb 13 15:21:39.001530 kubelet[2837]: E0213 15:21:39.001521 2837 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-93&limit=500&resourceVersion=0\": dial tcp 172.31.28.93:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:21:39.039689 containerd[1932]: time="2025-02-13T15:21:39.039382683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-93,Uid:69228f1e7a43fd05e010d42a9b92fad6,Namespace:kube-system,Attempt:0,} returns sandbox id \"eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66\""
Feb 13 15:21:39.051312 containerd[1932]: time="2025-02-13T15:21:39.050983587Z" level=info msg="CreateContainer within sandbox \"eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:21:39.086321 containerd[1932]: time="2025-02-13T15:21:39.086141044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-93,Uid:a700e507724ef655799f80db851d64ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77\""
Feb 13 15:21:39.097378 containerd[1932]: time="2025-02-13T15:21:39.096594976Z" level=info msg="CreateContainer within sandbox \"a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:21:39.103812 containerd[1932]: time="2025-02-13T15:21:39.103750252Z" level=info msg="CreateContainer within sandbox \"eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc\""
Feb 13 15:21:39.117749 containerd[1932]: time="2025-02-13T15:21:39.117601432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-93,Uid:30b85bb3dc5479efe2b063b0b5fb0ff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed232a13856917a53edf613422e9c34538a78991ba46bd5d8a6d3997fd4fde5\""
Feb 13 15:21:39.117749 containerd[1932]: time="2025-02-13T15:21:39.117654916Z" level=info msg="StartContainer for \"92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc\""
Feb 13 15:21:39.124890 containerd[1932]: time="2025-02-13T15:21:39.124725256Z" level=info msg="CreateContainer within sandbox \"aed232a13856917a53edf613422e9c34538a78991ba46bd5d8a6d3997fd4fde5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:21:39.137155 containerd[1932]: time="2025-02-13T15:21:39.136960852Z" level=info msg="CreateContainer within sandbox \"a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e\""
Feb 13 15:21:39.137884 containerd[1932]: time="2025-02-13T15:21:39.137733820Z" level=info msg="StartContainer for \"f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e\""
Feb 13 15:21:39.165409 containerd[1932]: time="2025-02-13T15:21:39.165083596Z" level=info msg="CreateContainer within sandbox \"aed232a13856917a53edf613422e9c34538a78991ba46bd5d8a6d3997fd4fde5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5f8832d7e1c5237c906c0b45e242e0c9048c578b020b87eff9db7c06cc19438e\""
Feb 13 15:21:39.168689 containerd[1932]: time="2025-02-13T15:21:39.168637420Z" level=info msg="StartContainer for \"5f8832d7e1c5237c906c0b45e242e0c9048c578b020b87eff9db7c06cc19438e\""
Feb 13 15:21:39.178827 systemd[1]: Started cri-containerd-92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc.scope - libcontainer container 92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc.
Feb 13 15:21:39.188437 kubelet[2837]: I0213 15:21:39.188286 2837 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:39.196038 kubelet[2837]: E0213 15:21:39.195719 2837 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.93:6443/api/v1/nodes\": dial tcp 172.31.28.93:6443: connect: connection refused" node="ip-172-31-28-93"
Feb 13 15:21:39.213991 systemd[1]: Started cri-containerd-f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e.scope - libcontainer container f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e.
Feb 13 15:21:39.259801 systemd[1]: Started cri-containerd-5f8832d7e1c5237c906c0b45e242e0c9048c578b020b87eff9db7c06cc19438e.scope - libcontainer container 5f8832d7e1c5237c906c0b45e242e0c9048c578b020b87eff9db7c06cc19438e.
Feb 13 15:21:39.325039 containerd[1932]: time="2025-02-13T15:21:39.324976181Z" level=info msg="StartContainer for \"92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc\" returns successfully"
Feb 13 15:21:39.367149 containerd[1932]: time="2025-02-13T15:21:39.367044149Z" level=info msg="StartContainer for \"f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e\" returns successfully"
Feb 13 15:21:39.385046 containerd[1932]: time="2025-02-13T15:21:39.384800693Z" level=info msg="StartContainer for \"5f8832d7e1c5237c906c0b45e242e0c9048c578b020b87eff9db7c06cc19438e\" returns successfully"
Feb 13 15:21:40.798631 kubelet[2837]: I0213 15:21:40.798545 2837 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:41.642634 update_engine[1925]: I20250213 15:21:41.642540 1925 update_attempter.cc:509] Updating boot flags...
Feb 13 15:21:41.753541 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3121)
Feb 13 15:21:42.224764 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3112)
Feb 13 15:21:43.483111 kubelet[2837]: E0213 15:21:43.482997 2837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-93\" not found" node="ip-172-31-28-93"
Feb 13 15:21:43.545823 kubelet[2837]: I0213 15:21:43.545758 2837 apiserver.go:52] "Watching apiserver"
Feb 13 15:21:43.570135 kubelet[2837]: I0213 15:21:43.570066 2837 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:21:43.586839 kubelet[2837]: I0213 15:21:43.586776 2837 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-93"
Feb 13 15:21:43.595593 kubelet[2837]: E0213 15:21:43.595414 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-93.1823cdbe1f8b6d70 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-93,UID:ip-172-31-28-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-93,},FirstTimestamp:2025-02-13 15:21:37.55185496 +0000 UTC m=+1.051096278,LastTimestamp:2025-02-13 15:21:37.55185496 +0000 UTC m=+1.051096278,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-93,}"
Feb 13 15:21:43.664831 kubelet[2837]: E0213 15:21:43.664332 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-93.1823cdbe20ee1120 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-93,UID:ip-172-31-28-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-28-93,},FirstTimestamp:2025-02-13 15:21:37.575096608 +0000 UTC m=+1.074337926,LastTimestamp:2025-02-13 15:21:37.575096608 +0000 UTC m=+1.074337926,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-93,}"
Feb 13 15:21:43.726978 kubelet[2837]: E0213 15:21:43.726823 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-93.1823cdbe237de418 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-93,UID:ip-172-31-28-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-28-93 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-28-93,},FirstTimestamp:2025-02-13 15:21:37.618076696 +0000 UTC m=+1.117317990,LastTimestamp:2025-02-13 15:21:37.618076696 +0000 UTC m=+1.117317990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-93,}"
Feb 13 15:21:43.793549 kubelet[2837]: E0213 15:21:43.793074 2837 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-28-93.1823cdbe237e0758 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-93,UID:ip-172-31-28-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-28-93 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-28-93,},FirstTimestamp:2025-02-13 15:21:37.61808572 +0000 UTC m=+1.117327014,LastTimestamp:2025-02-13 15:21:37.61808572 +0000 UTC m=+1.117327014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-93,}"
Feb 13 15:21:44.143881 kubelet[2837]: E0213 15:21:44.143824 2837 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-28-93\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:45.473778 systemd[1]: Reloading requested from client PID 3291 ('systemctl') (unit session-9.scope)...
Feb 13 15:21:45.473812 systemd[1]: Reloading...
Feb 13 15:21:45.655579 zram_generator::config[3331]: No configuration found.
Feb 13 15:21:45.927141 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:21:46.143726 systemd[1]: Reloading finished in 669 ms.
Feb 13 15:21:46.237240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:46.252132 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:21:46.252574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:46.252661 systemd[1]: kubelet.service: Consumed 1.759s CPU time, 117.0M memory peak, 0B memory swap peak.
Feb 13 15:21:46.258247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:21:46.570808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:21:46.586061 (kubelet)[3391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:21:46.694572 kubelet[3391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:46.694572 kubelet[3391]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:21:46.694572 kubelet[3391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:21:46.694572 kubelet[3391]: I0213 15:21:46.694001 3391 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:21:46.709462 kubelet[3391]: I0213 15:21:46.708650 3391 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:21:46.709462 kubelet[3391]: I0213 15:21:46.708696 3391 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:21:46.709462 kubelet[3391]: I0213 15:21:46.709099 3391 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:21:46.719428 kubelet[3391]: I0213 15:21:46.719353 3391 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:21:46.725566 kubelet[3391]: I0213 15:21:46.725276 3391 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:21:46.732939 kubelet[3391]: E0213 15:21:46.732888 3391 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:21:46.733143 kubelet[3391]: I0213 15:21:46.733118 3391 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:21:46.736759 sudo[3404]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:21:46.737391 sudo[3404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:21:46.740013 kubelet[3391]: I0213 15:21:46.739416 3391 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:21:46.740013 kubelet[3391]: I0213 15:21:46.739753 3391 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:21:46.741047 kubelet[3391]: I0213 15:21:46.740404 3391 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:21:46.741047 kubelet[3391]: I0213 15:21:46.740464 3391 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:21:46.741047 kubelet[3391]: I0213 15:21:46.740806 3391 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:21:46.741047 kubelet[3391]: I0213 15:21:46.740828 3391 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:21:46.741398 kubelet[3391]: I0213 15:21:46.740883 3391 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:46.742973 kubelet[3391]: I0213 15:21:46.742879 3391 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:21:46.742973 kubelet[3391]: I0213 15:21:46.742928 3391 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:21:46.743551 kubelet[3391]: I0213 15:21:46.743211 3391 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:21:46.745605 kubelet[3391]: I0213 15:21:46.745566 3391 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:21:46.750549 kubelet[3391]: I0213 15:21:46.749796 3391 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:21:46.750896 kubelet[3391]: I0213 15:21:46.750870 3391 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:21:46.753685 kubelet[3391]: I0213 15:21:46.753650 3391 server.go:1269] "Started kubelet"
Feb 13 15:21:46.758210 kubelet[3391]: I0213 15:21:46.758018 3391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:21:46.768648 kubelet[3391]: I0213 15:21:46.768526 3391 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:21:46.770547 kubelet[3391]: I0213 15:21:46.769249 3391 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:21:46.772066 kubelet[3391]: E0213 15:21:46.772024 3391 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-93\" not found"
Feb 13 15:21:46.773017 kubelet[3391]: I0213 15:21:46.772300 3391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:21:46.774159 kubelet[3391]: I0213 15:21:46.774119 3391 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:21:46.775429 kubelet[3391]: I0213 15:21:46.773477 3391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:21:46.791484 kubelet[3391]: E0213 15:21:46.777129 3391 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:21:46.795614 kubelet[3391]: I0213 15:21:46.777213 3391 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:21:46.799532 kubelet[3391]: I0213 15:21:46.777461 3391 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:21:46.799532 kubelet[3391]: I0213 15:21:46.790312 3391 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:21:46.819399 kubelet[3391]: I0213 15:21:46.819339 3391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:21:46.835761 kubelet[3391]: I0213 15:21:46.835613 3391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:21:46.841674 kubelet[3391]: I0213 15:21:46.841633 3391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:21:46.842239 kubelet[3391]: I0213 15:21:46.841811 3391 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:21:46.842239 kubelet[3391]: I0213 15:21:46.841847 3391 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:21:46.842239 kubelet[3391]: E0213 15:21:46.841917 3391 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:21:46.853371 kubelet[3391]: I0213 15:21:46.853332 3391 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:21:46.854550 kubelet[3391]: I0213 15:21:46.853561 3391 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:21:46.885389 kubelet[3391]: E0213 15:21:46.885352 3391 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-93\" not found"
Feb 13 15:21:46.942780 kubelet[3391]: E0213 15:21:46.942187 3391 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:21:47.009997 kubelet[3391]: I0213 15:21:47.009963 3391 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:21:47.010280 kubelet[3391]: I0213 15:21:47.010256 3391 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:21:47.010426 kubelet[3391]: I0213 15:21:47.010408 3391 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:21:47.010950 kubelet[3391]: I0213 15:21:47.010819 3391 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:21:47.010950 kubelet[3391]: I0213 15:21:47.010847 3391 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:21:47.010950 kubelet[3391]: I0213 15:21:47.010881 3391 policy_none.go:49] "None policy: Start"
Feb 13 15:21:47.013787 kubelet[3391]: I0213 15:21:47.013455 3391 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:21:47.013787 kubelet[3391]: I0213 15:21:47.013623 3391 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:21:47.016775 kubelet[3391]: I0213 15:21:47.015398 3391 state_mem.go:75] "Updated machine memory state"
Feb 13 15:21:47.029399 kubelet[3391]: I0213 15:21:47.029271 3391 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:21:47.030830 kubelet[3391]: I0213 15:21:47.030791 3391 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:21:47.031594 kubelet[3391]: I0213 15:21:47.031312 3391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:21:47.033722 kubelet[3391]: I0213 15:21:47.033046 3391 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:21:47.163392 kubelet[3391]: I0213 15:21:47.162236 3391 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-93"
Feb 13 15:21:47.181346 kubelet[3391]: I0213 15:21:47.181200 3391 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-28-93"
Feb 13 15:21:47.182066 kubelet[3391]: I0213 15:21:47.181929 3391 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-93"
Feb 13 15:21:47.205980 kubelet[3391]: I0213 15:21:47.205489 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:47.205980 kubelet[3391]: I0213 15:21:47.205565 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:47.205980 kubelet[3391]: I0213 15:21:47.205603 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:47.205980 kubelet[3391]: I0213 15:21:47.205642 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:47.205980 kubelet[3391]: I0213 15:21:47.205687 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69228f1e7a43fd05e010d42a9b92fad6-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-93\" (UID: \"69228f1e7a43fd05e010d42a9b92fad6\") " pod="kube-system/kube-scheduler-ip-172-31-28-93"
Feb 13 15:21:47.206335 kubelet[3391]: I0213 15:21:47.205725 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-ca-certs\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:47.206335 kubelet[3391]: I0213 15:21:47.205758 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30b85bb3dc5479efe2b063b0b5fb0ff2-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-93\" (UID: \"30b85bb3dc5479efe2b063b0b5fb0ff2\") " pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:47.206335 kubelet[3391]: I0213 15:21:47.205791 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:47.206335 kubelet[3391]: I0213 15:21:47.205828 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a700e507724ef655799f80db851d64ad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-93\" (UID: \"a700e507724ef655799f80db851d64ad\") " pod="kube-system/kube-controller-manager-ip-172-31-28-93"
Feb 13 15:21:47.680767 sudo[3404]: pam_unix(sudo:session): session closed for user root
Feb 13 15:21:47.746555 kubelet[3391]: I0213 15:21:47.746475 3391 apiserver.go:52] "Watching apiserver"
Feb 13 15:21:47.798756 kubelet[3391]: I0213 15:21:47.798704 3391 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:21:47.945243 kubelet[3391]: E0213 15:21:47.944737 3391 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-93\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-93"
Feb 13 15:21:48.011004 kubelet[3391]: I0213 15:21:48.010914 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-93" podStartSLOduration=1.010891404 podStartE2EDuration="1.010891404s" podCreationTimestamp="2025-02-13 15:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:47.991416652 +0000 UTC m=+1.395131288" watchObservedRunningTime="2025-02-13 15:21:48.010891404 +0000 UTC m=+1.414606016"
Feb 13 15:21:48.065114 kubelet[3391]: I0213 15:21:48.064913 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-93" podStartSLOduration=1.064893216 podStartE2EDuration="1.064893216s" podCreationTimestamp="2025-02-13 15:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:48.014665308 +0000 UTC m=+1.418379956" watchObservedRunningTime="2025-02-13 15:21:48.064893216 +0000 UTC m=+1.468607816"
Feb 13 15:21:48.112149 kubelet[3391]: I0213 15:21:48.111875 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-93" podStartSLOduration=1.111852456 podStartE2EDuration="1.111852456s" podCreationTimestamp="2025-02-13 15:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:48.06907386 +0000 UTC m=+1.472788496" watchObservedRunningTime="2025-02-13 15:21:48.111852456 +0000 UTC m=+1.515567080"
Feb 13 15:21:50.796152 kubelet[3391]: I0213 15:21:50.795947 3391 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:21:50.797351 containerd[1932]: time="2025-02-13T15:21:50.797264298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:21:50.798474 kubelet[3391]: I0213 15:21:50.798412 3391 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:21:50.937555 sudo[2295]: pam_unix(sudo:session): session closed for user root
Feb 13 15:21:50.962836 sshd[2294]: Connection closed by 147.75.109.163 port 60788
Feb 13 15:21:50.961735 sshd-session[2292]: pam_unix(sshd:session): session closed for user core
Feb 13 15:21:50.969104 systemd[1]: sshd@8-172.31.28.93:22-147.75.109.163:60788.service: Deactivated successfully.
Feb 13 15:21:50.974392 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:21:50.975934 systemd[1]: session-9.scope: Consumed 11.920s CPU time, 155.1M memory peak, 0B memory swap peak.
Feb 13 15:21:50.980064 systemd-logind[1923]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:21:50.982771 systemd-logind[1923]: Removed session 9.
Feb 13 15:21:51.367120 systemd[1]: Created slice kubepods-besteffort-podf69ee930_b139_4e71_b60c_98b53939757e.slice - libcontainer container kubepods-besteffort-podf69ee930_b139_4e71_b60c_98b53939757e.slice. Feb 13 15:21:51.416476 systemd[1]: Created slice kubepods-burstable-podb4bf7fec_5296_4126_b0c7_d1a76c24dd74.slice - libcontainer container kubepods-burstable-podb4bf7fec_5296_4126_b0c7_d1a76c24dd74.slice. Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433729 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f69ee930-b139-4e71-b60c-98b53939757e-kube-proxy\") pod \"kube-proxy-q2q9m\" (UID: \"f69ee930-b139-4e71-b60c-98b53939757e\") " pod="kube-system/kube-proxy-q2q9m" Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433805 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hostproc\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433865 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-cgroup\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433902 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cni-path\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433957 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-config-path\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434044 kubelet[3391]: I0213 15:21:51.433995 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-kernel\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434036 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrhl\" (UniqueName: \"kubernetes.io/projected/f69ee930-b139-4e71-b60c-98b53939757e-kube-api-access-hlrhl\") pod \"kube-proxy-q2q9m\" (UID: \"f69ee930-b139-4e71-b60c-98b53939757e\") " pod="kube-system/kube-proxy-q2q9m" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434076 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f69ee930-b139-4e71-b60c-98b53939757e-xtables-lock\") pod \"kube-proxy-q2q9m\" (UID: \"f69ee930-b139-4e71-b60c-98b53939757e\") " pod="kube-system/kube-proxy-q2q9m" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434110 3391 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f69ee930-b139-4e71-b60c-98b53939757e-lib-modules\") pod \"kube-proxy-q2q9m\" (UID: \"f69ee930-b139-4e71-b60c-98b53939757e\") " pod="kube-system/kube-proxy-q2q9m" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434157 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hubble-tls\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434194 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-lib-modules\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434553 kubelet[3391]: I0213 15:21:51.434240 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-xtables-lock\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434274 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-clustermesh-secrets\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434320 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-run\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434353 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-etc-cni-netd\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434396 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-bpf-maps\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434466 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-net\") pod \"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.434865 kubelet[3391]: I0213 15:21:51.434536 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmhh\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-kube-api-access-fhmhh\") pod 
\"cilium-vr29b\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " pod="kube-system/cilium-vr29b" Feb 13 15:21:51.652704 systemd[1]: Created slice kubepods-besteffort-pod8db092e8_c7ad_4278_9be5_39ca9ed5ddfe.slice - libcontainer container kubepods-besteffort-pod8db092e8_c7ad_4278_9be5_39ca9ed5ddfe.slice. Feb 13 15:21:51.681216 containerd[1932]: time="2025-02-13T15:21:51.681014778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2q9m,Uid:f69ee930-b139-4e71-b60c-98b53939757e,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:51.727228 containerd[1932]: time="2025-02-13T15:21:51.726780606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vr29b,Uid:b4bf7fec-5296-4126-b0c7-d1a76c24dd74,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:51.732865 containerd[1932]: time="2025-02-13T15:21:51.732295482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:51.732865 containerd[1932]: time="2025-02-13T15:21:51.732789390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:51.733575 containerd[1932]: time="2025-02-13T15:21:51.732835374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:51.734841 containerd[1932]: time="2025-02-13T15:21:51.734739846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:51.737858 kubelet[3391]: I0213 15:21:51.737806 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-cilium-config-path\") pod \"cilium-operator-5d85765b45-k4vlw\" (UID: \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\") " pod="kube-system/cilium-operator-5d85765b45-k4vlw" Feb 13 15:21:51.737858 kubelet[3391]: I0213 15:21:51.737877 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmtjv\" (UniqueName: \"kubernetes.io/projected/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-kube-api-access-fmtjv\") pod \"cilium-operator-5d85765b45-k4vlw\" (UID: \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\") " pod="kube-system/cilium-operator-5d85765b45-k4vlw" Feb 13 15:21:51.771856 systemd[1]: Started cri-containerd-dd542f5b637d00b9d9967f77f8dc919e53a91cdc1f65c08d8857b08dc92accb5.scope - libcontainer container dd542f5b637d00b9d9967f77f8dc919e53a91cdc1f65c08d8857b08dc92accb5. Feb 13 15:21:51.792092 containerd[1932]: time="2025-02-13T15:21:51.791318299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:51.792092 containerd[1932]: time="2025-02-13T15:21:51.791427547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:51.792092 containerd[1932]: time="2025-02-13T15:21:51.791482219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:51.792092 containerd[1932]: time="2025-02-13T15:21:51.791710219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:51.829112 containerd[1932]: time="2025-02-13T15:21:51.828885295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2q9m,Uid:f69ee930-b139-4e71-b60c-98b53939757e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd542f5b637d00b9d9967f77f8dc919e53a91cdc1f65c08d8857b08dc92accb5\"" Feb 13 15:21:51.837179 systemd[1]: Started cri-containerd-565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7.scope - libcontainer container 565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7. Feb 13 15:21:51.852559 containerd[1932]: time="2025-02-13T15:21:51.847805563Z" level=info msg="CreateContainer within sandbox \"dd542f5b637d00b9d9967f77f8dc919e53a91cdc1f65c08d8857b08dc92accb5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:21:51.908846 containerd[1932]: time="2025-02-13T15:21:51.908320495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vr29b,Uid:b4bf7fec-5296-4126-b0c7-d1a76c24dd74,Namespace:kube-system,Attempt:0,} returns sandbox id \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\"" Feb 13 15:21:51.913669 containerd[1932]: time="2025-02-13T15:21:51.913225639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:21:51.917315 containerd[1932]: time="2025-02-13T15:21:51.916965427Z" level=info msg="CreateContainer within sandbox \"dd542f5b637d00b9d9967f77f8dc919e53a91cdc1f65c08d8857b08dc92accb5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce212bc4527cc31ef8166a796a2a282d0ecf1f228a2f19223022b8bf14dd4867\"" Feb 13 15:21:51.920602 containerd[1932]: time="2025-02-13T15:21:51.919624723Z" level=info msg="StartContainer for \"ce212bc4527cc31ef8166a796a2a282d0ecf1f228a2f19223022b8bf14dd4867\"" Feb 13 15:21:51.960273 containerd[1932]: time="2025-02-13T15:21:51.959838476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k4vlw,Uid:8db092e8-c7ad-4278-9be5-39ca9ed5ddfe,Namespace:kube-system,Attempt:0,}" Feb 13 15:21:51.974819 systemd[1]: Started cri-containerd-ce212bc4527cc31ef8166a796a2a282d0ecf1f228a2f19223022b8bf14dd4867.scope - libcontainer container ce212bc4527cc31ef8166a796a2a282d0ecf1f228a2f19223022b8bf14dd4867. Feb 13 15:21:52.016964 containerd[1932]: time="2025-02-13T15:21:52.015662980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:21:52.016964 containerd[1932]: time="2025-02-13T15:21:52.016624360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:21:52.016964 containerd[1932]: time="2025-02-13T15:21:52.016655464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:52.016964 containerd[1932]: time="2025-02-13T15:21:52.016832644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:21:52.060680 systemd[1]: Started cri-containerd-0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655.scope - libcontainer container 0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655. 
Feb 13 15:21:52.067104 containerd[1932]: time="2025-02-13T15:21:52.066879472Z" level=info msg="StartContainer for \"ce212bc4527cc31ef8166a796a2a282d0ecf1f228a2f19223022b8bf14dd4867\" returns successfully" Feb 13 15:21:52.160792 containerd[1932]: time="2025-02-13T15:21:52.160465925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k4vlw,Uid:8db092e8-c7ad-4278-9be5-39ca9ed5ddfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\"" Feb 13 15:21:52.975614 kubelet[3391]: I0213 15:21:52.975513 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q2q9m" podStartSLOduration=1.9754740210000001 podStartE2EDuration="1.975474021s" podCreationTimestamp="2025-02-13 15:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:21:52.974518389 +0000 UTC m=+6.378233025" watchObservedRunningTime="2025-02-13 15:21:52.975474021 +0000 UTC m=+6.379188633" Feb 13 15:21:56.974903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555899382.mount: Deactivated successfully. Feb 13 15:21:59.467585 containerd[1932]: time="2025-02-13T15:21:59.467481793Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:59.469471 containerd[1932]: time="2025-02-13T15:21:59.469373401Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:21:59.472218 containerd[1932]: time="2025-02-13T15:21:59.472109593Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:21:59.476230 containerd[1932]: time="2025-02-13T15:21:59.476065081Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.562474834s" Feb 13 15:21:59.476230 containerd[1932]: time="2025-02-13T15:21:59.476125513Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:21:59.480199 containerd[1932]: time="2025-02-13T15:21:59.479086537Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:21:59.480830 containerd[1932]: time="2025-02-13T15:21:59.480756517Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:21:59.507166 containerd[1932]: time="2025-02-13T15:21:59.507095089Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\"" Feb 13 15:21:59.509073 containerd[1932]: time="2025-02-13T15:21:59.508481965Z" level=info msg="StartContainer for \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\"" Feb 13 15:21:59.566978 systemd[1]: Started cri-containerd-9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480.scope - libcontainer container 9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480. Feb 13 15:21:59.617546 containerd[1932]: time="2025-02-13T15:21:59.617462438Z" level=info msg="StartContainer for \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\" returns successfully" Feb 13 15:21:59.638580 systemd[1]: cri-containerd-9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480.scope: Deactivated successfully. Feb 13 15:22:00.496050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480-rootfs.mount: Deactivated successfully. Feb 13 15:22:00.696828 containerd[1932]: time="2025-02-13T15:22:00.696635235Z" level=info msg="shim disconnected" id=9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480 namespace=k8s.io Feb 13 15:22:00.696828 containerd[1932]: time="2025-02-13T15:22:00.696709683Z" level=warning msg="cleaning up after shim disconnected" id=9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480 namespace=k8s.io Feb 13 15:22:00.696828 containerd[1932]: time="2025-02-13T15:22:00.696729063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:00.994758 containerd[1932]: time="2025-02-13T15:22:00.994371400Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:22:01.050073 containerd[1932]: time="2025-02-13T15:22:01.049973857Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\"" Feb 13 15:22:01.051632 containerd[1932]: time="2025-02-13T15:22:01.051360985Z" level=info msg="StartContainer for \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\"" Feb 13 15:22:01.113828 systemd[1]: Started cri-containerd-5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d.scope - libcontainer container 5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d. Feb 13 15:22:01.164201 containerd[1932]: time="2025-02-13T15:22:01.164137093Z" level=info msg="StartContainer for \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\" returns successfully" Feb 13 15:22:01.187074 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:22:01.189053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:22:01.189181 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:22:01.199571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:22:01.200088 systemd[1]: cri-containerd-5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d.scope: Deactivated successfully. Feb 13 15:22:01.241818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:22:01.248677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d-rootfs.mount: Deactivated successfully. Feb 13 15:22:01.252727 containerd[1932]: time="2025-02-13T15:22:01.252642794Z" level=info msg="shim disconnected" id=5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d namespace=k8s.io Feb 13 15:22:01.253254 containerd[1932]: time="2025-02-13T15:22:01.253026338Z" level=warning msg="cleaning up after shim disconnected" id=5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d namespace=k8s.io Feb 13 15:22:01.253254 containerd[1932]: time="2025-02-13T15:22:01.253055138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:02.000364 containerd[1932]: time="2025-02-13T15:22:02.000287665Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:22:02.060946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674439144.mount: Deactivated successfully. Feb 13 15:22:02.074098 containerd[1932]: time="2025-02-13T15:22:02.073909958Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\"" Feb 13 15:22:02.074893 containerd[1932]: time="2025-02-13T15:22:02.074819642Z" level=info msg="StartContainer for \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\"" Feb 13 15:22:02.141276 systemd[1]: Started cri-containerd-945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db.scope - libcontainer container 945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db. Feb 13 15:22:02.219200 containerd[1932]: time="2025-02-13T15:22:02.219128043Z" level=info msg="StartContainer for \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\" returns successfully" Feb 13 15:22:02.222545 systemd[1]: cri-containerd-945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db.scope: Deactivated successfully. Feb 13 15:22:02.303552 containerd[1932]: time="2025-02-13T15:22:02.303198159Z" level=info msg="shim disconnected" id=945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db namespace=k8s.io Feb 13 15:22:02.303552 containerd[1932]: time="2025-02-13T15:22:02.303275979Z" level=warning msg="cleaning up after shim disconnected" id=945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db namespace=k8s.io Feb 13 15:22:02.303552 containerd[1932]: time="2025-02-13T15:22:02.303296979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:03.007376 containerd[1932]: time="2025-02-13T15:22:03.006964814Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:22:03.034746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db-rootfs.mount: Deactivated successfully. 
Feb 13 15:22:03.039388 containerd[1932]: time="2025-02-13T15:22:03.039234495Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\"" Feb 13 15:22:03.040920 containerd[1932]: time="2025-02-13T15:22:03.040840827Z" level=info msg="StartContainer for \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\"" Feb 13 15:22:03.092553 systemd[1]: run-containerd-runc-k8s.io-ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a-runc.NT1YJ5.mount: Deactivated successfully. Feb 13 15:22:03.101830 systemd[1]: Started cri-containerd-ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a.scope - libcontainer container ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a. Feb 13 15:22:03.154112 systemd[1]: cri-containerd-ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a.scope: Deactivated successfully. Feb 13 15:22:03.160206 containerd[1932]: time="2025-02-13T15:22:03.159745695Z" level=info msg="StartContainer for \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\" returns successfully" Feb 13 15:22:03.200106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a-rootfs.mount: Deactivated successfully. Feb 13 15:22:03.269260 containerd[1932]: time="2025-02-13T15:22:03.268440628Z" level=info msg="shim disconnected" id=ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a namespace=k8s.io Feb 13 15:22:03.269260 containerd[1932]: time="2025-02-13T15:22:03.268567588Z" level=warning msg="cleaning up after shim disconnected" id=ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a namespace=k8s.io Feb 13 15:22:03.269260 containerd[1932]: time="2025-02-13T15:22:03.269174200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:22:04.016814 containerd[1932]: time="2025-02-13T15:22:04.016534239Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:22:04.053117 containerd[1932]: time="2025-02-13T15:22:04.052950964Z" level=info msg="CreateContainer within sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\"" Feb 13 15:22:04.054696 containerd[1932]: time="2025-02-13T15:22:04.053888776Z" level=info msg="StartContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\"" Feb 13 15:22:04.108972 systemd[1]: Started cri-containerd-b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517.scope - libcontainer container b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517. 
Feb 13 15:22:04.172435 containerd[1932]: time="2025-02-13T15:22:04.172295992Z" level=info msg="StartContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" returns successfully" Feb 13 15:22:04.324046 kubelet[3391]: I0213 15:22:04.323896 3391 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:22:04.386835 systemd[1]: Created slice kubepods-burstable-pod9cf6c1b4_7c38_437a_869a_c98576c00217.slice - libcontainer container kubepods-burstable-pod9cf6c1b4_7c38_437a_869a_c98576c00217.slice. Feb 13 15:22:04.409166 systemd[1]: Created slice kubepods-burstable-podccf7b9bc_6672_41db_a82d_4d23d07c4e78.slice - libcontainer container kubepods-burstable-podccf7b9bc_6672_41db_a82d_4d23d07c4e78.slice. Feb 13 15:22:04.423413 kubelet[3391]: I0213 15:22:04.423147 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljw2\" (UniqueName: \"kubernetes.io/projected/ccf7b9bc-6672-41db-a82d-4d23d07c4e78-kube-api-access-hljw2\") pod \"coredns-6f6b679f8f-g4rjf\" (UID: \"ccf7b9bc-6672-41db-a82d-4d23d07c4e78\") " pod="kube-system/coredns-6f6b679f8f-g4rjf" Feb 13 15:22:04.423413 kubelet[3391]: I0213 15:22:04.423222 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cf6c1b4-7c38-437a-869a-c98576c00217-config-volume\") pod \"coredns-6f6b679f8f-z7lcd\" (UID: \"9cf6c1b4-7c38-437a-869a-c98576c00217\") " pod="kube-system/coredns-6f6b679f8f-z7lcd" Feb 13 15:22:04.423413 kubelet[3391]: I0213 15:22:04.423266 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ccf7b9bc-6672-41db-a82d-4d23d07c4e78-config-volume\") pod \"coredns-6f6b679f8f-g4rjf\" (UID: \"ccf7b9bc-6672-41db-a82d-4d23d07c4e78\") " pod="kube-system/coredns-6f6b679f8f-g4rjf" Feb 13 15:22:04.423413 kubelet[3391]: I0213 15:22:04.423304 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kkh\" (UniqueName: \"kubernetes.io/projected/9cf6c1b4-7c38-437a-869a-c98576c00217-kube-api-access-t6kkh\") pod \"coredns-6f6b679f8f-z7lcd\" (UID: \"9cf6c1b4-7c38-437a-869a-c98576c00217\") " pod="kube-system/coredns-6f6b679f8f-z7lcd" Feb 13 15:22:04.701313 containerd[1932]: time="2025-02-13T15:22:04.700721899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z7lcd,Uid:9cf6c1b4-7c38-437a-869a-c98576c00217,Namespace:kube-system,Attempt:0,}" Feb 13 15:22:04.722033 containerd[1932]: time="2025-02-13T15:22:04.721956295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g4rjf,Uid:ccf7b9bc-6672-41db-a82d-4d23d07c4e78,Namespace:kube-system,Attempt:0,}" Feb 13 15:22:04.834002 containerd[1932]: time="2025-02-13T15:22:04.833934308Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:04.836458 containerd[1932]: time="2025-02-13T15:22:04.836279888Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:22:04.838769 containerd[1932]: time="2025-02-13T15:22:04.838707512Z" level=info msg="ImageCreate event 
name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:22:04.857060 containerd[1932]: time="2025-02-13T15:22:04.856583756Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.377360143s" Feb 13 15:22:04.857060 containerd[1932]: time="2025-02-13T15:22:04.856739024Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:22:04.864878 containerd[1932]: time="2025-02-13T15:22:04.864806504Z" level=info msg="CreateContainer within sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:22:04.908754 containerd[1932]: time="2025-02-13T15:22:04.908612012Z" level=info msg="CreateContainer within sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\"" Feb 13 15:22:04.911770 containerd[1932]: time="2025-02-13T15:22:04.909697244Z" level=info msg="StartContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\"" Feb 13 15:22:04.969828 systemd[1]: Started cri-containerd-ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319.scope - libcontainer container ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319. Feb 13 15:22:05.060063 systemd[1]: run-containerd-runc-k8s.io-b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517-runc.Z0SNPZ.mount: Deactivated successfully. Feb 13 15:22:05.090416 kubelet[3391]: I0213 15:22:05.090281 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vr29b" podStartSLOduration=6.524187035 podStartE2EDuration="14.090253349s" podCreationTimestamp="2025-02-13 15:21:51 +0000 UTC" firstStartedPulling="2025-02-13 15:21:51.912060571 +0000 UTC m=+5.315775171" lastFinishedPulling="2025-02-13 15:21:59.478126885 +0000 UTC m=+12.881841485" observedRunningTime="2025-02-13 15:22:05.088830569 +0000 UTC m=+18.492545181" watchObservedRunningTime="2025-02-13 15:22:05.090253349 +0000 UTC m=+18.493967961" Feb 13 15:22:05.142541 containerd[1932]: time="2025-02-13T15:22:05.142465325Z" level=info msg="StartContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" returns successfully" Feb 13 15:22:09.122483 systemd-networkd[1852]: cilium_host: Link UP Feb 13 15:22:09.124139 systemd-networkd[1852]: cilium_net: Link UP Feb 13 15:22:09.126212 systemd-networkd[1852]: cilium_net: Gained carrier Feb 13 15:22:09.126703 systemd-networkd[1852]: cilium_host: Gained carrier Feb 13 15:22:09.131003 (udev-worker)[4220]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:22:09.137705 (udev-worker)[4221]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:22:09.314796 systemd-networkd[1852]: cilium_vxlan: Link UP Feb 13 15:22:09.314812 systemd-networkd[1852]: cilium_vxlan: Gained carrier Feb 13 15:22:09.790565 kernel: NET: Registered PF_ALG protocol family Feb 13 15:22:10.055839 systemd-networkd[1852]: cilium_net: Gained IPv6LL Feb 13 15:22:10.119744 systemd-networkd[1852]: cilium_host: Gained IPv6LL Feb 13 15:22:11.078189 systemd-networkd[1852]: lxc_health: Link UP Feb 13 15:22:11.079840 (udev-worker)[4219]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:22:11.085651 systemd-networkd[1852]: lxc_health: Gained carrier Feb 13 15:22:11.143727 systemd-networkd[1852]: cilium_vxlan: Gained IPv6LL Feb 13 15:22:11.763335 kubelet[3391]: I0213 15:22:11.763214 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-k4vlw" podStartSLOduration=8.069302471 podStartE2EDuration="20.763189094s" podCreationTimestamp="2025-02-13 15:21:51 +0000 UTC" firstStartedPulling="2025-02-13 15:21:52.164891777 +0000 UTC m=+5.568606413" lastFinishedPulling="2025-02-13 15:22:04.858778424 +0000 UTC m=+18.262493036" observedRunningTime="2025-02-13 15:22:06.06124089 +0000 UTC m=+19.464955502" watchObservedRunningTime="2025-02-13 15:22:11.763189094 +0000 UTC m=+25.166903694" Feb 13 15:22:11.807751 kernel: eth0: renamed from tmpa1f19 Feb 13 15:22:11.817018 systemd-networkd[1852]: lxcdad8a3465a30: Link UP Feb 13 15:22:11.817758 systemd-networkd[1852]: lxcdad8a3465a30: Gained carrier Feb 13 15:22:11.872838 systemd-networkd[1852]: lxc462b46351a39: Link UP Feb 13 15:22:11.880630 kernel: eth0: renamed from tmpd165a Feb 13 15:22:11.887401 systemd-networkd[1852]: lxc462b46351a39: Gained carrier Feb 13 15:22:12.359786 systemd-networkd[1852]: lxc_health: Gained IPv6LL Feb 13 15:22:13.194630 systemd-networkd[1852]: lxc462b46351a39: Gained IPv6LL Feb 13 15:22:13.195942 systemd-networkd[1852]: lxcdad8a3465a30: Gained IPv6LL Feb 13 15:22:15.613334 ntpd[1917]: Listen normally on 8 cilium_host 192.168.0.31:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 8 cilium_host 192.168.0.31:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 9 cilium_net [fe80::d45e:faff:fead:74df%4]:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 10 cilium_host [fe80::e062:19ff:fe5a:7465%5]:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 11 cilium_vxlan [fe80::fc48:99ff:fee0:589c%6]:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 12 lxc_health [fe80::304e:97ff:fe5a:49a8%8]:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 13 lxcdad8a3465a30 [fe80::f44f:89ff:fead:747f%10]:123 Feb 13 15:22:15.614561 ntpd[1917]: 13 Feb 15:22:15 ntpd[1917]: Listen normally on 14 lxc462b46351a39 [fe80::3416:30ff:fe73:6883%12]:123 Feb 13 15:22:15.613467 ntpd[1917]: Listen normally on 9 cilium_net [fe80::d45e:faff:fead:74df%4]:123 Feb 13 15:22:15.613592 ntpd[1917]: Listen normally on 10 cilium_host [fe80::e062:19ff:fe5a:7465%5]:123 Feb 13 15:22:15.613663 ntpd[1917]: Listen normally on 11 cilium_vxlan [fe80::fc48:99ff:fee0:589c%6]:123 Feb 13 15:22:15.613730 ntpd[1917]: Listen normally on 12 lxc_health [fe80::304e:97ff:fe5a:49a8%8]:123 Feb 13 15:22:15.613796 ntpd[1917]: Listen normally on 13 lxcdad8a3465a30 [fe80::f44f:89ff:fead:747f%10]:123 Feb 13 15:22:15.613861 ntpd[1917]: Listen normally on 14 lxc462b46351a39 
[fe80::3416:30ff:fe73:6883%12]:123 Feb 13 15:22:20.098219 containerd[1932]: time="2025-02-13T15:22:20.098060731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:22:20.101797 containerd[1932]: time="2025-02-13T15:22:20.101168971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:22:20.101797 containerd[1932]: time="2025-02-13T15:22:20.101420851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:20.102966 containerd[1932]: time="2025-02-13T15:22:20.102595267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:20.118446 containerd[1932]: time="2025-02-13T15:22:20.117167479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:22:20.118446 containerd[1932]: time="2025-02-13T15:22:20.117272347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:22:20.118446 containerd[1932]: time="2025-02-13T15:22:20.117301843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:20.118446 containerd[1932]: time="2025-02-13T15:22:20.117463303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:22:20.191631 systemd[1]: Started cri-containerd-d165a8e51f0d084b96bf4a829d6cafa50e472d5d172ef819005da639e97ed486.scope - libcontainer container d165a8e51f0d084b96bf4a829d6cafa50e472d5d172ef819005da639e97ed486. Feb 13 15:22:20.207340 systemd[1]: Started cri-containerd-a1f196e697af451e0604fb689474d178eadecfecb329624e8a31fcae5ea260cc.scope - libcontainer container a1f196e697af451e0604fb689474d178eadecfecb329624e8a31fcae5ea260cc. 
Feb 13 15:22:20.344414 containerd[1932]: time="2025-02-13T15:22:20.343948749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-g4rjf,Uid:ccf7b9bc-6672-41db-a82d-4d23d07c4e78,Namespace:kube-system,Attempt:0,} returns sandbox id \"d165a8e51f0d084b96bf4a829d6cafa50e472d5d172ef819005da639e97ed486\"" Feb 13 15:22:20.367054 containerd[1932]: time="2025-02-13T15:22:20.364840881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z7lcd,Uid:9cf6c1b4-7c38-437a-869a-c98576c00217,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1f196e697af451e0604fb689474d178eadecfecb329624e8a31fcae5ea260cc\"" Feb 13 15:22:20.367054 containerd[1932]: time="2025-02-13T15:22:20.366793281Z" level=info msg="CreateContainer within sandbox \"d165a8e51f0d084b96bf4a829d6cafa50e472d5d172ef819005da639e97ed486\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:22:20.378702 containerd[1932]: time="2025-02-13T15:22:20.377461821Z" level=info msg="CreateContainer within sandbox \"a1f196e697af451e0604fb689474d178eadecfecb329624e8a31fcae5ea260cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:22:20.420755 containerd[1932]: time="2025-02-13T15:22:20.420674661Z" level=info msg="CreateContainer within sandbox \"a1f196e697af451e0604fb689474d178eadecfecb329624e8a31fcae5ea260cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e4d28df5bddec175536328cfafcacc7e44b911c554fcace8c90ffb602c163c9\"" Feb 13 15:22:20.422942 containerd[1932]: time="2025-02-13T15:22:20.422787777Z" level=info msg="StartContainer for \"3e4d28df5bddec175536328cfafcacc7e44b911c554fcace8c90ffb602c163c9\"" Feb 13 15:22:20.432705 containerd[1932]: time="2025-02-13T15:22:20.432626433Z" level=info msg="CreateContainer within sandbox \"d165a8e51f0d084b96bf4a829d6cafa50e472d5d172ef819005da639e97ed486\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32481259114a4d91464dc43f75382a266862f0eddcc460eb3a5cb5b6c0462ddb\"" Feb 13 15:22:20.439677 containerd[1932]: time="2025-02-13T15:22:20.439620249Z" level=info msg="StartContainer for \"32481259114a4d91464dc43f75382a266862f0eddcc460eb3a5cb5b6c0462ddb\"" Feb 13 15:22:20.519807 systemd[1]: Started cri-containerd-3e4d28df5bddec175536328cfafcacc7e44b911c554fcace8c90ffb602c163c9.scope - libcontainer container 3e4d28df5bddec175536328cfafcacc7e44b911c554fcace8c90ffb602c163c9. Feb 13 15:22:20.531797 systemd[1]: Started cri-containerd-32481259114a4d91464dc43f75382a266862f0eddcc460eb3a5cb5b6c0462ddb.scope - libcontainer container 32481259114a4d91464dc43f75382a266862f0eddcc460eb3a5cb5b6c0462ddb. 
Feb 13 15:22:20.639755 containerd[1932]: time="2025-02-13T15:22:20.639408538Z" level=info msg="StartContainer for \"3e4d28df5bddec175536328cfafcacc7e44b911c554fcace8c90ffb602c163c9\" returns successfully" Feb 13 15:22:20.657787 containerd[1932]: time="2025-02-13T15:22:20.657693058Z" level=info msg="StartContainer for \"32481259114a4d91464dc43f75382a266862f0eddcc460eb3a5cb5b6c0462ddb\" returns successfully" Feb 13 15:22:21.154963 kubelet[3391]: I0213 15:22:21.154373 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z7lcd" podStartSLOduration=30.154349409 podStartE2EDuration="30.154349409s" podCreationTimestamp="2025-02-13 15:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:22:21.108833696 +0000 UTC m=+34.512548308" watchObservedRunningTime="2025-02-13 15:22:21.154349409 +0000 UTC m=+34.558064033" Feb 13 15:22:33.695612 systemd[1]: Started sshd@9-172.31.28.93:22-147.75.109.163:40086.service - OpenSSH per-connection server daemon (147.75.109.163:40086). Feb 13 15:22:33.881852 sshd[4755]: Accepted publickey for core from 147.75.109.163 port 40086 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:33.884477 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:33.892967 systemd-logind[1923]: New session 10 of user core. Feb 13 15:22:33.901832 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:22:34.179641 sshd[4757]: Connection closed by 147.75.109.163 port 40086 Feb 13 15:22:34.181869 sshd-session[4755]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:34.188038 systemd[1]: sshd@9-172.31.28.93:22-147.75.109.163:40086.service: Deactivated successfully. Feb 13 15:22:34.193151 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:22:34.196929 systemd-logind[1923]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:22:34.200711 systemd-logind[1923]: Removed session 10. Feb 13 15:22:39.219099 systemd[1]: Started sshd@10-172.31.28.93:22-147.75.109.163:40102.service - OpenSSH per-connection server daemon (147.75.109.163:40102). Feb 13 15:22:39.411995 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 40102 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:39.414475 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:39.422816 systemd-logind[1923]: New session 11 of user core. Feb 13 15:22:39.435779 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:22:39.687112 sshd[4774]: Connection closed by 147.75.109.163 port 40102 Feb 13 15:22:39.686216 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:39.693141 systemd[1]: sshd@10-172.31.28.93:22-147.75.109.163:40102.service: Deactivated successfully. Feb 13 15:22:39.696646 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:22:39.698751 systemd-logind[1923]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:22:39.701057 systemd-logind[1923]: Removed session 11. Feb 13 15:22:44.733473 systemd[1]: Started sshd@11-172.31.28.93:22-147.75.109.163:49606.service - OpenSSH per-connection server daemon (147.75.109.163:49606). 
Feb 13 15:22:44.910721 sshd[4787]: Accepted publickey for core from 147.75.109.163 port 49606 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:44.913219 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:44.922003 systemd-logind[1923]: New session 12 of user core. Feb 13 15:22:44.933797 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:22:45.180853 sshd[4789]: Connection closed by 147.75.109.163 port 49606 Feb 13 15:22:45.181712 sshd-session[4787]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:45.188144 systemd[1]: sshd@11-172.31.28.93:22-147.75.109.163:49606.service: Deactivated successfully. Feb 13 15:22:45.191356 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:22:45.193494 systemd-logind[1923]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:22:45.195396 systemd-logind[1923]: Removed session 12. Feb 13 15:22:50.229031 systemd[1]: Started sshd@12-172.31.28.93:22-147.75.109.163:38672.service - OpenSSH per-connection server daemon (147.75.109.163:38672). Feb 13 15:22:50.412142 sshd[4804]: Accepted publickey for core from 147.75.109.163 port 38672 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:50.414638 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:50.422392 systemd-logind[1923]: New session 13 of user core. Feb 13 15:22:50.430780 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:22:50.696102 sshd[4806]: Connection closed by 147.75.109.163 port 38672 Feb 13 15:22:50.697057 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:50.702287 systemd-logind[1923]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:22:50.703146 systemd[1]: sshd@12-172.31.28.93:22-147.75.109.163:38672.service: Deactivated successfully. Feb 13 15:22:50.708089 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:22:50.713550 systemd-logind[1923]: Removed session 13. Feb 13 15:22:50.735009 systemd[1]: Started sshd@13-172.31.28.93:22-147.75.109.163:38678.service - OpenSSH per-connection server daemon (147.75.109.163:38678). Feb 13 15:22:50.932721 sshd[4817]: Accepted publickey for core from 147.75.109.163 port 38678 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:50.935304 sshd-session[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:50.943023 systemd-logind[1923]: New session 14 of user core. Feb 13 15:22:50.954803 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:22:51.284392 sshd[4819]: Connection closed by 147.75.109.163 port 38678 Feb 13 15:22:51.284120 sshd-session[4817]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:51.293621 systemd[1]: sshd@13-172.31.28.93:22-147.75.109.163:38678.service: Deactivated successfully. Feb 13 15:22:51.300704 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:22:51.309229 systemd-logind[1923]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:22:51.340033 systemd[1]: Started sshd@14-172.31.28.93:22-147.75.109.163:38688.service - OpenSSH per-connection server daemon (147.75.109.163:38688). Feb 13 15:22:51.342622 systemd-logind[1923]: Removed session 14. 
Feb 13 15:22:51.529883 sshd[4828]: Accepted publickey for core from 147.75.109.163 port 38688 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:51.532307 sshd-session[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:51.543870 systemd-logind[1923]: New session 15 of user core. Feb 13 15:22:51.551763 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:22:51.791703 sshd[4830]: Connection closed by 147.75.109.163 port 38688 Feb 13 15:22:51.792573 sshd-session[4828]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:51.799238 systemd[1]: sshd@14-172.31.28.93:22-147.75.109.163:38688.service: Deactivated successfully. Feb 13 15:22:51.802897 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:22:51.805103 systemd-logind[1923]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:22:51.807008 systemd-logind[1923]: Removed session 15. Feb 13 15:22:56.834022 systemd[1]: Started sshd@15-172.31.28.93:22-147.75.109.163:38700.service - OpenSSH per-connection server daemon (147.75.109.163:38700). Feb 13 15:22:57.027602 sshd[4842]: Accepted publickey for core from 147.75.109.163 port 38700 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:22:57.030720 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:22:57.037958 systemd-logind[1923]: New session 16 of user core. Feb 13 15:22:57.046753 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:22:57.295544 sshd[4844]: Connection closed by 147.75.109.163 port 38700 Feb 13 15:22:57.295836 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Feb 13 15:22:57.301052 systemd-logind[1923]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:22:57.301932 systemd[1]: sshd@15-172.31.28.93:22-147.75.109.163:38700.service: Deactivated successfully. Feb 13 15:22:57.307586 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:22:57.310893 systemd-logind[1923]: Removed session 16. Feb 13 15:23:02.337350 systemd[1]: Started sshd@16-172.31.28.93:22-147.75.109.163:40818.service - OpenSSH per-connection server daemon (147.75.109.163:40818). Feb 13 15:23:02.529591 sshd[4856]: Accepted publickey for core from 147.75.109.163 port 40818 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:02.532008 sshd-session[4856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:02.539369 systemd-logind[1923]: New session 17 of user core. Feb 13 15:23:02.551779 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:23:02.799918 sshd[4858]: Connection closed by 147.75.109.163 port 40818 Feb 13 15:23:02.800982 sshd-session[4856]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:02.807085 systemd-logind[1923]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:23:02.808444 systemd[1]: sshd@16-172.31.28.93:22-147.75.109.163:40818.service: Deactivated successfully. Feb 13 15:23:02.812540 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:23:02.815454 systemd-logind[1923]: Removed session 17. Feb 13 15:23:07.842965 systemd[1]: Started sshd@17-172.31.28.93:22-147.75.109.163:40826.service - OpenSSH per-connection server daemon (147.75.109.163:40826). 
Feb 13 15:23:08.026075 sshd[4869]: Accepted publickey for core from 147.75.109.163 port 40826 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:08.028591 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:08.037467 systemd-logind[1923]: New session 18 of user core. Feb 13 15:23:08.046825 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:23:08.290089 sshd[4871]: Connection closed by 147.75.109.163 port 40826 Feb 13 15:23:08.291326 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:08.297975 systemd[1]: sshd@17-172.31.28.93:22-147.75.109.163:40826.service: Deactivated successfully. Feb 13 15:23:08.301332 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:23:08.303494 systemd-logind[1923]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:23:08.305540 systemd-logind[1923]: Removed session 18. Feb 13 15:23:13.331017 systemd[1]: Started sshd@18-172.31.28.93:22-147.75.109.163:51678.service - OpenSSH per-connection server daemon (147.75.109.163:51678). Feb 13 15:23:13.518133 sshd[4883]: Accepted publickey for core from 147.75.109.163 port 51678 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:13.520628 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:13.530064 systemd-logind[1923]: New session 19 of user core. Feb 13 15:23:13.539813 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:23:13.780838 sshd[4885]: Connection closed by 147.75.109.163 port 51678 Feb 13 15:23:13.780122 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:13.786952 systemd[1]: sshd@18-172.31.28.93:22-147.75.109.163:51678.service: Deactivated successfully. Feb 13 15:23:13.790856 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:23:13.792319 systemd-logind[1923]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:23:13.794166 systemd-logind[1923]: Removed session 19. Feb 13 15:23:13.819046 systemd[1]: Started sshd@19-172.31.28.93:22-147.75.109.163:51688.service - OpenSSH per-connection server daemon (147.75.109.163:51688). Feb 13 15:23:14.020035 sshd[4895]: Accepted publickey for core from 147.75.109.163 port 51688 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:14.021733 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:14.028963 systemd-logind[1923]: New session 20 of user core. Feb 13 15:23:14.039824 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:23:14.347097 sshd[4897]: Connection closed by 147.75.109.163 port 51688 Feb 13 15:23:14.348075 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:14.354309 systemd[1]: sshd@19-172.31.28.93:22-147.75.109.163:51688.service: Deactivated successfully. Feb 13 15:23:14.359270 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:23:14.361477 systemd-logind[1923]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:23:14.363588 systemd-logind[1923]: Removed session 20. Feb 13 15:23:14.390064 systemd[1]: Started sshd@20-172.31.28.93:22-147.75.109.163:51700.service - OpenSSH per-connection server daemon (147.75.109.163:51700). 
Feb 13 15:23:14.581665 sshd[4906]: Accepted publickey for core from 147.75.109.163 port 51700 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:14.584094 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:14.592804 systemd-logind[1923]: New session 21 of user core. Feb 13 15:23:14.603790 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:23:16.928267 sshd[4908]: Connection closed by 147.75.109.163 port 51700 Feb 13 15:23:16.927481 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:16.938349 systemd[1]: sshd@20-172.31.28.93:22-147.75.109.163:51700.service: Deactivated successfully. Feb 13 15:23:16.938990 systemd-logind[1923]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:23:16.946222 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:23:16.965137 systemd-logind[1923]: Removed session 21. Feb 13 15:23:16.973189 systemd[1]: Started sshd@21-172.31.28.93:22-147.75.109.163:51716.service - OpenSSH per-connection server daemon (147.75.109.163:51716). Feb 13 15:23:17.161098 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 51716 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:17.163602 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:17.171250 systemd-logind[1923]: New session 22 of user core. Feb 13 15:23:17.179754 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:23:17.672947 sshd[4926]: Connection closed by 147.75.109.163 port 51716 Feb 13 15:23:17.675058 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:17.681880 systemd[1]: sshd@21-172.31.28.93:22-147.75.109.163:51716.service: Deactivated successfully. Feb 13 15:23:17.688437 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 15:23:17.692268 systemd-logind[1923]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:23:17.694487 systemd-logind[1923]: Removed session 22. Feb 13 15:23:17.716054 systemd[1]: Started sshd@22-172.31.28.93:22-147.75.109.163:51724.service - OpenSSH per-connection server daemon (147.75.109.163:51724). Feb 13 15:23:17.898726 sshd[4935]: Accepted publickey for core from 147.75.109.163 port 51724 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:17.901160 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:17.910031 systemd-logind[1923]: New session 23 of user core. Feb 13 15:23:17.917788 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 15:23:18.154856 sshd[4937]: Connection closed by 147.75.109.163 port 51724 Feb 13 15:23:18.155923 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:18.163643 systemd[1]: sshd@22-172.31.28.93:22-147.75.109.163:51724.service: Deactivated successfully. Feb 13 15:23:18.168647 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 15:23:18.170970 systemd-logind[1923]: Session 23 logged out. Waiting for processes to exit. Feb 13 15:23:18.172968 systemd-logind[1923]: Removed session 23. Feb 13 15:23:23.188897 systemd[1]: Started sshd@23-172.31.28.93:22-147.75.109.163:60656.service - OpenSSH per-connection server daemon (147.75.109.163:60656). 
Feb 13 15:23:23.389365 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 60656 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:23.391864 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:23.399367 systemd-logind[1923]: New session 24 of user core. Feb 13 15:23:23.409807 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 15:23:23.650687 sshd[4953]: Connection closed by 147.75.109.163 port 60656 Feb 13 15:23:23.652353 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:23.660343 systemd[1]: sshd@23-172.31.28.93:22-147.75.109.163:60656.service: Deactivated successfully. Feb 13 15:23:23.660347 systemd-logind[1923]: Session 24 logged out. Waiting for processes to exit. Feb 13 15:23:23.664830 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 15:23:23.669274 systemd-logind[1923]: Removed session 24. Feb 13 15:23:28.691023 systemd[1]: Started sshd@24-172.31.28.93:22-147.75.109.163:60670.service - OpenSSH per-connection server daemon (147.75.109.163:60670). Feb 13 15:23:28.887099 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 60670 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:28.888200 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:28.897751 systemd-logind[1923]: New session 25 of user core. Feb 13 15:23:28.911838 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 15:23:29.180359 sshd[4969]: Connection closed by 147.75.109.163 port 60670 Feb 13 15:23:29.181247 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:29.186629 systemd-logind[1923]: Session 25 logged out. Waiting for processes to exit. Feb 13 15:23:29.188100 systemd[1]: sshd@24-172.31.28.93:22-147.75.109.163:60670.service: Deactivated successfully. Feb 13 15:23:29.191314 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 15:23:29.196057 systemd-logind[1923]: Removed session 25. Feb 13 15:23:34.224021 systemd[1]: Started sshd@25-172.31.28.93:22-147.75.109.163:45028.service - OpenSSH per-connection server daemon (147.75.109.163:45028). Feb 13 15:23:34.399165 sshd[4980]: Accepted publickey for core from 147.75.109.163 port 45028 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:34.403086 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:34.415156 systemd-logind[1923]: New session 26 of user core. Feb 13 15:23:34.422851 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 15:23:34.660359 sshd[4982]: Connection closed by 147.75.109.163 port 45028 Feb 13 15:23:34.661232 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:34.669401 systemd[1]: sshd@25-172.31.28.93:22-147.75.109.163:45028.service: Deactivated successfully. Feb 13 15:23:34.674766 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 15:23:34.676376 systemd-logind[1923]: Session 26 logged out. Waiting for processes to exit. Feb 13 15:23:34.680105 systemd-logind[1923]: Removed session 26. Feb 13 15:23:39.700021 systemd[1]: Started sshd@26-172.31.28.93:22-147.75.109.163:40516.service - OpenSSH per-connection server daemon (147.75.109.163:40516). 
Feb 13 15:23:39.886449 sshd[4994]: Accepted publickey for core from 147.75.109.163 port 40516 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:39.888903 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:39.897547 systemd-logind[1923]: New session 27 of user core. Feb 13 15:23:39.907819 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 15:23:40.144379 sshd[4996]: Connection closed by 147.75.109.163 port 40516 Feb 13 15:23:40.143601 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:40.149016 systemd[1]: sshd@26-172.31.28.93:22-147.75.109.163:40516.service: Deactivated successfully. Feb 13 15:23:40.152645 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 15:23:40.156185 systemd-logind[1923]: Session 27 logged out. Waiting for processes to exit. Feb 13 15:23:40.158577 systemd-logind[1923]: Removed session 27. Feb 13 15:23:40.184021 systemd[1]: Started sshd@27-172.31.28.93:22-147.75.109.163:40530.service - OpenSSH per-connection server daemon (147.75.109.163:40530). Feb 13 15:23:40.375640 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 40530 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:40.378077 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:40.385556 systemd-logind[1923]: New session 28 of user core. Feb 13 15:23:40.394758 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 15:23:43.369403 kubelet[3391]: I0213 15:23:43.369307 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-g4rjf" podStartSLOduration=112.369283229 podStartE2EDuration="1m52.369283229s" podCreationTimestamp="2025-02-13 15:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:22:21.238947309 +0000 UTC m=+34.642661933" watchObservedRunningTime="2025-02-13 15:23:43.369283229 +0000 UTC m=+116.772997841" Feb 13 15:23:43.398638 containerd[1932]: time="2025-02-13T15:23:43.397074473Z" level=info msg="StopContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" with timeout 30 (s)" Feb 13 15:23:43.401835 containerd[1932]: time="2025-02-13T15:23:43.400378517Z" level=info msg="Stop container \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" with signal terminated" Feb 13 15:23:43.438961 systemd[1]: cri-containerd-ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319.scope: Deactivated successfully. 
Feb 13 15:23:43.449657 containerd[1932]: time="2025-02-13T15:23:43.449060501Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:23:43.465337 containerd[1932]: time="2025-02-13T15:23:43.465183341Z" level=info msg="StopContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" with timeout 2 (s)" Feb 13 15:23:43.466465 containerd[1932]: time="2025-02-13T15:23:43.466328597Z" level=info msg="Stop container \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" with signal terminated" Feb 13 15:23:43.484239 systemd-networkd[1852]: lxc_health: Link DOWN Feb 13 15:23:43.484258 systemd-networkd[1852]: lxc_health: Lost carrier Feb 13 15:23:43.510401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319-rootfs.mount: Deactivated successfully. Feb 13 15:23:43.515633 systemd[1]: cri-containerd-b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517.scope: Deactivated successfully. Feb 13 15:23:43.516632 systemd[1]: cri-containerd-b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517.scope: Consumed 14.243s CPU time. Feb 13 15:23:43.532139 containerd[1932]: time="2025-02-13T15:23:43.531907902Z" level=info msg="shim disconnected" id=ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319 namespace=k8s.io Feb 13 15:23:43.532139 containerd[1932]: time="2025-02-13T15:23:43.532045890Z" level=warning msg="cleaning up after shim disconnected" id=ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319 namespace=k8s.io Feb 13 15:23:43.532139 containerd[1932]: time="2025-02-13T15:23:43.532068150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:43.559612 containerd[1932]: time="2025-02-13T15:23:43.559433610Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:23:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:23:43.568551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517-rootfs.mount: Deactivated successfully. Feb 13 15:23:43.570199 containerd[1932]: time="2025-02-13T15:23:43.569147034Z" level=info msg="StopContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" returns successfully" Feb 13 15:23:43.572080 containerd[1932]: time="2025-02-13T15:23:43.571828098Z" level=info msg="StopPodSandbox for \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\"" Feb 13 15:23:43.572080 containerd[1932]: time="2025-02-13T15:23:43.571895010Z" level=info msg="Container to stop \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.576494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655-shm.mount: Deactivated successfully. 
Feb 13 15:23:43.582391 containerd[1932]: time="2025-02-13T15:23:43.582095442Z" level=info msg="shim disconnected" id=b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517 namespace=k8s.io Feb 13 15:23:43.582391 containerd[1932]: time="2025-02-13T15:23:43.582173130Z" level=warning msg="cleaning up after shim disconnected" id=b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517 namespace=k8s.io Feb 13 15:23:43.582391 containerd[1932]: time="2025-02-13T15:23:43.582195030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:43.593619 systemd[1]: cri-containerd-0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655.scope: Deactivated successfully. Feb 13 15:23:43.620727 containerd[1932]: time="2025-02-13T15:23:43.618963306Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:23:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:23:43.626034 containerd[1932]: time="2025-02-13T15:23:43.625962858Z" level=info msg="StopContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" returns successfully" Feb 13 15:23:43.626833 containerd[1932]: time="2025-02-13T15:23:43.626777982Z" level=info msg="StopPodSandbox for \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\"" Feb 13 15:23:43.626950 containerd[1932]: time="2025-02-13T15:23:43.626843118Z" level=info msg="Container to stop \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.626950 containerd[1932]: time="2025-02-13T15:23:43.626869374Z" level=info msg="Container to stop \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.626950 containerd[1932]: time="2025-02-13T15:23:43.626891118Z" level=info msg="Container to stop \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.626950 containerd[1932]: time="2025-02-13T15:23:43.626913306Z" level=info msg="Container to stop \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.626950 containerd[1932]: time="2025-02-13T15:23:43.626934330Z" level=info msg="Container to stop \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:23:43.635029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7-shm.mount: Deactivated successfully. Feb 13 15:23:43.647868 systemd[1]: cri-containerd-565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7.scope: Deactivated successfully. 
Feb 13 15:23:43.657641 containerd[1932]: time="2025-02-13T15:23:43.657203802Z" level=info msg="shim disconnected" id=0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655 namespace=k8s.io Feb 13 15:23:43.657641 containerd[1932]: time="2025-02-13T15:23:43.657287034Z" level=warning msg="cleaning up after shim disconnected" id=0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655 namespace=k8s.io Feb 13 15:23:43.657641 containerd[1932]: time="2025-02-13T15:23:43.657310386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:43.695099 containerd[1932]: time="2025-02-13T15:23:43.694819951Z" level=info msg="TearDown network for sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" successfully" Feb 13 15:23:43.695099 containerd[1932]: time="2025-02-13T15:23:43.694877263Z" level=info msg="StopPodSandbox for \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" returns successfully" Feb 13 15:23:43.713305 containerd[1932]: time="2025-02-13T15:23:43.712077379Z" level=info msg="shim disconnected" id=565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7 namespace=k8s.io Feb 13 15:23:43.713305 containerd[1932]: time="2025-02-13T15:23:43.713052619Z" level=warning msg="cleaning up after shim disconnected" id=565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7 namespace=k8s.io Feb 13 15:23:43.713305 containerd[1932]: time="2025-02-13T15:23:43.713076919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:43.746647 containerd[1932]: time="2025-02-13T15:23:43.746449459Z" level=info msg="TearDown network for sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" successfully" Feb 13 15:23:43.746647 containerd[1932]: time="2025-02-13T15:23:43.746517619Z" level=info msg="StopPodSandbox for \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" returns successfully" Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857228 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmtjv\" (UniqueName: \"kubernetes.io/projected/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-kube-api-access-fmtjv\") pod \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\" (UID: \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\") " Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857299 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-cilium-config-path\") pod \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\" (UID: \"8db092e8-c7ad-4278-9be5-39ca9ed5ddfe\") " Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857339 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-cgroup\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857373 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-bpf-maps\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857409 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cni-path\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.857807 kubelet[3391]: I0213 15:23:43.857445 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hubble-tls\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857478 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hostproc\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857544 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-kernel\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857579 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-xtables-lock\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857649 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-clustermesh-secrets\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857685 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-run\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858276 kubelet[3391]: I0213 15:23:43.857723 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhmhh\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-kube-api-access-fhmhh\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858630 kubelet[3391]: I0213 15:23:43.857757 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-net\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.858630 kubelet[3391]: I0213 15:23:43.857869 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.861531 kubelet[3391]: I0213 15:23:43.860991 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.861531 kubelet[3391]: I0213 15:23:43.861098 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.861531 kubelet[3391]: I0213 15:23:43.861162 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.861531 kubelet[3391]: I0213 15:23:43.861205 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.863620 kubelet[3391]: I0213 15:23:43.862870 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.865083 kubelet[3391]: I0213 15:23:43.863037 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.865083 kubelet[3391]: I0213 15:23:43.863161 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.865083 kubelet[3391]: I0213 15:23:43.864922 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8db092e8-c7ad-4278-9be5-39ca9ed5ddfe" (UID: "8db092e8-c7ad-4278-9be5-39ca9ed5ddfe"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:23:43.869615 kubelet[3391]: I0213 15:23:43.869399 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-kube-api-access-fmtjv" (OuterVolumeSpecName: "kube-api-access-fmtjv") pod "8db092e8-c7ad-4278-9be5-39ca9ed5ddfe" (UID: "8db092e8-c7ad-4278-9be5-39ca9ed5ddfe"). InnerVolumeSpecName "kube-api-access-fmtjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:23:43.872136 kubelet[3391]: I0213 15:23:43.871917 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-kube-api-access-fhmhh" (OuterVolumeSpecName: "kube-api-access-fhmhh") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "kube-api-access-fhmhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:23:43.872136 kubelet[3391]: I0213 15:23:43.871982 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:23:43.872758 kubelet[3391]: I0213 15:23:43.872114 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:23:43.958191 kubelet[3391]: I0213 15:23:43.958119 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-config-path\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.958191 kubelet[3391]: I0213 15:23:43.958188 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-etc-cni-netd\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958228 3391 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-lib-modules\") pod \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\" (UID: \"b4bf7fec-5296-4126-b0c7-d1a76c24dd74\") " Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958288 3391 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-cgroup\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958313 3391 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-bpf-maps\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958334 3391 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cni-path\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958354 3391 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hubble-tls\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958415 kubelet[3391]: I0213 15:23:43.958373 3391 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-hostproc\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958414 3391 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-kernel\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958441 3391 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-xtables-lock\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958461 3391 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-clustermesh-secrets\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958482 3391 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-run\") on node \"ip-172-31-28-93\" DevicePath 
\"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958532 3391 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fhmhh\" (UniqueName: \"kubernetes.io/projected/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-kube-api-access-fhmhh\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958556 3391 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-host-proc-sys-net\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958578 3391 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fmtjv\" (UniqueName: \"kubernetes.io/projected/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-kube-api-access-fmtjv\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.958764 kubelet[3391]: I0213 15:23:43.958599 3391 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe-cilium-config-path\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:43.959159 kubelet[3391]: I0213 15:23:43.958648 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.959355 kubelet[3391]: I0213 15:23:43.959269 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:23:43.965004 kubelet[3391]: I0213 15:23:43.964935 3391 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4bf7fec-5296-4126-b0c7-d1a76c24dd74" (UID: "b4bf7fec-5296-4126-b0c7-d1a76c24dd74"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:23:44.059601 kubelet[3391]: I0213 15:23:44.059542 3391 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-cilium-config-path\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:44.059601 kubelet[3391]: I0213 15:23:44.059593 3391 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-etc-cni-netd\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:44.059801 kubelet[3391]: I0213 15:23:44.059618 3391 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4bf7fec-5296-4126-b0c7-d1a76c24dd74-lib-modules\") on node \"ip-172-31-28-93\" DevicePath \"\"" Feb 13 15:23:44.305312 kubelet[3391]: I0213 15:23:44.304427 3391 scope.go:117] "RemoveContainer" containerID="b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517" Feb 13 15:23:44.309405 containerd[1932]: time="2025-02-13T15:23:44.309102822Z" level=info msg="RemoveContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\"" Feb 13 15:23:44.322651 systemd[1]: Removed slice kubepods-burstable-podb4bf7fec_5296_4126_b0c7_d1a76c24dd74.slice - libcontainer container kubepods-burstable-podb4bf7fec_5296_4126_b0c7_d1a76c24dd74.slice. Feb 13 15:23:44.322905 systemd[1]: kubepods-burstable-podb4bf7fec_5296_4126_b0c7_d1a76c24dd74.slice: Consumed 14.389s CPU time. Feb 13 15:23:44.328881 containerd[1932]: time="2025-02-13T15:23:44.328741734Z" level=info msg="RemoveContainer for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" returns successfully" Feb 13 15:23:44.329862 kubelet[3391]: I0213 15:23:44.329576 3391 scope.go:117] "RemoveContainer" containerID="ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a" Feb 13 15:23:44.332430 systemd[1]: Removed slice kubepods-besteffort-pod8db092e8_c7ad_4278_9be5_39ca9ed5ddfe.slice - libcontainer container kubepods-besteffort-pod8db092e8_c7ad_4278_9be5_39ca9ed5ddfe.slice. 
Feb 13 15:23:44.334318 containerd[1932]: time="2025-02-13T15:23:44.333065742Z" level=info msg="RemoveContainer for \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\"" Feb 13 15:23:44.340759 containerd[1932]: time="2025-02-13T15:23:44.340706130Z" level=info msg="RemoveContainer for \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\" returns successfully" Feb 13 15:23:44.342795 kubelet[3391]: I0213 15:23:44.342662 3391 scope.go:117] "RemoveContainer" containerID="945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db" Feb 13 15:23:44.347054 containerd[1932]: time="2025-02-13T15:23:44.346964766Z" level=info msg="RemoveContainer for \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\"" Feb 13 15:23:44.354745 containerd[1932]: time="2025-02-13T15:23:44.354628710Z" level=info msg="RemoveContainer for \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\" returns successfully" Feb 13 15:23:44.355177 kubelet[3391]: I0213 15:23:44.355126 3391 scope.go:117] "RemoveContainer" containerID="5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d" Feb 13 15:23:44.357739 containerd[1932]: time="2025-02-13T15:23:44.357662898Z" level=info msg="RemoveContainer for \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\"" Feb 13 15:23:44.366903 containerd[1932]: time="2025-02-13T15:23:44.366727782Z" level=info msg="RemoveContainer for \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\" returns successfully" Feb 13 15:23:44.367100 kubelet[3391]: I0213 15:23:44.367059 3391 scope.go:117] "RemoveContainer" containerID="9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480" Feb 13 15:23:44.370393 containerd[1932]: time="2025-02-13T15:23:44.370307838Z" level=info msg="RemoveContainer for \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\"" Feb 13 15:23:44.379032 containerd[1932]: time="2025-02-13T15:23:44.378850914Z" level=info msg="RemoveContainer for \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\" returns successfully" Feb 13 15:23:44.380571 kubelet[3391]: I0213 15:23:44.380472 3391 scope.go:117] "RemoveContainer" containerID="b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517" Feb 13 15:23:44.383596 containerd[1932]: time="2025-02-13T15:23:44.381034074Z" level=error msg="ContainerStatus for \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\": not found" Feb 13 15:23:44.384098 kubelet[3391]: E0213 15:23:44.383899 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\": not found" containerID="b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517" Feb 13 15:23:44.384098 kubelet[3391]: I0213 15:23:44.384192 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517"} err="failed to get container status \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6aef60c3cc57f7df60da4d135d860851ddfe27c44b5852e3e518728ee682517\": not found" Feb 13 15:23:44.384098 kubelet[3391]: I0213 15:23:44.384536 
3391 scope.go:117] "RemoveContainer" containerID="ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a" Feb 13 15:23:44.385468 containerd[1932]: time="2025-02-13T15:23:44.385239930Z" level=error msg="ContainerStatus for \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\": not found" Feb 13 15:23:44.385802 kubelet[3391]: E0213 15:23:44.385750 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\": not found" containerID="ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a" Feb 13 15:23:44.385975 kubelet[3391]: I0213 15:23:44.385934 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a"} err="failed to get container status \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee1657bee9c3df3d124e44cfa10ac816e8e0798f8e898c536e4f79cd5839219a\": not found" Feb 13 15:23:44.386093 kubelet[3391]: I0213 15:23:44.386071 3391 scope.go:117] "RemoveContainer" containerID="945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db" Feb 13 15:23:44.386629 containerd[1932]: time="2025-02-13T15:23:44.386563122Z" level=error msg="ContainerStatus for \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\": not found" Feb 13 15:23:44.387115 kubelet[3391]: E0213 15:23:44.387019 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\": not found" containerID="945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db" Feb 13 15:23:44.387236 kubelet[3391]: I0213 15:23:44.387153 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db"} err="failed to get container status \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\": rpc error: code = NotFound desc = an error occurred when try to find container \"945001346afdea268e0584c852a323a6e8fd390094c147a5aea61ef487d868db\": not found" Feb 13 15:23:44.387301 kubelet[3391]: I0213 15:23:44.387233 3391 scope.go:117] "RemoveContainer" containerID="5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d" Feb 13 15:23:44.387746 containerd[1932]: time="2025-02-13T15:23:44.387620226Z" level=error msg="ContainerStatus for \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\": not found" Feb 13 15:23:44.388079 kubelet[3391]: E0213 15:23:44.388025 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\": 
not found" containerID="5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d" Feb 13 15:23:44.388158 kubelet[3391]: I0213 15:23:44.388095 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d"} err="failed to get container status \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f7d47e11efa3222305e3e1cd3446860a781fe261c00c53130a5d1de862b905d\": not found" Feb 13 15:23:44.388158 kubelet[3391]: I0213 15:23:44.388138 3391 scope.go:117] "RemoveContainer" containerID="9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480" Feb 13 15:23:44.389381 containerd[1932]: time="2025-02-13T15:23:44.388745694Z" level=error msg="ContainerStatus for \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\": not found" Feb 13 15:23:44.389584 kubelet[3391]: E0213 15:23:44.389011 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\": not found" containerID="9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480" Feb 13 15:23:44.389584 kubelet[3391]: I0213 15:23:44.389059 3391 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480"} err="failed to get container status \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\": rpc error: code = NotFound desc = an error occurred when try to find container \"9a11653f9771c9d1e2bfa27e2d6b6c0169729257cf9770e4ca584544e7aee480\": not found" Feb 13 15:23:44.389584 kubelet[3391]: I0213 15:23:44.389092 3391 scope.go:117] "RemoveContainer" containerID="ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319" Feb 13 15:23:44.392134 containerd[1932]: time="2025-02-13T15:23:44.392049990Z" level=info msg="RemoveContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\"" Feb 13 15:23:44.400648 containerd[1932]: time="2025-02-13T15:23:44.400568322Z" level=info msg="RemoveContainer for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" returns successfully" Feb 13 15:23:44.402035 containerd[1932]: time="2025-02-13T15:23:44.401853066Z" level=error msg="ContainerStatus for \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\": not found" Feb 13 15:23:44.402149 kubelet[3391]: I0213 15:23:44.400988 3391 scope.go:117] "RemoveContainer" containerID="ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319" Feb 13 15:23:44.402468 kubelet[3391]: E0213 15:23:44.402371 3391 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\": not found" containerID="ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319" Feb 13 15:23:44.402468 kubelet[3391]: I0213 15:23:44.402427 3391 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319"} err="failed to get container status \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae07f547e580029314b84f97cbd75a0631e3bf8ff50724cb67595d19e9e52319\": not found" Feb 13 15:23:44.414491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655-rootfs.mount: Deactivated successfully. Feb 13 15:23:44.414692 systemd[1]: var-lib-kubelet-pods-8db092e8\x2dc7ad\x2d4278\x2d9be5\x2d39ca9ed5ddfe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmtjv.mount: Deactivated successfully. Feb 13 15:23:44.414839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7-rootfs.mount: Deactivated successfully. Feb 13 15:23:44.414970 systemd[1]: var-lib-kubelet-pods-b4bf7fec\x2d5296\x2d4126\x2db0c7\x2dd1a76c24dd74-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfhmhh.mount: Deactivated successfully. Feb 13 15:23:44.415100 systemd[1]: var-lib-kubelet-pods-b4bf7fec\x2d5296\x2d4126\x2db0c7\x2dd1a76c24dd74-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:23:44.415229 systemd[1]: var-lib-kubelet-pods-b4bf7fec\x2d5296\x2d4126\x2db0c7\x2dd1a76c24dd74-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:23:44.847746 kubelet[3391]: I0213 15:23:44.847669 3391 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8db092e8-c7ad-4278-9be5-39ca9ed5ddfe" path="/var/lib/kubelet/pods/8db092e8-c7ad-4278-9be5-39ca9ed5ddfe/volumes" Feb 13 15:23:44.848721 kubelet[3391]: I0213 15:23:44.848672 3391 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" path="/var/lib/kubelet/pods/b4bf7fec-5296-4126-b0c7-d1a76c24dd74/volumes" Feb 13 15:23:45.329542 sshd[5009]: Connection closed by 147.75.109.163 port 40530 Feb 13 15:23:45.330943 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:45.336145 systemd[1]: sshd@27-172.31.28.93:22-147.75.109.163:40530.service: Deactivated successfully. Feb 13 15:23:45.339219 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 15:23:45.340143 systemd[1]: session-28.scope: Consumed 2.235s CPU time. Feb 13 15:23:45.343146 systemd-logind[1923]: Session 28 logged out. Waiting for processes to exit. Feb 13 15:23:45.346145 systemd-logind[1923]: Removed session 28. Feb 13 15:23:45.367015 systemd[1]: Started sshd@28-172.31.28.93:22-147.75.109.163:40544.service - OpenSSH per-connection server daemon (147.75.109.163:40544). Feb 13 15:23:45.549376 sshd[5164]: Accepted publickey for core from 147.75.109.163 port 40544 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:45.551898 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:45.561180 systemd-logind[1923]: New session 29 of user core. Feb 13 15:23:45.568857 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 15:23:45.613311 ntpd[1917]: Deleting interface #12 lxc_health, fe80::304e:97ff:fe5a:49a8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Feb 13 15:23:45.613813 ntpd[1917]: 13 Feb 15:23:45 ntpd[1917]: Deleting interface #12 lxc_health, fe80::304e:97ff:fe5a:49a8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Feb 13 15:23:45.640791 update_engine[1925]: I20250213 15:23:45.640714 1925 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:23:45.640791 update_engine[1925]: I20250213 15:23:45.640786 1925 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:23:45.641305 update_engine[1925]: I20250213 15:23:45.641052 1925 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:23:45.642051 update_engine[1925]: I20250213 15:23:45.641999 1925 omaha_request_params.cc:62] Current group set to beta Feb 13 15:23:45.642315 update_engine[1925]: I20250213 15:23:45.642148 1925 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 15:23:45.642315 update_engine[1925]: I20250213 15:23:45.642177 1925 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:23:45.642315 update_engine[1925]: I20250213 15:23:45.642213 1925 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:23:45.642315 update_engine[1925]: I20250213 15:23:45.642273 1925 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:23:45.642563 update_engine[1925]: I20250213 15:23:45.642375 1925 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:23:45.642563 update_engine[1925]: I20250213 15:23:45.642395 1925 omaha_request_action.cc:272] Request: Feb 13 15:23:45.642563 update_engine[1925]: I20250213 15:23:45.642412 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:23:45.643301 locksmithd[1942]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:23:45.644396 update_engine[1925]: I20250213 15:23:45.644330 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:23:45.644930 update_engine[1925]: I20250213 15:23:45.644855 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:23:45.677550 update_engine[1925]: E20250213 15:23:45.677439 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:23:45.677717 update_engine[1925]: I20250213 15:23:45.677608 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 15:23:46.777079 containerd[1932]: time="2025-02-13T15:23:46.776961814Z" level=info msg="StopPodSandbox for \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\"" Feb 13 15:23:46.777652 containerd[1932]: time="2025-02-13T15:23:46.777103102Z" level=info msg="TearDown network for sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" successfully" Feb 13 15:23:46.777652 containerd[1932]: time="2025-02-13T15:23:46.777126706Z" level=info msg="StopPodSandbox for \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" returns successfully" Feb 13 15:23:46.779858 containerd[1932]: time="2025-02-13T15:23:46.778830526Z" level=info msg="RemovePodSandbox for \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\"" Feb 13 15:23:46.779858 containerd[1932]: time="2025-02-13T15:23:46.778901326Z" level=info msg="Forcibly stopping sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\"" Feb 13 15:23:46.779858 containerd[1932]: time="2025-02-13T15:23:46.779031094Z" level=info msg="TearDown network for sandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" successfully" Feb 13 15:23:46.786732 containerd[1932]: time="2025-02-13T15:23:46.785740762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:23:46.786732 containerd[1932]: time="2025-02-13T15:23:46.785821918Z" level=info msg="RemovePodSandbox \"565b75f75decb437627ed3925b337449690a7e0c345f6c5a71a4254ae7c729b7\" returns successfully" Feb 13 15:23:46.786732 containerd[1932]: time="2025-02-13T15:23:46.786475318Z" level=info msg="StopPodSandbox for \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\"" Feb 13 15:23:46.786732 containerd[1932]: time="2025-02-13T15:23:46.786641182Z" level=info msg="TearDown network for sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" successfully" Feb 13 15:23:46.786732 containerd[1932]: time="2025-02-13T15:23:46.786663886Z" level=info msg="StopPodSandbox for \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" returns successfully" Feb 13 15:23:46.789578 containerd[1932]: time="2025-02-13T15:23:46.787938814Z" level=info msg="RemovePodSandbox for \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\"" Feb 13 15:23:46.789578 containerd[1932]: time="2025-02-13T15:23:46.787979638Z" level=info msg="Forcibly stopping sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\"" Feb 13 15:23:46.789578 containerd[1932]: time="2025-02-13T15:23:46.788067250Z" level=info msg="TearDown network for sandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" successfully" Feb 13 15:23:46.794337 containerd[1932]: time="2025-02-13T15:23:46.794275222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:23:46.794552 containerd[1932]: time="2025-02-13T15:23:46.794523934Z" level=info msg="RemovePodSandbox \"0ae4dd07411faeea77bd47f4939f45dcc2e4bd5c114a8c055183712da8894655\" returns successfully" Feb 13 15:23:47.080728 kubelet[3391]: E0213 15:23:47.079940 3391 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:23:48.119971 sshd[5166]: Connection closed by 147.75.109.163 port 40544 Feb 13 15:23:48.119858 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:48.131726 systemd[1]: sshd@28-172.31.28.93:22-147.75.109.163:40544.service: Deactivated successfully. Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132354 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="mount-cgroup" Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132432 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="mount-bpf-fs" Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132452 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="clean-cilium-state" Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132468 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="cilium-agent" Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132483 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="apply-sysctl-overwrites" Feb 13 15:23:48.134546 kubelet[3391]: E0213 15:23:48.132530 3391 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8db092e8-c7ad-4278-9be5-39ca9ed5ddfe" containerName="cilium-operator" Feb 13 15:23:48.134546 kubelet[3391]: I0213 15:23:48.132582 3391 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4bf7fec-5296-4126-b0c7-d1a76c24dd74" containerName="cilium-agent" Feb 13 15:23:48.134546 kubelet[3391]: I0213 15:23:48.132598 3391 memory_manager.go:354] "RemoveStaleState removing state" podUID="8db092e8-c7ad-4278-9be5-39ca9ed5ddfe" containerName="cilium-operator" Feb 13 15:23:48.140270 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:23:48.143629 systemd[1]: session-29.scope: Consumed 2.342s CPU time. Feb 13 15:23:48.149895 systemd-logind[1923]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:23:48.183048 systemd[1]: Started sshd@29-172.31.28.93:22-147.75.109.163:40560.service - OpenSSH per-connection server daemon (147.75.109.163:40560). Feb 13 15:23:48.189357 systemd-logind[1923]: Removed session 29. Feb 13 15:23:48.203583 systemd[1]: Created slice kubepods-burstable-pod04168d3a_d579_4278_800f_0becff97f8e5.slice - libcontainer container kubepods-burstable-pod04168d3a_d579_4278_800f_0becff97f8e5.slice. 
Feb 13 15:23:48.287921 kubelet[3391]: I0213 15:23:48.287870 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-bpf-maps\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288535 kubelet[3391]: I0213 15:23:48.288100 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-cni-path\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288535 kubelet[3391]: I0213 15:23:48.288142 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-host-proc-sys-net\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288535 kubelet[3391]: I0213 15:23:48.288182 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-host-proc-sys-kernel\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288535 kubelet[3391]: I0213 15:23:48.288222 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04168d3a-d579-4278-800f-0becff97f8e5-clustermesh-secrets\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288535 kubelet[3391]: I0213 15:23:48.288259 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04168d3a-d579-4278-800f-0becff97f8e5-cilium-config-path\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288294 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/04168d3a-d579-4278-800f-0becff97f8e5-cilium-ipsec-secrets\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288343 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-hostproc\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288391 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-etc-cni-netd\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288445 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-lib-modules\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288488 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04168d3a-d579-4278-800f-0becff97f8e5-hubble-tls\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.288845 kubelet[3391]: I0213 15:23:48.288626 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-xtables-lock\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.289127 kubelet[3391]: I0213 15:23:48.288699 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-cilium-run\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.289127 kubelet[3391]: I0213 15:23:48.288736 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04168d3a-d579-4278-800f-0becff97f8e5-cilium-cgroup\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.289127 kubelet[3391]: I0213 15:23:48.288770 3391 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdj2j\" (UniqueName: \"kubernetes.io/projected/04168d3a-d579-4278-800f-0becff97f8e5-kube-api-access-fdj2j\") pod \"cilium-ztfzr\" (UID: \"04168d3a-d579-4278-800f-0becff97f8e5\") " pod="kube-system/cilium-ztfzr" Feb 13 15:23:48.406688 sshd[5177]: Accepted publickey for core from 147.75.109.163 port 40560 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:48.427275 sshd-session[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:48.463316 systemd-logind[1923]: New session 30 of user core. Feb 13 15:23:48.471782 systemd[1]: Started session-30.scope - Session 30 of User core. Feb 13 15:23:48.512091 containerd[1932]: time="2025-02-13T15:23:48.512005798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztfzr,Uid:04168d3a-d579-4278-800f-0becff97f8e5,Namespace:kube-system,Attempt:0,}" Feb 13 15:23:48.556792 containerd[1932]: time="2025-02-13T15:23:48.556623059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:23:48.556792 containerd[1932]: time="2025-02-13T15:23:48.556718951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:23:48.556792 containerd[1932]: time="2025-02-13T15:23:48.556776431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:48.557436 containerd[1932]: time="2025-02-13T15:23:48.556978055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:23:48.586823 systemd[1]: Started cri-containerd-cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f.scope - libcontainer container cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f. Feb 13 15:23:48.593100 sshd[5183]: Connection closed by 147.75.109.163 port 40560 Feb 13 15:23:48.593702 sshd-session[5177]: pam_unix(sshd:session): session closed for user core Feb 13 15:23:48.602257 systemd-logind[1923]: Session 30 logged out. Waiting for processes to exit. Feb 13 15:23:48.604950 systemd[1]: sshd@29-172.31.28.93:22-147.75.109.163:40560.service: Deactivated successfully. Feb 13 15:23:48.611030 systemd[1]: session-30.scope: Deactivated successfully. Feb 13 15:23:48.623144 systemd-logind[1923]: Removed session 30. Feb 13 15:23:48.630054 systemd[1]: Started sshd@30-172.31.28.93:22-147.75.109.163:40572.service - OpenSSH per-connection server daemon (147.75.109.163:40572). Feb 13 15:23:48.673917 containerd[1932]: time="2025-02-13T15:23:48.673721423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ztfzr,Uid:04168d3a-d579-4278-800f-0becff97f8e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\"" Feb 13 15:23:48.681580 containerd[1932]: time="2025-02-13T15:23:48.681311183Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:23:48.708940 containerd[1932]: time="2025-02-13T15:23:48.708798911Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2\"" Feb 13 15:23:48.710625 containerd[1932]: time="2025-02-13T15:23:48.710558951Z" level=info msg="StartContainer for \"6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2\"" Feb 13 15:23:48.759837 systemd[1]: Started cri-containerd-6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2.scope - libcontainer container 6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2. Feb 13 15:23:48.816637 containerd[1932]: time="2025-02-13T15:23:48.816559740Z" level=info msg="StartContainer for \"6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2\" returns successfully" Feb 13 15:23:48.834350 systemd[1]: cri-containerd-6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2.scope: Deactivated successfully. Feb 13 15:23:48.842337 sshd[5222]: Accepted publickey for core from 147.75.109.163 port 40572 ssh2: RSA SHA256:R36zWpw5cakk8fauQhOcmVfR8ZJ3XJQ/P/ZhUMLO1pQ Feb 13 15:23:48.844224 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:23:48.858219 systemd-logind[1923]: New session 31 of user core. Feb 13 15:23:48.865058 systemd[1]: Started session-31.scope - Session 31 of User core. 
Feb 13 15:23:48.905059 containerd[1932]: time="2025-02-13T15:23:48.904978992Z" level=info msg="shim disconnected" id=6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2 namespace=k8s.io Feb 13 15:23:48.905746 containerd[1932]: time="2025-02-13T15:23:48.905405616Z" level=warning msg="cleaning up after shim disconnected" id=6047bbf8ceba6b155a5d608f1b4d6994f4bfa994c5d1878b34c4dd87fdb557c2 namespace=k8s.io Feb 13 15:23:48.905746 containerd[1932]: time="2025-02-13T15:23:48.905444940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:49.338056 containerd[1932]: time="2025-02-13T15:23:49.337586819Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:23:49.347234 kubelet[3391]: I0213 15:23:49.347113 3391 setters.go:600] "Node became not ready" node="ip-172-31-28-93" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:23:49Z","lastTransitionTime":"2025-02-13T15:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:23:49.365840 containerd[1932]: time="2025-02-13T15:23:49.365765027Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee\"" Feb 13 15:23:49.371314 containerd[1932]: time="2025-02-13T15:23:49.367839563Z" level=info msg="StartContainer for \"51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee\"" Feb 13 15:23:49.439808 systemd[1]: Started cri-containerd-51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee.scope - libcontainer container 51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee. Feb 13 15:23:49.488655 containerd[1932]: time="2025-02-13T15:23:49.488600099Z" level=info msg="StartContainer for \"51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee\" returns successfully" Feb 13 15:23:49.502380 systemd[1]: cri-containerd-51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee.scope: Deactivated successfully. Feb 13 15:23:49.544601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee-rootfs.mount: Deactivated successfully. 
Feb 13 15:23:49.555654 containerd[1932]: time="2025-02-13T15:23:49.555579012Z" level=info msg="shim disconnected" id=51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee namespace=k8s.io Feb 13 15:23:49.556290 containerd[1932]: time="2025-02-13T15:23:49.556189380Z" level=warning msg="cleaning up after shim disconnected" id=51cd271a6ac9af4c1b511aa8156f354c5e50cbb4beaa31caea4cdd9f2ade42ee namespace=k8s.io Feb 13 15:23:49.556290 containerd[1932]: time="2025-02-13T15:23:49.556220820Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:49.575797 containerd[1932]: time="2025-02-13T15:23:49.575693712Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:23:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:23:50.343199 containerd[1932]: time="2025-02-13T15:23:50.343096764Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:23:50.376856 containerd[1932]: time="2025-02-13T15:23:50.376608012Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7\"" Feb 13 15:23:50.378912 containerd[1932]: time="2025-02-13T15:23:50.378796188Z" level=info msg="StartContainer for \"5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7\"" Feb 13 15:23:50.435207 systemd[1]: run-containerd-runc-k8s.io-5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7-runc.Ev4xEs.mount: Deactivated successfully. Feb 13 15:23:50.448833 systemd[1]: Started cri-containerd-5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7.scope - libcontainer container 5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7. Feb 13 15:23:50.505028 containerd[1932]: time="2025-02-13T15:23:50.504919704Z" level=info msg="StartContainer for \"5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7\" returns successfully" Feb 13 15:23:50.509920 systemd[1]: cri-containerd-5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7.scope: Deactivated successfully. Feb 13 15:23:50.547546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7-rootfs.mount: Deactivated successfully. 
Feb 13 15:23:50.562724 containerd[1932]: time="2025-02-13T15:23:50.562636393Z" level=info msg="shim disconnected" id=5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7 namespace=k8s.io Feb 13 15:23:50.562724 containerd[1932]: time="2025-02-13T15:23:50.562716973Z" level=warning msg="cleaning up after shim disconnected" id=5726391f38f9fe033cb7de18b1b836361dd40b343333da3e8ca11b2fa3fb29a7 namespace=k8s.io Feb 13 15:23:50.563930 containerd[1932]: time="2025-02-13T15:23:50.562738933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:51.360415 containerd[1932]: time="2025-02-13T15:23:51.360110509Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:23:51.390684 containerd[1932]: time="2025-02-13T15:23:51.390474457Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9\"" Feb 13 15:23:51.392430 containerd[1932]: time="2025-02-13T15:23:51.391335469Z" level=info msg="StartContainer for \"6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9\"" Feb 13 15:23:51.448355 systemd[1]: run-containerd-runc-k8s.io-6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9-runc.eudnx7.mount: Deactivated successfully. Feb 13 15:23:51.465806 systemd[1]: Started cri-containerd-6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9.scope - libcontainer container 6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9. Feb 13 15:23:51.509924 systemd[1]: cri-containerd-6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9.scope: Deactivated successfully. Feb 13 15:23:51.513985 containerd[1932]: time="2025-02-13T15:23:51.513750205Z" level=info msg="StartContainer for \"6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9\" returns successfully" Feb 13 15:23:51.551477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9-rootfs.mount: Deactivated successfully. 
Feb 13 15:23:51.561826 containerd[1932]: time="2025-02-13T15:23:51.561751898Z" level=info msg="shim disconnected" id=6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9 namespace=k8s.io Feb 13 15:23:51.562400 containerd[1932]: time="2025-02-13T15:23:51.562133582Z" level=warning msg="cleaning up after shim disconnected" id=6ef2fab547b0d6f2c129b801f882b59743e26fd9701d73d0d5b79513b1322fb9 namespace=k8s.io Feb 13 15:23:51.562400 containerd[1932]: time="2025-02-13T15:23:51.562164086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:23:52.082762 kubelet[3391]: E0213 15:23:52.082709 3391 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:23:52.360209 containerd[1932]: time="2025-02-13T15:23:52.359848106Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:23:52.389244 containerd[1932]: time="2025-02-13T15:23:52.389181854Z" level=info msg="CreateContainer within sandbox \"cfd6ae9ec0de27ff1f4a09d55fe5b3950ede2fcc332a629371dd52f0de0f2a4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce\"" Feb 13 15:23:52.393537 containerd[1932]: time="2025-02-13T15:23:52.391243226Z" level=info msg="StartContainer for \"fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce\"" Feb 13 15:23:52.462874 systemd[1]: Started cri-containerd-fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce.scope - libcontainer container fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce. Feb 13 15:23:52.527752 containerd[1932]: time="2025-02-13T15:23:52.527664290Z" level=info msg="StartContainer for \"fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce\" returns successfully" Feb 13 15:23:53.381550 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:23:55.369676 systemd[1]: run-containerd-runc-k8s.io-fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce-runc.nJyZTM.mount: Deactivated successfully. Feb 13 15:23:55.644119 update_engine[1925]: I20250213 15:23:55.643951 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:23:55.644687 update_engine[1925]: I20250213 15:23:55.644341 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:23:55.645148 update_engine[1925]: I20250213 15:23:55.644711 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:23:55.646008 update_engine[1925]: E20250213 15:23:55.645857 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:23:55.646008 update_engine[1925]: I20250213 15:23:55.645964 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 15:23:57.736140 (udev-worker)[6023]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:23:57.741656 (udev-worker)[6025]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:23:57.746568 systemd-networkd[1852]: lxc_health: Link UP Feb 13 15:23:57.774572 systemd-networkd[1852]: lxc_health: Gained carrier Feb 13 15:23:58.554450 kubelet[3391]: I0213 15:23:58.554348 3391 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ztfzr" podStartSLOduration=10.554325224 podStartE2EDuration="10.554325224s" podCreationTimestamp="2025-02-13 15:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:23:53.403976103 +0000 UTC m=+126.807690727" watchObservedRunningTime="2025-02-13 15:23:58.554325224 +0000 UTC m=+131.958039836" Feb 13 15:23:59.816167 systemd-networkd[1852]: lxc_health: Gained IPv6LL Feb 13 15:24:02.613435 ntpd[1917]: Listen normally on 15 lxc_health [fe80::8417:49ff:fe25:7f47%14]:123 Feb 13 15:24:02.614078 ntpd[1917]: 13 Feb 15:24:02 ntpd[1917]: Listen normally on 15 lxc_health [fe80::8417:49ff:fe25:7f47%14]:123 Feb 13 15:24:04.687013 systemd[1]: run-containerd-runc-k8s.io-fea9ff1a2e681ea41b7b8b722971cd838f77bb5732fd9804f3ad934848c8c7ce-runc.9RWOcL.mount: Deactivated successfully. Feb 13 15:24:04.832494 sshd[5275]: Connection closed by 147.75.109.163 port 40572 Feb 13 15:24:04.833616 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:04.842302 systemd-logind[1923]: Session 31 logged out. Waiting for processes to exit. Feb 13 15:24:04.844887 systemd[1]: sshd@30-172.31.28.93:22-147.75.109.163:40572.service: Deactivated successfully. Feb 13 15:24:04.854445 systemd[1]: session-31.scope: Deactivated successfully. Feb 13 15:24:04.862208 systemd-logind[1923]: Removed session 31. Feb 13 15:24:05.642858 update_engine[1925]: I20250213 15:24:05.642046 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:24:05.642858 update_engine[1925]: I20250213 15:24:05.642400 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:24:05.642858 update_engine[1925]: I20250213 15:24:05.642787 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:24:05.643911 update_engine[1925]: E20250213 15:24:05.643864 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:24:05.644078 update_engine[1925]: I20250213 15:24:05.644045 1925 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 15:24:15.651555 update_engine[1925]: I20250213 15:24:15.651203 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:24:15.652099 update_engine[1925]: I20250213 15:24:15.651592 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:24:15.652099 update_engine[1925]: I20250213 15:24:15.651927 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:24:15.652693 update_engine[1925]: E20250213 15:24:15.652635 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:24:15.652875 update_engine[1925]: I20250213 15:24:15.652724 1925 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:24:15.652875 update_engine[1925]: I20250213 15:24:15.652746 1925 omaha_request_action.cc:617] Omaha request response: Feb 13 15:24:15.652875 update_engine[1925]: E20250213 15:24:15.652854 1925 omaha_request_action.cc:636] Omaha request network transfer failed. 
Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652889 1925 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652905 1925 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652922 1925 update_attempter.cc:306] Processing Done. Feb 13 15:24:15.653045 update_engine[1925]: E20250213 15:24:15.652948 1925 update_attempter.cc:619] Update failed. Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652966 1925 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652980 1925 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 15:24:15.653045 update_engine[1925]: I20250213 15:24:15.652997 1925 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 15:24:15.653360 update_engine[1925]: I20250213 15:24:15.653101 1925 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:24:15.653360 update_engine[1925]: I20250213 15:24:15.653137 1925 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:24:15.653360 update_engine[1925]: I20250213 15:24:15.653155 1925 omaha_request_action.cc:272] Request: Feb 13 15:24:15.653360 update_engine[1925]: [Omaha request XML body not captured: markup was stripped from the log] Feb 13 15:24:15.653360 update_engine[1925]: I20250213 15:24:15.653172 1925 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:24:15.654102 update_engine[1925]: I20250213 15:24:15.653426 1925 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:24:15.654102 update_engine[1925]: I20250213 15:24:15.653887 1925 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:24:15.654628 update_engine[1925]: E20250213 15:24:15.654413 1925 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654529 1925 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654552 1925 omaha_request_action.cc:617] Omaha request response: Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654570 1925 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654585 1925 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654601 1925 update_attempter.cc:306] Processing Done. Feb 13 15:24:15.654628 update_engine[1925]: I20250213 15:24:15.654618 1925 update_attempter.cc:310] Error event sent.
Feb 13 15:24:15.655029 locksmithd[1942]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 15:24:15.655491 update_engine[1925]: I20250213 15:24:15.654639 1925 update_check_scheduler.cc:74] Next update check in 46m36s Feb 13 15:24:15.655739 locksmithd[1942]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 15:24:19.637039 kubelet[3391]: E0213 15:24:19.636897 3391 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:24:20.384713 systemd[1]: cri-containerd-f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e.scope: Deactivated successfully. Feb 13 15:24:20.386930 systemd[1]: cri-containerd-f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e.scope: Consumed 4.393s CPU time, 18.2M memory peak, 0B memory swap peak. Feb 13 15:24:20.430016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e-rootfs.mount: Deactivated successfully. Feb 13 15:24:20.439566 containerd[1932]: time="2025-02-13T15:24:20.439441409Z" level=info msg="shim disconnected" id=f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e namespace=k8s.io Feb 13 15:24:20.439566 containerd[1932]: time="2025-02-13T15:24:20.439549865Z" level=warning msg="cleaning up after shim disconnected" id=f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e namespace=k8s.io Feb 13 15:24:20.439566 containerd[1932]: time="2025-02-13T15:24:20.439571213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:20.462008 containerd[1932]: time="2025-02-13T15:24:20.460066637Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:24:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:24:21.459112 kubelet[3391]: I0213 15:24:21.458795 3391 scope.go:117] "RemoveContainer" containerID="f0ca1961fa4f8ac30c867f509abfe4f2f036316b545fd9cd83449b36508d495e" Feb 13 15:24:21.461859 containerd[1932]: time="2025-02-13T15:24:21.461808078Z" level=info msg="CreateContainer within sandbox \"a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 15:24:21.489016 containerd[1932]: time="2025-02-13T15:24:21.488938650Z" level=info msg="CreateContainer within sandbox \"a6ec2e2f55dc96f37ed1d0eb35ee2b6790869456aefca9eebc9b528b7338cb77\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92\"" Feb 13 15:24:21.490183 containerd[1932]: time="2025-02-13T15:24:21.489795318Z" level=info msg="StartContainer for \"768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92\"" Feb 13 15:24:21.544806 systemd[1]: run-containerd-runc-k8s.io-768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92-runc.r3jBK0.mount: Deactivated successfully. Feb 13 15:24:21.553842 systemd[1]: Started cri-containerd-768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92.scope - libcontainer container 768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92. 
Feb 13 15:24:21.620364 containerd[1932]: time="2025-02-13T15:24:21.620285551Z" level=info msg="StartContainer for \"768fd8cd5ef3652013840948b12f7655aff54b7cf24702c3cb7afa84474b7a92\" returns successfully" Feb 13 15:24:23.841892 systemd[1]: cri-containerd-92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc.scope: Deactivated successfully. Feb 13 15:24:23.843080 systemd[1]: cri-containerd-92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc.scope: Consumed 3.130s CPU time, 16.1M memory peak, 0B memory swap peak. Feb 13 15:24:23.882123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc-rootfs.mount: Deactivated successfully. Feb 13 15:24:23.894017 containerd[1932]: time="2025-02-13T15:24:23.893922502Z" level=info msg="shim disconnected" id=92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc namespace=k8s.io Feb 13 15:24:23.894017 containerd[1932]: time="2025-02-13T15:24:23.893998258Z" level=warning msg="cleaning up after shim disconnected" id=92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc namespace=k8s.io Feb 13 15:24:23.894017 containerd[1932]: time="2025-02-13T15:24:23.894020314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:24:24.471663 kubelet[3391]: I0213 15:24:24.471615 3391 scope.go:117] "RemoveContainer" containerID="92a1065b591eed33f537e4300ecffb9803975357cedc0b0f8a6e2a68009008fc" Feb 13 15:24:24.474965 containerd[1932]: time="2025-02-13T15:24:24.474701613Z" level=info msg="CreateContainer within sandbox \"eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 15:24:24.498734 containerd[1932]: time="2025-02-13T15:24:24.498670557Z" level=info msg="CreateContainer within sandbox \"eadd8fee6f572dd40457ff88510b8adab116e4f2b7188c4d7e0ffb878ce64d66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5b0d1246da71a5350d0ebc19b44bb9b0d78f26193e8a392f770111360ba192d4\"" Feb 13 15:24:24.499888 containerd[1932]: time="2025-02-13T15:24:24.499475565Z" level=info msg="StartContainer for \"5b0d1246da71a5350d0ebc19b44bb9b0d78f26193e8a392f770111360ba192d4\"" Feb 13 15:24:24.556824 systemd[1]: Started cri-containerd-5b0d1246da71a5350d0ebc19b44bb9b0d78f26193e8a392f770111360ba192d4.scope - libcontainer container 5b0d1246da71a5350d0ebc19b44bb9b0d78f26193e8a392f770111360ba192d4. Feb 13 15:24:24.623575 containerd[1932]: time="2025-02-13T15:24:24.623335882Z" level=info msg="StartContainer for \"5b0d1246da71a5350d0ebc19b44bb9b0d78f26193e8a392f770111360ba192d4\" returns successfully" Feb 13 15:24:29.639007 kubelet[3391]: E0213 15:24:29.638663 3391 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"