Feb 13 19:01:26.175716 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:01:26.175770 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:01:26.175796 kernel: KASLR disabled due to lack of seed
Feb 13 19:01:26.175815 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:01:26.175832 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:01:26.175848 kernel: secureboot: Secure boot disabled
Feb 13 19:01:26.175869 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:01:26.175885 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:01:26.175903 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:01:26.175919 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:01:26.175942 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:01:26.175963 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:01:26.175981 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:01:26.175997 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:01:26.176016 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:01:26.176037 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:01:26.176054 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:01:26.176070 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:01:26.176087 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:01:26.176103 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:01:26.176119 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:01:26.176136 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:01:26.176152 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:01:26.176169 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:01:26.176186 kernel: Zone ranges:
Feb 13 19:01:26.176202 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:01:26.176224 kernel: DMA32 empty
Feb 13 19:01:26.176281 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:01:26.176312 kernel: Movable zone start for each node
Feb 13 19:01:26.176366 kernel: Early memory node ranges
Feb 13 19:01:26.176385 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:01:26.176402 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:01:26.176419 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:01:26.176435 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:01:26.176452 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:01:26.176469 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:01:26.176486 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:01:26.176503 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:01:26.176531 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:01:26.176549 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:01:26.176572 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:01:26.176591 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:01:26.176610 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:01:26.176631 kernel: psci: Trusted OS migration not required
Feb 13 19:01:26.176649 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:01:26.176666 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:01:26.176703 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:01:26.176725 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:01:26.176744 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:01:26.176762 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:01:26.176780 kernel: CPU features: detected: Spectre-v2
Feb 13 19:01:26.176798 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:01:26.176815 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:01:26.176832 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:01:26.176852 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:01:26.176878 kernel: alternatives: applying boot alternatives
Feb 13 19:01:26.176898 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:01:26.176918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:01:26.176937 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:01:26.176956 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:01:26.176974 kernel: Fallback order for Node 0: 0
Feb 13 19:01:26.176991 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:01:26.177008 kernel: Policy zone: Normal
Feb 13 19:01:26.177027 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:01:26.177043 kernel: software IO TLB: area num 2.
Feb 13 19:01:26.177065 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:01:26.177084 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:01:26.177101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:01:26.177118 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:01:26.177136 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:01:26.177154 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:01:26.177171 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:01:26.177189 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:01:26.177206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:01:26.177223 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:01:26.177240 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:01:26.177303 kernel: GICv3: 96 SPIs implemented
Feb 13 19:01:26.177321 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:01:26.177338 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:01:26.177355 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:01:26.177372 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:01:26.177389 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:01:26.177406 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:01:26.177423 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:01:26.177440 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:01:26.177457 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:01:26.177474 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:01:26.177491 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:01:26.177514 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:01:26.177531 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:01:26.177549 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:01:26.177566 kernel: Console: colour dummy device 80x25
Feb 13 19:01:26.177584 kernel: printk: console [tty1] enabled
Feb 13 19:01:26.177601 kernel: ACPI: Core revision 20230628
Feb 13 19:01:26.177619 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:01:26.177637 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:01:26.177654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:01:26.177672 kernel: landlock: Up and running.
Feb 13 19:01:26.177693 kernel: SELinux: Initializing.
Feb 13 19:01:26.177711 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:26.177728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:26.177746 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:01:26.177764 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:01:26.177781 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:01:26.177800 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:01:26.177818 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:01:26.177839 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:01:26.177857 kernel: Remapping and enabling EFI services.
Feb 13 19:01:26.177874 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:01:26.177891 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:01:26.177909 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:01:26.177926 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:01:26.177944 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:01:26.177962 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:01:26.177979 kernel: SMP: Total of 2 processors activated.
Feb 13 19:01:26.177997 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:01:26.178022 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:01:26.178039 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:01:26.178069 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:01:26.178093 kernel: alternatives: applying system-wide alternatives
Feb 13 19:01:26.178112 kernel: devtmpfs: initialized
Feb 13 19:01:26.178130 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:01:26.178149 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:01:26.178168 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:01:26.178187 kernel: SMBIOS 3.0.0 present.
Feb 13 19:01:26.178210 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:01:26.178228 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:01:26.178271 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:01:26.178295 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:01:26.178314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:01:26.178332 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:01:26.178351 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Feb 13 19:01:26.178375 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:01:26.178394 kernel: cpuidle: using governor menu
Feb 13 19:01:26.178412 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:01:26.178430 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:01:26.178448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:01:26.178466 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:01:26.178484 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:01:26.178502 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:01:26.178521 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:01:26.178543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:01:26.178561 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:01:26.178579 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:01:26.178597 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:01:26.178615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:01:26.178634 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:01:26.178652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:01:26.178670 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:01:26.178689 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:01:26.178711 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:01:26.178730 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:01:26.178748 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:01:26.178767 kernel: ACPI: Interpreter enabled
Feb 13 19:01:26.178785 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:01:26.178804 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:01:26.178822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:01:26.179194 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:01:26.179531 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:01:26.179743 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:01:26.179942 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:01:26.180156 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:01:26.180184 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:01:26.180203 kernel: acpiphp: Slot [1] registered
Feb 13 19:01:26.180222 kernel: acpiphp: Slot [2] registered
Feb 13 19:01:26.180241 kernel: acpiphp: Slot [3] registered
Feb 13 19:01:26.180299 kernel: acpiphp: Slot [4] registered
Feb 13 19:01:26.180320 kernel: acpiphp: Slot [5] registered
Feb 13 19:01:26.180338 kernel: acpiphp: Slot [6] registered
Feb 13 19:01:26.180358 kernel: acpiphp: Slot [7] registered
Feb 13 19:01:26.180376 kernel: acpiphp: Slot [8] registered
Feb 13 19:01:26.180396 kernel: acpiphp: Slot [9] registered
Feb 13 19:01:26.180414 kernel: acpiphp: Slot [10] registered
Feb 13 19:01:26.180432 kernel: acpiphp: Slot [11] registered
Feb 13 19:01:26.180451 kernel: acpiphp: Slot [12] registered
Feb 13 19:01:26.180470 kernel: acpiphp: Slot [13] registered
Feb 13 19:01:26.180495 kernel: acpiphp: Slot [14] registered
Feb 13 19:01:26.180514 kernel: acpiphp: Slot [15] registered
Feb 13 19:01:26.180532 kernel: acpiphp: Slot [16] registered
Feb 13 19:01:26.180550 kernel: acpiphp: Slot [17] registered
Feb 13 19:01:26.180569 kernel: acpiphp: Slot [18] registered
Feb 13 19:01:26.180587 kernel: acpiphp: Slot [19] registered
Feb 13 19:01:26.180605 kernel: acpiphp: Slot [20] registered
Feb 13 19:01:26.180623 kernel: acpiphp: Slot [21] registered
Feb 13 19:01:26.180642 kernel: acpiphp: Slot [22] registered
Feb 13 19:01:26.180665 kernel: acpiphp: Slot [23] registered
Feb 13 19:01:26.180704 kernel: acpiphp: Slot [24] registered
Feb 13 19:01:26.180725 kernel: acpiphp: Slot [25] registered
Feb 13 19:01:26.180743 kernel: acpiphp: Slot [26] registered
Feb 13 19:01:26.180762 kernel: acpiphp: Slot [27] registered
Feb 13 19:01:26.180781 kernel: acpiphp: Slot [28] registered
Feb 13 19:01:26.180799 kernel: acpiphp: Slot [29] registered
Feb 13 19:01:26.180817 kernel: acpiphp: Slot [30] registered
Feb 13 19:01:26.180836 kernel: acpiphp: Slot [31] registered
Feb 13 19:01:26.180854 kernel: PCI host bridge to bus 0000:00
Feb 13 19:01:26.181139 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:01:26.181418 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:01:26.181637 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:01:26.181839 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:01:26.182330 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:01:26.184883 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:01:26.185160 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:01:26.185522 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:01:26.185746 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:01:26.185949 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:01:26.186165 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:01:26.186437 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:01:26.186768 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:01:26.187015 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:01:26.187232 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:01:26.187538 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:01:26.187768 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:01:26.187991 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:01:26.188375 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:01:26.188648 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:01:26.188928 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:01:26.189158 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:01:26.189969 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:01:26.190013 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:01:26.190032 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:01:26.190051 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:01:26.190069 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:01:26.190087 kernel: iommu: Default domain type: Translated
Feb 13 19:01:26.190118 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:01:26.190136 kernel: efivars: Registered efivars operations
Feb 13 19:01:26.190154 kernel: vgaarb: loaded
Feb 13 19:01:26.190173 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:01:26.190192 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:01:26.190211 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:01:26.190229 kernel: pnp: PnP ACPI init
Feb 13 19:01:26.192423 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:01:26.192467 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:01:26.192486 kernel: NET: Registered PF_INET protocol family
Feb 13 19:01:26.192505 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:01:26.192523 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:01:26.192541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:01:26.192559 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:01:26.192578 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:01:26.192595 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:01:26.192614 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:26.192636 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:26.192654 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:01:26.192684 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:01:26.192709 kernel: kvm [1]: HYP mode not available
Feb 13 19:01:26.192728 kernel: Initialise system trusted keyrings
Feb 13 19:01:26.192746 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:01:26.192765 kernel: Key type asymmetric registered
Feb 13 19:01:26.192782 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:01:26.192800 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:01:26.192824 kernel: io scheduler mq-deadline registered
Feb 13 19:01:26.192842 kernel: io scheduler kyber registered
Feb 13 19:01:26.192860 kernel: io scheduler bfq registered
Feb 13 19:01:26.193087 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:01:26.193114 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:01:26.193132 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:01:26.193151 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:01:26.193186 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:01:26.193213 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:01:26.193232 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:01:26.193472 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:01:26.193498 kernel: printk: console [ttyS0] disabled
Feb 13 19:01:26.193517 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:01:26.193535 kernel: printk: console [ttyS0] enabled
Feb 13 19:01:26.193553 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:01:26.193571 kernel: thunder_xcv, ver 1.0
Feb 13 19:01:26.193589 kernel: thunder_bgx, ver 1.0
Feb 13 19:01:26.193607 kernel: nicpf, ver 1.0
Feb 13 19:01:26.193631 kernel: nicvf, ver 1.0
Feb 13 19:01:26.193847 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:01:26.194035 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:01:25 UTC (1739473285)
Feb 13 19:01:26.194061 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:01:26.194079 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:01:26.194097 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:01:26.194115 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:01:26.194138 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:01:26.194156 kernel: Segment Routing with IPv6
Feb 13 19:01:26.194174 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:01:26.194193 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:01:26.194211 kernel: Key type dns_resolver registered
Feb 13 19:01:26.194229 kernel: registered taskstats version 1
Feb 13 19:01:26.194997 kernel: Loading compiled-in X.509 certificates
Feb 13 19:01:26.195030 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:01:26.195049 kernel: Key type .fscrypt registered
Feb 13 19:01:26.195067 kernel: Key type fscrypt-provisioning registered
Feb 13 19:01:26.195096 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:01:26.195115 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:01:26.195133 kernel: ima: No architecture policies found
Feb 13 19:01:26.195151 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:01:26.195171 kernel: clk: Disabling unused clocks
Feb 13 19:01:26.195190 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:01:26.195209 kernel: Run /init as init process
Feb 13 19:01:26.195227 kernel: with arguments:
Feb 13 19:01:26.195273 kernel: /init
Feb 13 19:01:26.195302 kernel: with environment:
Feb 13 19:01:26.195320 kernel: HOME=/
Feb 13 19:01:26.195339 kernel: TERM=linux
Feb 13 19:01:26.195357 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:01:26.195379 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:01:26.195403 systemd[1]: Detected virtualization amazon.
Feb 13 19:01:26.195423 systemd[1]: Detected architecture arm64.
Feb 13 19:01:26.195448 systemd[1]: Running in initrd.
Feb 13 19:01:26.195468 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:01:26.195488 systemd[1]: Hostname set to .
Feb 13 19:01:26.195508 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:01:26.195528 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:01:26.195548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:01:26.195567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:01:26.195589 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:01:26.195613 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:01:26.195633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:01:26.195654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:01:26.195676 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:01:26.195697 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:01:26.195717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:01:26.195736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:01:26.195761 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:01:26.195781 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:01:26.195800 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:01:26.195820 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:01:26.195839 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:01:26.195859 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:01:26.195879 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:01:26.195898 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:01:26.195918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:01:26.195943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:01:26.195963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:01:26.195982 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:01:26.196002 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:01:26.196021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:01:26.196041 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:01:26.196060 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:01:26.196080 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:01:26.196104 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:01:26.196124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:26.196144 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:01:26.196209 systemd-journald[253]: Collecting audit messages is disabled.
Feb 13 19:01:26.196279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:01:26.196301 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:01:26.196323 systemd-journald[253]: Journal started
Feb 13 19:01:26.196365 systemd-journald[253]: Runtime Journal (/run/log/journal/ec2abb106d4a787e9d4827951012fe13) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:01:26.184064 systemd-modules-load[254]: Inserted module 'overlay'
Feb 13 19:01:26.216420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:01:26.216501 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:01:26.224304 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:01:26.230300 kernel: Bridge firewalling registered
Feb 13 19:01:26.232378 systemd-modules-load[254]: Inserted module 'br_netfilter'
Feb 13 19:01:26.233479 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:26.238345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:01:26.246337 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:01:26.257727 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:26.267145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:01:26.274047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:01:26.276772 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:01:26.310676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:01:26.323324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:01:26.334345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:01:26.344669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:01:26.350548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:26.364772 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:01:26.411061 dracut-cmdline[291]: dracut-dracut-053
Feb 13 19:01:26.421288 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:01:26.439547 systemd-resolved[287]: Positive Trust Anchors:
Feb 13 19:01:26.439608 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:01:26.439671 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:01:26.606290 kernel: SCSI subsystem initialized
Feb 13 19:01:26.614497 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:01:26.628319 kernel: iscsi: registered transport (tcp)
Feb 13 19:01:26.652287 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:01:26.652360 kernel: QLogic iSCSI HBA Driver
Feb 13 19:01:26.701298 kernel: random: crng init done
Feb 13 19:01:26.701560 systemd-resolved[287]: Defaulting to hostname 'linux'.
Feb 13 19:01:26.705305 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:01:26.708623 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:01:26.746482 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:01:26.753743 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:01:26.804831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:01:26.804911 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:01:26.804940 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:01:26.875317 kernel: raid6: neonx8 gen() 6546 MB/s
Feb 13 19:01:26.892310 kernel: raid6: neonx4 gen() 6294 MB/s
Feb 13 19:01:26.909302 kernel: raid6: neonx2 gen() 5324 MB/s
Feb 13 19:01:26.926300 kernel: raid6: neonx1 gen() 3886 MB/s
Feb 13 19:01:26.943300 kernel: raid6: int64x8 gen() 3769 MB/s
Feb 13 19:01:26.960302 kernel: raid6: int64x4 gen() 3665 MB/s
Feb 13 19:01:26.977303 kernel: raid6: int64x2 gen() 3539 MB/s
Feb 13 19:01:26.995084 kernel: raid6: int64x1 gen() 2749 MB/s
Feb 13 19:01:26.995152 kernel: raid6: using algorithm neonx8 gen() 6546 MB/s
Feb 13 19:01:27.013083 kernel: raid6: .... xor() 4893 MB/s, rmw enabled
Feb 13 19:01:27.013160 kernel: raid6: using neon recovery algorithm
Feb 13 19:01:27.021298 kernel: xor: measuring software checksum speed
Feb 13 19:01:27.022294 kernel: 8regs : 10145 MB/sec
Feb 13 19:01:27.024508 kernel: 32regs : 10871 MB/sec
Feb 13 19:01:27.024571 kernel: arm64_neon : 9476 MB/sec
Feb 13 19:01:27.024610 kernel: xor: using function: 32regs (10871 MB/sec)
Feb 13 19:01:27.113308 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:01:27.136364 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:01:27.146588 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:01:27.192473 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 19:01:27.202751 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:01:27.214343 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:01:27.255842 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 19:01:27.319874 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:01:27.331567 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:01:27.456424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:01:27.467561 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:01:27.520068 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:01:27.526162 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:01:27.531524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:01:27.534732 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:01:27.552710 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:01:27.588352 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:01:27.671349 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:01:27.671415 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:01:27.715140 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:01:27.715940 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:01:27.716212 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d3:f3:8c:eb:9d
Feb 13 19:01:27.687097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:01:27.687415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:27.690191 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:27.735609 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:01:27.735665 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:01:27.692688 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:01:27.693006 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:27.695837 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:27.720224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:27.722910 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:01:27.754331 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:01:27.766317 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:01:27.766396 kernel: GPT:9289727 != 16777215
Feb 13 19:01:27.766422 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:01:27.766448 kernel: GPT:9289727 != 16777215
Feb 13 19:01:27.766471 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:01:27.766495 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:27.780310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:27.789642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:27.843961 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:27.878336 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Feb 13 19:01:27.913513 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Feb 13 19:01:27.953862 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:01:27.981775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:01:28.029772 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:01:28.045897 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:01:28.051398 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:01:28.069674 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:01:28.090076 disk-uuid[664]: Primary Header is updated.
Feb 13 19:01:28.090076 disk-uuid[664]: Secondary Entries is updated.
Feb 13 19:01:28.090076 disk-uuid[664]: Secondary Header is updated.
Feb 13 19:01:28.096417 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:29.121675 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:29.122469 disk-uuid[665]: The operation has completed successfully.
Feb 13 19:01:29.326855 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:01:29.329367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:01:29.367566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:01:29.381013 sh[926]: Success
Feb 13 19:01:29.408294 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:01:29.504397 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:01:29.519503 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:01:29.524307 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:01:29.567742 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:01:29.567818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:29.567859 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:01:29.570682 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:01:29.570752 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:01:29.702298 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:01:29.740839 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:01:29.743462 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:01:29.751635 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:01:29.761750 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:01:29.800132 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:01:29.800205 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:29.801422 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:29.814276 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:29.834933 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:01:29.838459 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:01:29.851958 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:01:29.864617 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:01:29.947193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:01:29.961592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:01:30.031040 systemd-networkd[1118]: lo: Link UP
Feb 13 19:01:30.031642 systemd-networkd[1118]: lo: Gained carrier
Feb 13 19:01:30.034941 systemd-networkd[1118]: Enumeration completed
Feb 13 19:01:30.036596 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:30.036604 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:01:30.039376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:01:30.042715 systemd[1]: Reached target network.target - Network.
Feb 13 19:01:30.049789 systemd-networkd[1118]: eth0: Link UP
Feb 13 19:01:30.049797 systemd-networkd[1118]: eth0: Gained carrier
Feb 13 19:01:30.049815 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:30.080397 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.22.173/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:01:30.243536 ignition[1054]: Ignition 2.20.0
Feb 13 19:01:30.243565 ignition[1054]: Stage: fetch-offline
Feb 13 19:01:30.243998 ignition[1054]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:30.244023 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:30.245536 ignition[1054]: Ignition finished successfully
Feb 13 19:01:30.253444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:01:30.268713 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:01:30.290395 ignition[1128]: Ignition 2.20.0
Feb 13 19:01:30.290425 ignition[1128]: Stage: fetch
Feb 13 19:01:30.292005 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:30.292032 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:30.292441 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:30.313192 ignition[1128]: PUT result: OK
Feb 13 19:01:30.316282 ignition[1128]: parsed url from cmdline: ""
Feb 13 19:01:30.316304 ignition[1128]: no config URL provided
Feb 13 19:01:30.316320 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:01:30.316346 ignition[1128]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:01:30.316381 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:30.317995 ignition[1128]: PUT result: OK
Feb 13 19:01:30.318115 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:01:30.325024 ignition[1128]: GET result: OK
Feb 13 19:01:30.325181 ignition[1128]: parsing config with SHA512: fb5f22a4ce22009a319317a2f553f8079525426ffd1c34d76d8b73568ff2864ad8f3a6b76965ad8cf270731a62197751fb659a1ca11bac8a6e8ee66dfc57a0af
Feb 13 19:01:30.336613 unknown[1128]: fetched base config from "system"
Feb 13 19:01:30.336645 unknown[1128]: fetched base config from "system"
Feb 13 19:01:30.337748 ignition[1128]: fetch: fetch complete
Feb 13 19:01:30.336677 unknown[1128]: fetched user config from "aws"
Feb 13 19:01:30.337761 ignition[1128]: fetch: fetch passed
Feb 13 19:01:30.343214 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:01:30.337852 ignition[1128]: Ignition finished successfully
Feb 13 19:01:30.361545 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:01:30.384991 ignition[1134]: Ignition 2.20.0
Feb 13 19:01:30.385522 ignition[1134]: Stage: kargs
Feb 13 19:01:30.386114 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:30.386139 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:30.387013 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:30.391107 ignition[1134]: PUT result: OK
Feb 13 19:01:30.399092 ignition[1134]: kargs: kargs passed
Feb 13 19:01:30.399189 ignition[1134]: Ignition finished successfully
Feb 13 19:01:30.403651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:01:30.419502 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:01:30.441155 ignition[1140]: Ignition 2.20.0
Feb 13 19:01:30.441688 ignition[1140]: Stage: disks
Feb 13 19:01:30.442301 ignition[1140]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:30.442327 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:30.442501 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:30.444911 ignition[1140]: PUT result: OK
Feb 13 19:01:30.454596 ignition[1140]: disks: disks passed
Feb 13 19:01:30.454692 ignition[1140]: Ignition finished successfully
Feb 13 19:01:30.459395 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:01:30.464337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:01:30.468386 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:01:30.470662 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:01:30.472541 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:01:30.474421 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:01:30.490626 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:01:30.539417 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:01:30.546394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:01:30.565589 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:01:30.645280 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:01:30.646085 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:01:30.647348 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:01:30.663457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:01:30.670520 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:01:30.673642 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:01:30.673716 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:01:30.673765 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:01:30.696274 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167)
Feb 13 19:01:30.700046 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:01:30.700092 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:30.701320 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:30.708689 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:01:30.720224 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:30.727663 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:01:30.734441 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:01:31.149664 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:01:31.157542 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:01:31.176021 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:01:31.184185 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:01:31.531767 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:01:31.539424 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:01:31.548545 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:01:31.564659 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:01:31.569379 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:01:31.606353 ignition[1280]: INFO : Ignition 2.20.0
Feb 13 19:01:31.606353 ignition[1280]: INFO : Stage: mount
Feb 13 19:01:31.609816 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:31.609816 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:31.614736 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:31.617273 ignition[1280]: INFO : PUT result: OK
Feb 13 19:01:31.621987 ignition[1280]: INFO : mount: mount passed
Feb 13 19:01:31.623808 ignition[1280]: INFO : Ignition finished successfully
Feb 13 19:01:31.623617 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:01:31.629824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:01:31.653561 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:01:31.673599 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:01:31.712281 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291)
Feb 13 19:01:31.715717 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:01:31.715770 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:31.715795 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:31.723308 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:31.725417 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:01:31.764849 ignition[1308]: INFO : Ignition 2.20.0
Feb 13 19:01:31.764849 ignition[1308]: INFO : Stage: files
Feb 13 19:01:31.768159 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:31.768159 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:31.768159 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:31.775042 ignition[1308]: INFO : PUT result: OK
Feb 13 19:01:31.779129 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:01:31.783753 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:01:31.783753 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:01:31.815708 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:01:31.818496 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:01:31.821304 unknown[1308]: wrote ssh authorized keys file for user: core
Feb 13 19:01:31.825543 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:01:31.828927 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:01:31.828927 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:01:31.864381 systemd-networkd[1118]: eth0: Gained IPv6LL
Feb 13 19:01:31.925565 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:01:32.091675 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:01:32.091675 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:01:32.098435 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:01:32.550155 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:01:32.691724 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:01:32.691724 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:32.698273 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:01:33.106797 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:01:33.436681 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:01:33.440823 ignition[1308]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:01:33.457957 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:01:33.457957 ignition[1308]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:01:33.457957 ignition[1308]: INFO : files: files passed
Feb 13 19:01:33.457957 ignition[1308]: INFO : Ignition finished successfully
Feb 13 19:01:33.467579 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:01:33.475575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:01:33.487676 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:01:33.500521 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:01:33.500738 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:01:33.521210 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:33.521210 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:33.527286 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:33.532944 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:01:33.539410 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:01:33.558204 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:01:33.602895 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:01:33.603748 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:01:33.610952 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:01:33.613615 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:01:33.619406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:01:33.628597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:01:33.656085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:01:33.671963 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:01:33.693421 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:33.694476 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:01:33.694715 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:01:33.694961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:01:33.695182 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:01:33.696167 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:01:33.696797 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:01:33.697080 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:01:33.697402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:01:33.697677 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:01:33.697973 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:01:33.698294 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:01:33.698593 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:01:33.698882 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:01:33.699166 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:01:33.699722 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:01:33.699927 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:01:33.700665 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:01:33.700991 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:01:33.701210 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 19:01:33.723847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:01:33.724040 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:01:33.724335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:01:33.736189 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:01:33.736482 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:01:33.741074 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:01:33.741294 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:01:33.786675 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:01:33.788816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:01:33.789083 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:01:33.801694 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:01:33.804416 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:01:33.804774 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:01:33.807347 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:01:33.807578 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:01:33.829575 ignition[1361]: INFO : Ignition 2.20.0 Feb 13 19:01:33.829575 ignition[1361]: INFO : Stage: umount Feb 13 19:01:33.829575 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:33.829575 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:33.829575 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:33.842073 ignition[1361]: INFO : PUT result: OK Feb 13 19:01:33.836973 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:01:33.847460 ignition[1361]: INFO : umount: umount passed Feb 13 19:01:33.850184 ignition[1361]: INFO : Ignition finished successfully Feb 13 19:01:33.848961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:01:33.856044 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:01:33.859561 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:01:33.864159 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:01:33.864348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:01:33.870490 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:01:33.870594 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:01:33.872776 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:01:33.872862 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:01:33.874818 systemd[1]: Stopped target network.target - Network. Feb 13 19:01:33.876413 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:01:33.876495 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:01:33.878694 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:01:33.880305 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:01:33.882027 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:01:33.890387 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:01:33.892211 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:01:33.894473 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:01:33.894557 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:01:33.896559 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:01:33.896660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:01:33.898642 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:01:33.898734 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:01:33.902563 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:01:33.902658 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:01:33.905728 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:01:33.908674 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:01:33.915187 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:01:33.915565 systemd-networkd[1118]: eth0: DHCPv6 lease lost Feb 13 19:01:33.926006 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:01:33.926286 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:01:33.942484 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:01:33.942743 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:01:33.947749 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:01:33.949873 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:01:33.954765 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:01:33.954872 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:01:33.957880 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:01:33.957987 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:01:33.997398 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:01:34.008089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:01:34.008211 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:01:34.010630 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:01:34.010711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:01:34.013227 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:01:34.013333 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:01:34.015802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:01:34.015878 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:01:34.018819 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:01:34.066880 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:01:34.067334 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:01:34.074602 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:01:34.075067 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:01:34.078949 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 19:01:34.079032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:01:34.081694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:01:34.081762 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:01:34.085329 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:01:34.085417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:01:34.087593 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:01:34.087672 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:01:34.090006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:01:34.090085 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:01:34.121607 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:01:34.123983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:01:34.124091 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:01:34.126584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:01:34.126667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:34.139783 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:01:34.139960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:01:34.145227 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:01:34.169528 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:01:34.186459 systemd[1]: Switching root. Feb 13 19:01:34.255610 systemd-journald[253]: Journal stopped Feb 13 19:01:36.833841 systemd-journald[253]: Received SIGTERM from PID 1 (systemd). Feb 13 19:01:36.833962 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:01:36.834183 kernel: SELinux: policy capability open_perms=1 Feb 13 19:01:36.834222 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:01:36.834551 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:01:36.834586 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:01:36.834617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:01:36.834653 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:01:36.834683 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:01:36.834713 kernel: audit: type=1403 audit(1739473294.966:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:01:36.834755 systemd[1]: Successfully loaded SELinux policy in 85.355ms. Feb 13 19:01:36.834804 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.173ms. Feb 13 19:01:36.834839 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:01:36.834869 systemd[1]: Detected virtualization amazon. Feb 13 19:01:36.834897 systemd[1]: Detected architecture arm64. Feb 13 19:01:36.834925 systemd[1]: Detected first boot. Feb 13 19:01:36.834959 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 19:01:36.834991 zram_generator::config[1404]: No configuration found. Feb 13 19:01:36.835029 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:01:36.835057 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:01:36.835096 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:01:36.835128 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:01:36.835161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:01:36.835193 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:01:36.835230 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:01:36.837388 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:01:36.837441 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:01:36.837475 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:01:36.837509 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:01:36.837548 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:01:36.837579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:01:36.837611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:01:36.837641 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:01:36.837676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:01:36.837707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:01:36.837739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:01:36.837771 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:01:36.837804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:01:36.837844 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:01:36.837876 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:01:36.837908 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:01:36.837942 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:01:36.837974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:01:36.838006 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:01:36.838034 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:01:36.838071 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:01:36.838101 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:01:36.838129 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:01:36.838160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:01:36.838190 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:01:36.838224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:01:36.838291 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
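
Several of the "Listening on ..." entries above (systemd-networkd.socket, the udev control and kernel sockets, systemd-userdbd.socket) are socket units: systemd binds the socket itself and hands it to the service on first use. A hedged sketch of the receiving side of that protocol, written for a hypothetical stream-socket service rather than any of the units named in the log:

import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd in the socket-activation protocol

def inherited_sockets():
    # systemd sets LISTEN_PID/LISTEN_FDS only on the process it activated.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

for listener in inherited_sockets():
    conn, _peer = listener.accept()  # the connection that woke the service
    conn.sendall(b"activated\n")
    conn.close()
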
Feb 13 19:01:36.838324 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:01:36.838353 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:01:36.838383 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:01:36.838414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:01:36.838445 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:01:36.838475 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:01:36.838507 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:01:36.838542 systemd[1]: Reached target machines.target - Containers. Feb 13 19:01:36.838572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:01:36.838607 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:36.838639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:01:36.838669 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:01:36.838700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:36.838731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:01:36.838762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:36.838795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:01:36.838824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:01:36.838855 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:01:36.838886 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:01:36.838916 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:01:36.838944 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:01:36.838975 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:01:36.839003 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:01:36.839032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:01:36.839064 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:01:36.839093 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:01:36.839124 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:01:36.839155 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:01:36.839185 systemd[1]: Stopped verity-setup.service. Feb 13 19:01:36.839217 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:01:36.839279 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:01:36.839361 systemd-journald[1485]: Collecting audit messages is disabled. Feb 13 19:01:36.839419 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:01:36.839456 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 19:01:36.839485 systemd-journald[1485]: Journal started Feb 13 19:01:36.839531 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2abb106d4a787e9d4827951012fe13) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:01:36.319297 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:01:36.400904 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:01:36.401682 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:01:36.843512 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:01:36.850090 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:01:36.853762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:01:36.857399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:01:36.863284 kernel: ACPI: bus type drm_connector registered Feb 13 19:01:36.864038 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:01:36.864419 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:01:36.872598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:36.872982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:36.873324 kernel: fuse: init (API version 7.39) Feb 13 19:01:36.876517 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:01:36.877398 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:01:36.880134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:01:36.880518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:36.884022 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:01:36.885509 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:01:36.908388 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:01:36.922363 kernel: loop: module loaded Feb 13 19:01:36.924149 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:01:36.927656 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:36.928086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:36.933106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:01:36.948727 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:01:36.958072 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:01:36.971512 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:01:36.974470 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:01:36.974543 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:01:36.980845 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:01:36.994389 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:01:37.001653 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:01:37.005718 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
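
With systemd-journald started above, every message in this log becomes queryable after boot. A sketch using the python-systemd bindings (the package itself is an assumption, not something this boot installs) to replay, for example, the ignition[1308] entries:

from systemd import journal  # assumes the python-systemd package is installed

reader = journal.Reader()
reader.this_boot()                 # limit to the boot shown in this log
reader.add_match(_COMM="ignition")
for entry in reader:
    print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])
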
Feb 13 19:01:37.017570 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:01:37.027614 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:01:37.029923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:01:37.032350 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:01:37.034568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:01:37.037955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:01:37.042774 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:01:37.048153 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:01:37.050688 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:01:37.053988 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:01:37.075430 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:01:37.094775 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:01:37.131103 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2abb106d4a787e9d4827951012fe13 is 76.937ms for 909 entries. Feb 13 19:01:37.131103 systemd-journald[1485]: System Journal (/var/log/journal/ec2abb106d4a787e9d4827951012fe13) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:01:37.231740 systemd-journald[1485]: Received client request to flush runtime journal. Feb 13 19:01:37.232594 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 19:01:37.136098 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:01:37.140041 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:01:37.154646 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:01:37.239727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:01:37.245782 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:01:37.256926 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:01:37.264841 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:01:37.284313 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:01:37.293176 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:01:37.307524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:01:37.326293 kernel: loop1: detected capacity change from 0 to 53784 Feb 13 19:01:37.343019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:01:37.354659 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:01:37.397324 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 19:01:37.403378 udevadm[1553]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:01:37.426071 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. 
Feb 13 19:01:37.426104 systemd-tmpfiles[1550]: ACLs are not supported, ignoring. Feb 13 19:01:37.445666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:01:37.470545 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 19:01:37.571297 kernel: loop4: detected capacity change from 0 to 116808 Feb 13 19:01:37.593292 kernel: loop5: detected capacity change from 0 to 53784 Feb 13 19:01:37.617563 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 19:01:37.652342 kernel: loop7: detected capacity change from 0 to 113536 Feb 13 19:01:37.672812 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:01:37.673783 (sd-merge)[1558]: Merged extensions into '/usr'. Feb 13 19:01:37.681619 systemd[1]: Reloading requested from client PID 1532 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:01:37.681652 systemd[1]: Reloading... Feb 13 19:01:37.879377 zram_generator::config[1584]: No configuration found. Feb 13 19:01:38.255657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:01:38.369903 systemd[1]: Reloading finished in 687 ms. Feb 13 19:01:38.408444 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:01:38.411773 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:01:38.432688 systemd[1]: Starting ensure-sysext.service... Feb 13 19:01:38.437590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:01:38.447613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:01:38.457438 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:01:38.457473 systemd[1]: Reloading... Feb 13 19:01:38.523071 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:01:38.524599 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:01:38.530846 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:01:38.531769 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Feb 13 19:01:38.532699 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Feb 13 19:01:38.543444 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:01:38.543469 systemd-tmpfiles[1637]: Skipping /boot Feb 13 19:01:38.573546 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:01:38.574540 systemd-tmpfiles[1637]: Skipping /boot Feb 13 19:01:38.642550 systemd-udevd[1638]: Using default interface naming scheme 'v255'. Feb 13 19:01:38.678286 zram_generator::config[1667]: No configuration found. Feb 13 19:01:38.932423 (udev-worker)[1700]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:01:38.980018 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
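
The (sd-merge) entries above show systemd-sysext stacking the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') over /usr, which it does with an overlayfs mount. A generic way to confirm the merge from userspace is to look up /usr's filesystem type in /proc/mounts; this probe is illustrative and not part of Flatcar's tooling:

def mount_entry(target="/usr"):
    # /proc/mounts lines: "<source> <mountpoint> <fstype> <options> ..."
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _source, mountpoint, fstype, options, *_ = line.split()
            if mountpoint == target:
                return fstype, options
    return None

entry = mount_entry()
if entry and entry[0] == "overlay":
    print("/usr is merged; lowerdir lists the extension images:", entry[1])
else:
    print("/usr is not an overlay mount")
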
Feb 13 19:01:39.050789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:01:39.180022 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1685) Feb 13 19:01:39.213465 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:01:39.214218 systemd[1]: Reloading finished in 756 ms. Feb 13 19:01:39.250794 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:01:39.257363 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:01:39.281395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:01:39.355101 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:01:39.378390 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:01:39.381108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:39.387127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:39.393840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:39.400915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:01:39.403661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:39.413916 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:01:39.427838 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:01:39.449429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:01:39.460572 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:01:39.476866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:01:39.487693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:39.489923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:39.528901 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:39.529526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:39.561269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:01:39.561639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:39.568673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:39.583820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:39.589734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:01:39.591808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:39.592063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:01:39.611047 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
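
The modprobe@dm_mod.service, modprobe@efi_pstore.service, and modprobe@loop.service entries above are instances of a single systemd template unit: the text after the "@" is the instance argument that the unit expands into a modprobe invocation. A hypothetical way to start such instances by hand (requires systemd and root privileges):

import subprocess

# Instance names taken from the log lines above.
for module in ("dm_mod", "efi_pstore", "loop"):
    subprocess.run(["systemctl", "start", f"modprobe@{module}.service"], check=True)
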
Feb 13 19:01:39.625998 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:01:39.631040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:39.631452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:39.646101 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:01:39.649541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:01:39.658925 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:39.659367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:39.683551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:39.692791 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:01:39.706920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:39.717367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:01:39.720138 augenrules[1874]: No rules Feb 13 19:01:39.723485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:39.725872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:39.729739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:01:39.733064 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:01:39.740845 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:01:39.744453 systemd[1]: Finished ensure-sysext.service. Feb 13 19:01:39.746234 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:01:39.748386 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:01:39.757607 lvm[1871]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:01:39.779416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:01:39.809190 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:01:39.824860 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:01:39.832724 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:01:39.835393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:01:39.839133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:39.839529 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:39.843893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:01:39.847178 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:01:39.855370 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:01:39.858137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:01:39.867628 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Feb 13 19:01:39.873363 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:01:39.895209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:01:39.896701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:39.899579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:01:39.929347 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:01:39.946279 lvm[1895]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:01:39.959593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:40.010395 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:01:40.027966 systemd-networkd[1837]: lo: Link UP Feb 13 19:01:40.027991 systemd-networkd[1837]: lo: Gained carrier Feb 13 19:01:40.030956 systemd-networkd[1837]: Enumeration completed Feb 13 19:01:40.031150 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:01:40.034121 systemd-networkd[1837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:01:40.034143 systemd-networkd[1837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:01:40.038526 systemd-networkd[1837]: eth0: Link UP Feb 13 19:01:40.038835 systemd-networkd[1837]: eth0: Gained carrier Feb 13 19:01:40.038882 systemd-networkd[1837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:01:40.040736 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:01:40.051365 systemd-networkd[1837]: eth0: DHCPv4 address 172.31.22.173/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:01:40.054072 systemd-resolved[1843]: Positive Trust Anchors: Feb 13 19:01:40.054118 systemd-resolved[1843]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:01:40.054180 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:01:40.065881 systemd-resolved[1843]: Defaulting to hostname 'linux'. Feb 13 19:01:40.068991 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:01:40.071358 systemd[1]: Reached target network.target - Network. Feb 13 19:01:40.073095 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:40.075439 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:01:40.077735 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:01:40.080214 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Feb 13 19:01:40.082842 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:01:40.085136 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:01:40.087465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:01:40.089826 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:01:40.089881 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:01:40.091610 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:01:40.094885 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:01:40.099528 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:01:40.129453 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:01:40.132622 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:01:40.134948 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:01:40.137093 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:01:40.139336 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:01:40.139392 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:01:40.154979 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:01:40.161190 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:01:40.170589 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:01:40.177417 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:01:40.186382 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:01:40.188401 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:01:40.192714 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:01:40.201635 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:01:40.227656 jq[1912]: false Feb 13 19:01:40.217940 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:01:40.228517 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:01:40.239635 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:01:40.252626 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:01:40.267648 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:01:40.272158 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:01:40.274366 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:01:40.282537 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:01:40.291929 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:01:40.299657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 13 19:01:40.300056 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:01:40.348543 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:01:40.372282 tar[1927]: linux-arm64/helm Feb 13 19:01:40.376108 dbus-daemon[1911]: [system] SELinux support is enabled Feb 13 19:01:40.377595 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:01:40.386296 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:01:40.386391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:01:40.389528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:01:40.389595 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:01:40.408665 dbus-daemon[1911]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1837 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:01:40.415576 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:01:40.423564 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:01:40.426270 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:01:40.428387 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:01:40.440394 extend-filesystems[1913]: Found loop4 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found loop5 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found loop6 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found loop7 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found nvme0n1 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found nvme0n1p1 Feb 13 19:01:40.440394 extend-filesystems[1913]: Found nvme0n1p2 Feb 13 19:01:40.472624 extend-filesystems[1913]: Found nvme0n1p3 Feb 13 19:01:40.472624 extend-filesystems[1913]: Found usr Feb 13 19:01:40.472624 extend-filesystems[1913]: Found nvme0n1p4 Feb 13 19:01:40.472624 extend-filesystems[1913]: Found nvme0n1p6 Feb 13 19:01:40.472624 extend-filesystems[1913]: Found nvme0n1p7 Feb 13 19:01:40.472624 extend-filesystems[1913]: Found nvme0n1p9 Feb 13 19:01:40.472624 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9 Feb 13 19:01:40.508560 jq[1925]: true Feb 13 19:01:40.521695 update_engine[1924]: I20250213 19:01:40.517040 1924 main.cc:92] Flatcar Update Engine starting Feb 13 19:01:40.533087 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:01:40.539084 update_engine[1924]: I20250213 19:01:40.533202 1924 update_check_scheduler.cc:74] Next update check in 2m3s Feb 13 19:01:40.546577 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
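
extend-filesystems is sizing /dev/nvme0n1p9 above, and the EXT4 lines further down record the result: an online resize from 553472 to 1489915 blocks. ext4 can grow while mounted, so a bare resize2fs call on the device expands the filesystem to fill its partition. A sketch of that step (root-only, shown purely as an outline):

import subprocess

def grow_mounted_ext4(device="/dev/nvme0n1p9"):  # device name from the log
    # No size argument: resize2fs grows the filesystem to the partition size.
    # fsck is deliberately absent; online resize runs against a mounted fs.
    subprocess.run(["resize2fs", device], check=True)

grow_mounted_ext4()
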
Feb 13 19:01:40.568638 (ntainerd)[1945]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:01:40.573102 jq[1954]: true Feb 13 19:01:40.575384 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:01:40.584282 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9 Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: ---------------------------------------------------- Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: corporation. Support and training for ntp-4 are Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: available at https://www.nwtime.org/support Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: ---------------------------------------------------- Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: proto: precision = 0.096 usec (-23) Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: basedate set to 2025-02-01 Feb 13 19:01:40.588451 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:40.575469 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:40.589779 extend-filesystems[1960]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:01:40.575491 ntpd[1915]: ---------------------------------------------------- Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listen normally on 3 eth0 172.31.22.173:123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: bind(21) AF_INET6 fe80::4d3:f3ff:fe8c:eb9d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: unable to create socket on eth0 (5) for fe80::4d3:f3ff:fe8c:eb9d%2#123 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: failed to init interface for address fe80::4d3:f3ff:fe8c:eb9d%2 Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: Listening on routing socket on fd #21 for interface updates Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:40.605617 ntpd[1915]: 13 Feb 19:01:40 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:40.575510 ntpd[1915]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:40.575529 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:40.575548 ntpd[1915]: corporation. 
Support and training for ntp-4 are Feb 13 19:01:40.575566 ntpd[1915]: available at https://www.nwtime.org/support Feb 13 19:01:40.575584 ntpd[1915]: ---------------------------------------------------- Feb 13 19:01:40.580761 ntpd[1915]: proto: precision = 0.096 usec (-23) Feb 13 19:01:40.581848 ntpd[1915]: basedate set to 2025-02-01 Feb 13 19:01:40.581886 ntpd[1915]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:40.591928 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:40.592034 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:40.592471 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:40.592549 ntpd[1915]: Listen normally on 3 eth0 172.31.22.173:123 Feb 13 19:01:40.592640 ntpd[1915]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:40.592728 ntpd[1915]: bind(21) AF_INET6 fe80::4d3:f3ff:fe8c:eb9d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:40.592773 ntpd[1915]: unable to create socket on eth0 (5) for fe80::4d3:f3ff:fe8c:eb9d%2#123 Feb 13 19:01:40.592800 ntpd[1915]: failed to init interface for address fe80::4d3:f3ff:fe8c:eb9d%2 Feb 13 19:01:40.592858 ntpd[1915]: Listening on routing socket on fd #21 for interface updates Feb 13 19:01:40.601123 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:40.601182 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:40.627682 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:01:40.611841 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:01:40.617524 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:01:40.676076 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:01:40.736315 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:01:40.763212 extend-filesystems[1960]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:01:40.763212 extend-filesystems[1960]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:01:40.763212 extend-filesystems[1960]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:01:40.781850 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:01:40.783399 coreos-metadata[1910]: Feb 13 19:01:40.782 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:01:40.783890 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:01:40.782353 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:01:40.797422 systemd-logind[1923]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:01:40.797486 systemd-logind[1923]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.801 INFO Fetch successful Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.802 INFO Fetch successful Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:01:40.809483 coreos-metadata[1910]: Feb 13 19:01:40.808 INFO Fetch successful Feb 13 19:01:40.799845 systemd-logind[1923]: New seat seat0. 
Feb 13 19:01:40.819782 coreos-metadata[1910]: Feb 13 19:01:40.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:01:40.819782 coreos-metadata[1910]: Feb 13 19:01:40.819 INFO Fetch successful Feb 13 19:01:40.819782 coreos-metadata[1910]: Feb 13 19:01:40.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:01:40.810477 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:01:40.822398 coreos-metadata[1910]: Feb 13 19:01:40.821 INFO Fetch failed with 404: resource not found Feb 13 19:01:40.822398 coreos-metadata[1910]: Feb 13 19:01:40.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:01:40.823848 coreos-metadata[1910]: Feb 13 19:01:40.823 INFO Fetch successful Feb 13 19:01:40.823848 coreos-metadata[1910]: Feb 13 19:01:40.823 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:01:40.834115 coreos-metadata[1910]: Feb 13 19:01:40.828 INFO Fetch successful Feb 13 19:01:40.834115 coreos-metadata[1910]: Feb 13 19:01:40.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:01:40.834115 coreos-metadata[1910]: Feb 13 19:01:40.832 INFO Fetch successful Feb 13 19:01:40.834115 coreos-metadata[1910]: Feb 13 19:01:40.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:01:40.838832 coreos-metadata[1910]: Feb 13 19:01:40.836 INFO Fetch successful Feb 13 19:01:40.838832 coreos-metadata[1910]: Feb 13 19:01:40.836 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:01:40.850311 coreos-metadata[1910]: Feb 13 19:01:40.845 INFO Fetch successful Feb 13 19:01:40.915726 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:01:40.916032 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:01:40.933494 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1940 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:01:40.942545 bash[1993]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:01:40.948731 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:01:40.954391 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:01:40.982821 systemd[1]: Starting sshkeys.service... Feb 13 19:01:41.010020 polkitd[1995]: Started polkitd version 121 Feb 13 19:01:41.011368 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:01:41.014184 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:01:41.061637 polkitd[1995]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:01:41.062171 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
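
sshkeys.service, whose slice is created above, completes just below by writing /home/core/.ssh/authorized_keys from the public-keys metadata path. A self-contained sketch of that flow, reusing the IMDSv2 exchange shown earlier; the agent's real logic is not in this log, so treat this as an outline:

import os
import urllib.request
from pathlib import Path

IMDS = "http://169.254.169.254"

token_req = urllib.request.Request(
    IMDS + "/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

key_req = urllib.request.Request(
    IMDS + "/2021-01-03/meta-data/public-keys/0/openssh-key",  # path from the log
    headers={"X-aws-ec2-metadata-token": token})
pubkey = urllib.request.urlopen(key_req, timeout=2).read().decode()

ssh_dir = Path("/home/core/.ssh")
ssh_dir.mkdir(mode=0o700, exist_ok=True)
with open(ssh_dir / "authorized_keys", "a") as keyfile:
    keyfile.write(pubkey.rstrip() + "\n")
os.chmod(ssh_dir / "authorized_keys", 0o600)  # sshd rejects lax permissions
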
Feb 13 19:01:41.061793 polkitd[1995]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:01:41.078351 polkitd[1995]: Finished loading, compiling and executing 2 rules Feb 13 19:01:41.090156 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:01:41.091233 polkitd[1995]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:01:41.094949 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1686) Feb 13 19:01:41.096188 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:01:41.099324 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:01:41.159942 systemd-hostnamed[1940]: Hostname set to (transient) Feb 13 19:01:41.159973 systemd-resolved[1843]: System hostname changed to 'ip-172-31-22-173'. Feb 13 19:01:41.346014 coreos-metadata[2013]: Feb 13 19:01:41.344 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:01:41.348745 coreos-metadata[2013]: Feb 13 19:01:41.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:01:41.352543 coreos-metadata[2013]: Feb 13 19:01:41.352 INFO Fetch successful Feb 13 19:01:41.352543 coreos-metadata[2013]: Feb 13 19:01:41.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:01:41.353389 coreos-metadata[2013]: Feb 13 19:01:41.353 INFO Fetch successful Feb 13 19:01:41.356640 unknown[2013]: wrote ssh authorized keys file for user: core Feb 13 19:01:41.420185 locksmithd[1955]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:01:41.458643 update-ssh-keys[2066]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:01:41.458755 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:01:41.466948 systemd[1]: Finished sshkeys.service. Feb 13 19:01:41.523285 containerd[1945]: time="2025-02-13T19:01:41.522146459Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:01:41.577032 ntpd[1915]: bind(24) AF_INET6 fe80::4d3:f3ff:fe8c:eb9d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:41.580140 ntpd[1915]: 13 Feb 19:01:41 ntpd[1915]: bind(24) AF_INET6 fe80::4d3:f3ff:fe8c:eb9d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:41.580140 ntpd[1915]: 13 Feb 19:01:41 ntpd[1915]: unable to create socket on eth0 (6) for fe80::4d3:f3ff:fe8c:eb9d%2#123 Feb 13 19:01:41.580140 ntpd[1915]: 13 Feb 19:01:41 ntpd[1915]: failed to init interface for address fe80::4d3:f3ff:fe8c:eb9d%2 Feb 13 19:01:41.577606 ntpd[1915]: unable to create socket on eth0 (6) for fe80::4d3:f3ff:fe8c:eb9d%2#123 Feb 13 19:01:41.577638 ntpd[1915]: failed to init interface for address fe80::4d3:f3ff:fe8c:eb9d%2 Feb 13 19:01:41.691452 containerd[1945]: time="2025-02-13T19:01:41.691026336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.698742 containerd[1945]: time="2025-02-13T19:01:41.697591248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:41.698742 containerd[1945]: time="2025-02-13T19:01:41.698439192Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:01:41.698742 containerd[1945]: time="2025-02-13T19:01:41.698505396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:01:41.699453 containerd[1945]: time="2025-02-13T19:01:41.699394956Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:01:41.699606 containerd[1945]: time="2025-02-13T19:01:41.699578652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.700115 containerd[1945]: time="2025-02-13T19:01:41.700078452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:41.700836 containerd[1945]: time="2025-02-13T19:01:41.700300188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.702621 containerd[1945]: time="2025-02-13T19:01:41.702108828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:41.702621 containerd[1945]: time="2025-02-13T19:01:41.702179712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.702621 containerd[1945]: time="2025-02-13T19:01:41.702216780Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:41.702621 containerd[1945]: time="2025-02-13T19:01:41.702272544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.702621 containerd[1945]: time="2025-02-13T19:01:41.702549648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.704170 containerd[1945]: time="2025-02-13T19:01:41.704017152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:41.705118 containerd[1945]: time="2025-02-13T19:01:41.704861724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:41.705118 containerd[1945]: time="2025-02-13T19:01:41.704906496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:01:41.706553 containerd[1945]: time="2025-02-13T19:01:41.706206192Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:01:41.706553 containerd[1945]: time="2025-02-13T19:01:41.706407108Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:01:41.713306 containerd[1945]: time="2025-02-13T19:01:41.713235180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:01:41.713556 containerd[1945]: time="2025-02-13T19:01:41.713527812Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:01:41.714467 containerd[1945]: time="2025-02-13T19:01:41.714024660Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:01:41.714467 containerd[1945]: time="2025-02-13T19:01:41.714070860Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:01:41.714467 containerd[1945]: time="2025-02-13T19:01:41.714114096Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:01:41.714467 containerd[1945]: time="2025-02-13T19:01:41.714397884Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717072408Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717401532Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717437268Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717471456Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717504756Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717536772Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717565956Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717597168Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717630252Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.717739 containerd[1945]: time="2025-02-13T19:01:41.717660840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.717701268Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718286976Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718335132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718367400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718402428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718432812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718461528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718491336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718518300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718547496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718577592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718612464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718641144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719184 containerd[1945]: time="2025-02-13T19:01:41.718669032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719789 containerd[1945]: time="2025-02-13T19:01:41.718701048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719789 containerd[1945]: time="2025-02-13T19:01:41.718731600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:01:41.719789 containerd[1945]: time="2025-02-13T19:01:41.718773492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719789 containerd[1945]: time="2025-02-13T19:01:41.718807116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.719789 containerd[1945]: time="2025-02-13T19:01:41.718847220Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720691476Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720786996Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720817884Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720847128Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720870516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720903108Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720927648Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:01:41.723351 containerd[1945]: time="2025-02-13T19:01:41.720952404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:01:41.723765 containerd[1945]: time="2025-02-13T19:01:41.721466148Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:01:41.723765 containerd[1945]: time="2025-02-13T19:01:41.721554876Z" level=info msg="Connect containerd service" Feb 13 19:01:41.723765 containerd[1945]: time="2025-02-13T19:01:41.721627956Z" level=info msg="using legacy CRI server" Feb 13 19:01:41.723765 containerd[1945]: time="2025-02-13T19:01:41.721645644Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:01:41.723765 containerd[1945]: time="2025-02-13T19:01:41.721881684Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727469460Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727672368Z" level=info msg="Start subscribing containerd event" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727736928Z" level=info msg="Start recovering state" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727852644Z" level=info msg="Start event monitor" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727876680Z" level=info msg="Start snapshots syncer" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727898352Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:01:41.728157 containerd[1945]: time="2025-02-13T19:01:41.727916052Z" level=info msg="Start streaming server" Feb 13 19:01:41.730133 containerd[1945]: time="2025-02-13T19:01:41.730082268Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:01:41.731205 containerd[1945]: time="2025-02-13T19:01:41.730880760Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:01:41.731336 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:01:41.733314 containerd[1945]: time="2025-02-13T19:01:41.731182248Z" level=info msg="containerd successfully booted in 0.228516s" Feb 13 19:01:41.912477 systemd-networkd[1837]: eth0: Gained IPv6LL Feb 13 19:01:41.921217 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:01:41.924828 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:01:41.938934 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:01:41.950616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:41.968738 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:01:42.072989 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:01:42.104040 amazon-ssm-agent[2119]: Initializing new seelog logger Feb 13 19:01:42.105033 amazon-ssm-agent[2119]: New Seelog Logger Creation Complete Feb 13 19:01:42.105922 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.106017 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:01:42.106877 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 processing appconfig overrides Feb 13 19:01:42.108227 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.108950 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.109155 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 processing appconfig overrides Feb 13 19:01:42.110618 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.110618 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.110618 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 processing appconfig overrides Feb 13 19:01:42.112046 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO Proxy environment variables: Feb 13 19:01:42.118162 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.118337 amazon-ssm-agent[2119]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:42.119436 amazon-ssm-agent[2119]: 2025/02/13 19:01:42 processing appconfig overrides Feb 13 19:01:42.214059 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO https_proxy: Feb 13 19:01:42.316199 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO http_proxy: Feb 13 19:01:42.336913 tar[1927]: linux-arm64/LICENSE Feb 13 19:01:42.337479 tar[1927]: linux-arm64/README.md Feb 13 19:01:42.379735 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:01:42.416268 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO no_proxy: Feb 13 19:01:42.512564 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:01:42.610966 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:01:42.713223 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO Agent will take identity from EC2 Feb 13 19:01:42.813128 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:42.912418 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [Registrar] Starting registrar module Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [EC2Identity] EC2 registration was successful. 
Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:01:42.927850 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:01:43.013593 amazon-ssm-agent[2119]: 2025-02-13 19:01:42 INFO [CredentialRefresher] Next credential rotation will be in 31.083286722533334 minutes Feb 13 19:01:43.334382 sshd_keygen[1958]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:01:43.387204 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:01:43.399819 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:01:43.406771 systemd[1]: Started sshd@0-172.31.22.173:22-147.75.109.163:36188.service - OpenSSH per-connection server daemon (147.75.109.163:36188). Feb 13 19:01:43.416217 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:01:43.420359 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:01:43.433483 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:01:43.457735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:01:43.471774 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:01:43.483990 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:01:43.487729 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:01:43.670308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:43.673847 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:01:43.680473 systemd[1]: Startup finished in 1.084s (kernel) + 9.133s (initrd) + 8.797s (userspace) = 19.015s. Feb 13 19:01:43.684934 sshd[2149]: Accepted publickey for core from 147.75.109.163 port 36188 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:43.685113 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:01:43.695560 sshd-session[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:43.722149 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:01:43.730862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:01:43.739383 systemd-logind[1923]: New session 1 of user core. Feb 13 19:01:43.770938 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:01:43.785011 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:01:43.792809 (systemd)[2170]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:01:43.971009 amazon-ssm-agent[2119]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:01:44.026602 systemd[2170]: Queued start job for default target default.target. Feb 13 19:01:44.034888 systemd[2170]: Created slice app.slice - User Application Slice. Feb 13 19:01:44.034956 systemd[2170]: Reached target paths.target - Paths. Feb 13 19:01:44.034989 systemd[2170]: Reached target timers.target - Timers. 
Feb 13 19:01:44.038565 systemd[2170]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:01:44.071805 amazon-ssm-agent[2119]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2181) started Feb 13 19:01:44.076058 systemd[2170]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:01:44.076348 systemd[2170]: Reached target sockets.target - Sockets. Feb 13 19:01:44.076394 systemd[2170]: Reached target basic.target - Basic System. Feb 13 19:01:44.076477 systemd[2170]: Reached target default.target - Main User Target. Feb 13 19:01:44.076548 systemd[2170]: Startup finished in 271ms. Feb 13 19:01:44.077722 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:01:44.087025 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:01:44.173551 amazon-ssm-agent[2119]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:01:44.261329 systemd[1]: Started sshd@1-172.31.22.173:22-147.75.109.163:34226.service - OpenSSH per-connection server daemon (147.75.109.163:34226). Feb 13 19:01:44.462963 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 34226 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:44.465554 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:44.474986 systemd-logind[1923]: New session 2 of user core. Feb 13 19:01:44.482561 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:01:44.576182 ntpd[1915]: Listen normally on 7 eth0 [fe80::4d3:f3ff:fe8c:eb9d%2]:123 Feb 13 19:01:44.576797 ntpd[1915]: 13 Feb 19:01:44 ntpd[1915]: Listen normally on 7 eth0 [fe80::4d3:f3ff:fe8c:eb9d%2]:123 Feb 13 19:01:44.610401 sshd[2200]: Connection closed by 147.75.109.163 port 34226 Feb 13 19:01:44.613356 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:44.618591 systemd[1]: sshd@1-172.31.22.173:22-147.75.109.163:34226.service: Deactivated successfully. Feb 13 19:01:44.625736 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:01:44.631227 systemd-logind[1923]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:01:44.647994 systemd[1]: Started sshd@2-172.31.22.173:22-147.75.109.163:34242.service - OpenSSH per-connection server daemon (147.75.109.163:34242). Feb 13 19:01:44.650086 systemd-logind[1923]: Removed session 2. Feb 13 19:01:44.764128 kubelet[2163]: E0213 19:01:44.763976 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:01:44.768317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:01:44.768701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:01:44.769361 systemd[1]: kubelet.service: Consumed 1.280s CPU time. Feb 13 19:01:44.844104 sshd[2205]: Accepted publickey for core from 147.75.109.163 port 34242 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:44.846551 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:44.854951 systemd-logind[1923]: New session 3 of user core. 
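ntpd's earlier bind failures on fe80::4d3:f3ff:fe8c:eb9d%2 resolve just above: once systemd-networkd reported "eth0: Gained IPv6LL" and duplicate-address detection finished, the routing-socket update let ntpd open the socket ("Listen normally on 7 eth0"). A sketch reproducing that bind, assuming root (port 123 is privileged) and the address and interface from this log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Bind NTP's port on the link-local address ntpd retried above.
	// While the address is still tentative, this fails with
	// "cannot assign requested address", matching the log.
	addr := &net.UDPAddr{
		IP:   net.ParseIP("fe80::4d3:f3ff:fe8c:eb9d"),
		Port: 123,
		Zone: "eth0", // the %2 scope id in the log names this interface
	}
	conn, err := net.ListenUDP("udp6", addr)
	if err != nil {
		fmt.Println("bind failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("listening on", conn.LocalAddr())
}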
Feb 13 19:01:44.862563 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:01:44.981742 sshd[2208]: Connection closed by 147.75.109.163 port 34242 Feb 13 19:01:44.981006 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:44.986519 systemd[1]: sshd@2-172.31.22.173:22-147.75.109.163:34242.service: Deactivated successfully. Feb 13 19:01:44.989827 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:01:44.993462 systemd-logind[1923]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:01:44.995289 systemd-logind[1923]: Removed session 3. Feb 13 19:01:45.015365 systemd[1]: Started sshd@3-172.31.22.173:22-147.75.109.163:34256.service - OpenSSH per-connection server daemon (147.75.109.163:34256). Feb 13 19:01:45.210895 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 34256 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:45.213404 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:45.220817 systemd-logind[1923]: New session 4 of user core. Feb 13 19:01:45.231509 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:01:45.357603 sshd[2215]: Connection closed by 147.75.109.163 port 34256 Feb 13 19:01:45.358411 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:45.364977 systemd[1]: sshd@3-172.31.22.173:22-147.75.109.163:34256.service: Deactivated successfully. Feb 13 19:01:45.368911 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:01:45.370503 systemd-logind[1923]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:01:45.372448 systemd-logind[1923]: Removed session 4. Feb 13 19:01:45.397771 systemd[1]: Started sshd@4-172.31.22.173:22-147.75.109.163:34268.service - OpenSSH per-connection server daemon (147.75.109.163:34268). Feb 13 19:01:45.575985 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 34268 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:45.578659 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:45.585735 systemd-logind[1923]: New session 5 of user core. Feb 13 19:01:45.595474 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:01:45.728780 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:01:45.729441 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:45.744878 sudo[2223]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:45.768319 sshd[2222]: Connection closed by 147.75.109.163 port 34268 Feb 13 19:01:45.769379 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:45.775480 systemd-logind[1923]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:01:45.775864 systemd[1]: sshd@4-172.31.22.173:22-147.75.109.163:34268.service: Deactivated successfully. Feb 13 19:01:45.779192 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:01:45.782872 systemd-logind[1923]: Removed session 5. Feb 13 19:01:45.810816 systemd[1]: Started sshd@5-172.31.22.173:22-147.75.109.163:34274.service - OpenSSH per-connection server daemon (147.75.109.163:34274). 
Feb 13 19:01:46.006670 sshd[2228]: Accepted publickey for core from 147.75.109.163 port 34274 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:46.009472 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:46.021699 systemd-logind[1923]: New session 6 of user core. Feb 13 19:01:46.029619 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:01:46.138837 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:01:46.139639 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:46.146898 sudo[2232]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:46.158608 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:01:46.159479 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:46.190951 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:01:46.246760 augenrules[2254]: No rules Feb 13 19:01:46.249528 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:01:46.250204 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:01:46.252973 sudo[2231]: pam_unix(sudo:session): session closed for user root Feb 13 19:01:46.276771 sshd[2230]: Connection closed by 147.75.109.163 port 34274 Feb 13 19:01:46.276612 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Feb 13 19:01:46.282717 systemd-logind[1923]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:01:46.283403 systemd[1]: sshd@5-172.31.22.173:22-147.75.109.163:34274.service: Deactivated successfully. Feb 13 19:01:46.286740 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:01:46.290764 systemd-logind[1923]: Removed session 6. Feb 13 19:01:46.317761 systemd[1]: Started sshd@6-172.31.22.173:22-147.75.109.163:34276.service - OpenSSH per-connection server daemon (147.75.109.163:34276). Feb 13 19:01:46.510199 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 34276 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:01:46.512862 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:46.522678 systemd-logind[1923]: New session 7 of user core. Feb 13 19:01:46.532614 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:01:46.637621 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:01:46.638226 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:01:47.898066 systemd-resolved[1843]: Clock change detected. Flushing caches. Feb 13 19:01:47.968361 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:01:47.977410 (dockerd)[2282]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:01:48.429542 dockerd[2282]: time="2025-02-13T19:01:48.429418870Z" level=info msg="Starting up" Feb 13 19:01:48.753121 systemd[1]: var-lib-docker-metacopy\x2dcheck2047099118-merged.mount: Deactivated successfully. Feb 13 19:01:48.764170 dockerd[2282]: time="2025-02-13T19:01:48.764096436Z" level=info msg="Loading containers: start." 
Feb 13 19:01:49.026177 kernel: Initializing XFRM netlink socket Feb 13 19:01:49.076780 (udev-worker)[2308]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:01:49.177507 systemd-networkd[1837]: docker0: Link UP Feb 13 19:01:49.220200 dockerd[2282]: time="2025-02-13T19:01:49.220127722Z" level=info msg="Loading containers: done." Feb 13 19:01:49.242146 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3421852087-merged.mount: Deactivated successfully. Feb 13 19:01:49.260044 dockerd[2282]: time="2025-02-13T19:01:49.259966486Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:01:49.260261 dockerd[2282]: time="2025-02-13T19:01:49.260111686Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:01:49.260330 dockerd[2282]: time="2025-02-13T19:01:49.260300878Z" level=info msg="Daemon has completed initialization" Feb 13 19:01:49.317256 dockerd[2282]: time="2025-02-13T19:01:49.316485323Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:01:49.316628 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:01:50.396162 containerd[1945]: time="2025-02-13T19:01:50.396102168Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:01:51.052488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2812994605.mount: Deactivated successfully. Feb 13 19:01:52.336296 containerd[1945]: time="2025-02-13T19:01:52.336217610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:52.338292 containerd[1945]: time="2025-02-13T19:01:52.338206598Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 19:01:52.339268 containerd[1945]: time="2025-02-13T19:01:52.339180374Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:52.346915 containerd[1945]: time="2025-02-13T19:01:52.345128018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:52.347699 containerd[1945]: time="2025-02-13T19:01:52.347634710Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.951471546s" Feb 13 19:01:52.347898 containerd[1945]: time="2025-02-13T19:01:52.347845754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:01:52.349001 containerd[1945]: time="2025-02-13T19:01:52.348850334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:01:53.949421 containerd[1945]: time="2025-02-13T19:01:53.949361550Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:53.952158 containerd[1945]: time="2025-02-13T19:01:53.952032810Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 19:01:53.953113 containerd[1945]: time="2025-02-13T19:01:53.953026182Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:53.959275 containerd[1945]: time="2025-02-13T19:01:53.959183970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:53.961673 containerd[1945]: time="2025-02-13T19:01:53.961602810Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.612234868s" Feb 13 19:01:53.962204 containerd[1945]: time="2025-02-13T19:01:53.961905798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:01:53.963289 containerd[1945]: time="2025-02-13T19:01:53.962771682Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:01:55.305681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:01:55.315243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:01:55.347909 containerd[1945]: time="2025-02-13T19:01:55.347278865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:55.350285 containerd[1945]: time="2025-02-13T19:01:55.350195957Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 19:01:55.351836 containerd[1945]: time="2025-02-13T19:01:55.351750185Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:55.363912 containerd[1945]: time="2025-02-13T19:01:55.362656841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:55.366018 containerd[1945]: time="2025-02-13T19:01:55.365963105Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.403126731s" Feb 13 19:01:55.366212 containerd[1945]: time="2025-02-13T19:01:55.366184913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:01:55.367271 containerd[1945]: time="2025-02-13T19:01:55.367224629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:01:55.620916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:01:55.635406 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:01:55.724845 kubelet[2541]: E0213 19:01:55.724751 2541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:01:55.733103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:01:55.733783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:01:56.759322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045517532.mount: Deactivated successfully. 
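Each kubelet start above fails for the same reason: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules another restart (counters 1 and 2 in this log). On a node like this, that file is typically written later by kubeadm init or kubeadm join. A rough sketch of just the failing precondition, not kubelet's actual config loader:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The missing file behind "failed to load Kubelet config file" above.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		fmt.Printf("kubelet would fail to start: %v\n", err)
		os.Exit(1) // mirrors kubelet.service exiting with status=1/FAILURE
	}
	fmt.Println("kubelet config present:", path)
}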
Feb 13 19:01:57.288936 containerd[1945]: time="2025-02-13T19:01:57.288741474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:57.290424 containerd[1945]: time="2025-02-13T19:01:57.290341578Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 19:01:57.291551 containerd[1945]: time="2025-02-13T19:01:57.291478818Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:57.295914 containerd[1945]: time="2025-02-13T19:01:57.295436946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:57.297195 containerd[1945]: time="2025-02-13T19:01:57.296963526Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.929527865s" Feb 13 19:01:57.297195 containerd[1945]: time="2025-02-13T19:01:57.297018306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:01:57.298302 containerd[1945]: time="2025-02-13T19:01:57.298147950Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:01:57.904558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount616762839.mount: Deactivated successfully. 
Feb 13 19:01:58.904645 containerd[1945]: time="2025-02-13T19:01:58.904581346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:58.906529 containerd[1945]: time="2025-02-13T19:01:58.905811394Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:01:58.908121 containerd[1945]: time="2025-02-13T19:01:58.908076058Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:58.915909 containerd[1945]: time="2025-02-13T19:01:58.915071446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:58.917552 containerd[1945]: time="2025-02-13T19:01:58.917500942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.619297768s" Feb 13 19:01:58.917724 containerd[1945]: time="2025-02-13T19:01:58.917692810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:01:58.919554 containerd[1945]: time="2025-02-13T19:01:58.919511854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:01:59.478442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555304903.mount: Deactivated successfully. 
Feb 13 19:01:59.485962 containerd[1945]: time="2025-02-13T19:01:59.485858109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:59.489597 containerd[1945]: time="2025-02-13T19:01:59.489513849Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:01:59.491243 containerd[1945]: time="2025-02-13T19:01:59.491184165Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:59.494998 containerd[1945]: time="2025-02-13T19:01:59.494919453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:01:59.496852 containerd[1945]: time="2025-02-13T19:01:59.496642233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 576.681903ms" Feb 13 19:01:59.496852 containerd[1945]: time="2025-02-13T19:01:59.496694289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:01:59.497681 containerd[1945]: time="2025-02-13T19:01:59.497386041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:02:00.041719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292430140.mount: Deactivated successfully. Feb 13 19:02:02.410145 containerd[1945]: time="2025-02-13T19:02:02.410050236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:02.424030 containerd[1945]: time="2025-02-13T19:02:02.423940620Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 19:02:02.434937 containerd[1945]: time="2025-02-13T19:02:02.434813508Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:02.457290 containerd[1945]: time="2025-02-13T19:02:02.457180008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:02.460369 containerd[1945]: time="2025-02-13T19:02:02.459761232Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.962321119s" Feb 13 19:02:02.460369 containerd[1945]: time="2025-02-13T19:02:02.459823776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:02:05.805215 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
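For scale, the pull timings above imply an effective registry throughput in the low tens of MB/s: kube-apiserver moved 25,617,175 bytes in about 1.95 s (roughly 13 MB/s) and etcd 66,535,646 bytes in about 2.96 s (roughly 22 MB/s), while the tiny pause image finished in well under a second.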
Feb 13 19:02:05.814376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:06.114167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:06.134740 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:06.215541 kubelet[2685]: E0213 19:02:06.215392 2685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:06.220703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:06.221204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:11.272586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:11.281418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:11.332717 systemd[1]: Reloading requested from client PID 2700 ('systemctl') (unit session-7.scope)... Feb 13 19:02:11.332749 systemd[1]: Reloading... Feb 13 19:02:11.592007 zram_generator::config[2744]: No configuration found. Feb 13 19:02:11.807589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:11.971242 systemd[1]: Reloading finished in 637 ms. Feb 13 19:02:12.031066 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:02:12.072389 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:02:12.072622 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:02:12.074013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:12.080525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:12.369253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:12.379434 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:12.451931 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:12.451931 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:12.451931 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:02:12.451931 kubelet[2809]: I0213 19:02:12.451787 2809 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:02:13.657333 kubelet[2809]: I0213 19:02:13.657268 2809 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 19:02:13.658017 kubelet[2809]: I0213 19:02:13.657988 2809 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:02:13.658829 kubelet[2809]: I0213 19:02:13.658773 2809 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 19:02:13.714448 kubelet[2809]: E0213 19:02:13.714386 2809 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.173:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.716214 kubelet[2809]: I0213 19:02:13.715958 2809 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:02:13.735589 kubelet[2809]: E0213 19:02:13.735516 2809 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:02:13.735589 kubelet[2809]: I0213 19:02:13.735588 2809 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:02:13.742402 kubelet[2809]: I0213 19:02:13.742345 2809 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:02:13.742661 kubelet[2809]: I0213 19:02:13.742630 2809 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 19:02:13.743046 kubelet[2809]: I0213 19:02:13.742985 2809 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:02:13.743376 kubelet[2809]: I0213 19:02:13.743046 2809 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-173","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:02:13.743574 kubelet[2809]: I0213 19:02:13.743386 2809 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:02:13.743574 kubelet[2809]: I0213 19:02:13.743409 2809 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 19:02:13.743690 kubelet[2809]: I0213 19:02:13.743617 2809 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:13.747636 kubelet[2809]: I0213 19:02:13.747023 2809 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 19:02:13.747636 kubelet[2809]: I0213 19:02:13.747086 2809 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:02:13.747636 kubelet[2809]: I0213 19:02:13.747154 2809 kubelet.go:314] "Adding apiserver pod source"
Feb 13 19:02:13.747636 kubelet[2809]: I0213 19:02:13.747175 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:02:13.758224 kubelet[2809]: W0213 19:02:13.758106 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-173&limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:13.758363 kubelet[2809]: E0213 19:02:13.758241 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-173&limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.761150 kubelet[2809]: W0213 19:02:13.761019 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:13.761293 kubelet[2809]: E0213 19:02:13.761154 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.761347 kubelet[2809]: I0213 19:02:13.761309 2809 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:02:13.764510 kubelet[2809]: I0213 19:02:13.764441 2809 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:02:13.765895 kubelet[2809]: W0213 19:02:13.765822 2809 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:02:13.768122 kubelet[2809]: I0213 19:02:13.767627 2809 server.go:1269] "Started kubelet"
Feb 13 19:02:13.769026 kubelet[2809]: I0213 19:02:13.768960 2809 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:02:13.770829 kubelet[2809]: I0213 19:02:13.770769 2809 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 19:02:13.773795 kubelet[2809]: I0213 19:02:13.773700 2809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:02:13.774546 kubelet[2809]: I0213 19:02:13.774500 2809 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:02:13.777600 kubelet[2809]: E0213 19:02:13.775584 2809 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.173:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.173:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-173.1823d9c7eb7b3518 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-173,UID:ip-172-31-22-173,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-173,},FirstTimestamp:2025-02-13 19:02:13.767583 +0000 UTC m=+1.380158312,LastTimestamp:2025-02-13 19:02:13.767583 +0000 UTC m=+1.380158312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-173,}"
Feb 13 19:02:13.777867 kubelet[2809]: I0213 19:02:13.777746 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:02:13.778072 kubelet[2809]: I0213 19:02:13.778029 2809 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:02:13.783273 kubelet[2809]: I0213 19:02:13.782556 2809 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 19:02:13.783273 kubelet[2809]: I0213 19:02:13.782785 2809 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 19:02:13.783273 kubelet[2809]: I0213 19:02:13.782949 2809 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:02:13.783838 kubelet[2809]: W0213 19:02:13.783744 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:13.785844 kubelet[2809]: E0213 19:02:13.783849 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.785844 kubelet[2809]: E0213 19:02:13.784350 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-173\" not found"
Feb 13 19:02:13.785844 kubelet[2809]: E0213 19:02:13.784492 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": dial tcp 172.31.22.173:6443: connect: connection refused" interval="200ms"
Feb 13 19:02:13.786346 kubelet[2809]: I0213 19:02:13.786289 2809 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:02:13.786506 kubelet[2809]: I0213 19:02:13.786456 2809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:02:13.789288 kubelet[2809]: E0213 19:02:13.789232 2809 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:02:13.789539 kubelet[2809]: I0213 19:02:13.789498 2809 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:02:13.818142 kubelet[2809]: I0213 19:02:13.818052 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:02:13.822836 kubelet[2809]: I0213 19:02:13.822740 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:02:13.822836 kubelet[2809]: I0213 19:02:13.822811 2809 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:02:13.822836 kubelet[2809]: I0213 19:02:13.822848 2809 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 19:02:13.823127 kubelet[2809]: E0213 19:02:13.823043 2809 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:02:13.829509 kubelet[2809]: W0213 19:02:13.829230 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:13.829509 kubelet[2809]: E0213 19:02:13.829345 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.846329 kubelet[2809]: I0213 19:02:13.846290 2809 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:02:13.846932 kubelet[2809]: I0213 19:02:13.846529 2809 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:02:13.846932 kubelet[2809]: I0213 19:02:13.846569 2809 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:13.850260 kubelet[2809]: I0213 19:02:13.850158 2809 policy_none.go:49] "None policy: Start"
Feb 13 19:02:13.851915 kubelet[2809]: I0213 19:02:13.851849 2809 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:02:13.851915 kubelet[2809]: I0213 19:02:13.851923 2809 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:02:13.861662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:02:13.881635 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:02:13.884945 kubelet[2809]: E0213 19:02:13.884843 2809 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-173\" not found"
Feb 13 19:02:13.888772 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:02:13.898682 kubelet[2809]: I0213 19:02:13.898633 2809 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:02:13.899579 kubelet[2809]: I0213 19:02:13.899548 2809 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:02:13.900273 kubelet[2809]: I0213 19:02:13.899958 2809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:02:13.900419 kubelet[2809]: I0213 19:02:13.900384 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:02:13.904649 kubelet[2809]: E0213 19:02:13.904426 2809 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-173\" not found"
Feb 13 19:02:13.943077 systemd[1]: Created slice kubepods-burstable-pod9fe94b7ea8574fdcd8ff46b52dbb703b.slice - libcontainer container kubepods-burstable-pod9fe94b7ea8574fdcd8ff46b52dbb703b.slice.
Feb 13 19:02:13.968450 systemd[1]: Created slice kubepods-burstable-pod7c16f7b72c1eaab5ee9365a6371b4b0f.slice - libcontainer container kubepods-burstable-pod7c16f7b72c1eaab5ee9365a6371b4b0f.slice.
Feb 13 19:02:13.977253 systemd[1]: Created slice kubepods-burstable-pod794cd58122b8301acebde41290be6fd8.slice - libcontainer container kubepods-burstable-pod794cd58122b8301acebde41290be6fd8.slice.
Feb 13 19:02:13.984384 kubelet[2809]: I0213 19:02:13.984329 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:13.984543 kubelet[2809]: I0213 19:02:13.984462 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:13.985072 kubelet[2809]: I0213 19:02:13.984836 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c16f7b72c1eaab5ee9365a6371b4b0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-173\" (UID: \"7c16f7b72c1eaab5ee9365a6371b4b0f\") " pod="kube-system/kube-scheduler-ip-172-31-22-173"
Feb 13 19:02:13.985072 kubelet[2809]: I0213 19:02:13.984923 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:13.985072 kubelet[2809]: I0213 19:02:13.984962 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:13.985072 kubelet[2809]: I0213 19:02:13.984998 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:13.985072 kubelet[2809]: E0213 19:02:13.984954 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": dial tcp 172.31.22.173:6443: connect: connection refused" interval="400ms"
Feb 13 19:02:13.985500 kubelet[2809]: I0213 19:02:13.985036 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:13.985500 kubelet[2809]: I0213 19:02:13.985091 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:13.985500 kubelet[2809]: I0213 19:02:13.985129 2809 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-ca-certs\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:14.002653 kubelet[2809]: I0213 19:02:14.002617 2809 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:14.003651 kubelet[2809]: E0213 19:02:14.003581 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.173:6443/api/v1/nodes\": dial tcp 172.31.22.173:6443: connect: connection refused" node="ip-172-31-22-173"
Feb 13 19:02:14.206197 kubelet[2809]: I0213 19:02:14.206053 2809 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:14.207078 kubelet[2809]: E0213 19:02:14.206548 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.173:6443/api/v1/nodes\": dial tcp 172.31.22.173:6443: connect: connection refused" node="ip-172-31-22-173"
Feb 13 19:02:14.262236 containerd[1945]: time="2025-02-13T19:02:14.262138630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-173,Uid:9fe94b7ea8574fdcd8ff46b52dbb703b,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:14.274442 containerd[1945]: time="2025-02-13T19:02:14.274061591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-173,Uid:7c16f7b72c1eaab5ee9365a6371b4b0f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:14.282797 containerd[1945]: time="2025-02-13T19:02:14.282736607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-173,Uid:794cd58122b8301acebde41290be6fd8,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:14.385847 kubelet[2809]: E0213 19:02:14.385769 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": dial tcp 172.31.22.173:6443: connect: connection refused" interval="800ms"
Feb 13 19:02:14.588028 kubelet[2809]: W0213 19:02:14.587858 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:14.588028 kubelet[2809]: E0213 19:02:14.587985 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.173:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:14.609336 kubelet[2809]: I0213 19:02:14.609278 2809 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:14.609792 kubelet[2809]: E0213 19:02:14.609734 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.173:6443/api/v1/nodes\": dial tcp 172.31.22.173:6443: connect: connection refused" node="ip-172-31-22-173"
Feb 13 19:02:14.724690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429985949.mount: Deactivated successfully.
Feb 13 19:02:14.736421 containerd[1945]: time="2025-02-13T19:02:14.736363741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:14.740294 containerd[1945]: time="2025-02-13T19:02:14.739856845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:14.742178 containerd[1945]: time="2025-02-13T19:02:14.742087741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 19:02:14.743144 containerd[1945]: time="2025-02-13T19:02:14.743090641Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:14.745205 containerd[1945]: time="2025-02-13T19:02:14.745116229Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:14.747855 containerd[1945]: time="2025-02-13T19:02:14.747705877Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:14.751055 containerd[1945]: time="2025-02-13T19:02:14.750979417Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:14.755982 containerd[1945]: time="2025-02-13T19:02:14.755636881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.467098ms"
Feb 13 19:02:14.757488 containerd[1945]: time="2025-02-13T19:02:14.757428061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:14.761322 containerd[1945]: time="2025-02-13T19:02:14.761132221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.879807ms"
Feb 13 19:02:14.771242 containerd[1945]: time="2025-02-13T19:02:14.771169405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.322182ms"
Feb 13 19:02:15.003755 kubelet[2809]: W0213 19:02:15.003574 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-173&limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:15.003755 kubelet[2809]: E0213 19:02:15.003678 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.173:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-173&limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:15.036222 kubelet[2809]: W0213 19:02:15.035777 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:15.036520 kubelet[2809]: E0213 19:02:15.035905 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.173:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:15.048526 containerd[1945]: time="2025-02-13T19:02:15.045949570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:15.048526 containerd[1945]: time="2025-02-13T19:02:15.046289446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:15.048526 containerd[1945]: time="2025-02-13T19:02:15.046439890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.048526 containerd[1945]: time="2025-02-13T19:02:15.047575138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.050823 containerd[1945]: time="2025-02-13T19:02:15.050485870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:15.051047 containerd[1945]: time="2025-02-13T19:02:15.050915362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:15.051279 containerd[1945]: time="2025-02-13T19:02:15.050967586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.053032 containerd[1945]: time="2025-02-13T19:02:15.052821682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:15.053372 containerd[1945]: time="2025-02-13T19:02:15.053232166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:15.053567 containerd[1945]: time="2025-02-13T19:02:15.053490538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.055666 containerd[1945]: time="2025-02-13T19:02:15.055483894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.057706 containerd[1945]: time="2025-02-13T19:02:15.057358750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:15.116221 systemd[1]: Started cri-containerd-2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc.scope - libcontainer container 2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc.
Feb 13 19:02:15.121158 systemd[1]: Started cri-containerd-fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96.scope - libcontainer container fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96.
Feb 13 19:02:15.134492 systemd[1]: Started cri-containerd-0606a001fb5eb388a5b4bbb6d27800784df878e8702d035ee4a154895aeadf86.scope - libcontainer container 0606a001fb5eb388a5b4bbb6d27800784df878e8702d035ee4a154895aeadf86.
Feb 13 19:02:15.140207 kubelet[2809]: W0213 19:02:15.139748 2809 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.173:6443: connect: connection refused
Feb 13 19:02:15.140207 kubelet[2809]: E0213 19:02:15.140157 2809 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.173:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.173:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:15.187534 kubelet[2809]: E0213 19:02:15.187294 2809 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": dial tcp 172.31.22.173:6443: connect: connection refused" interval="1.6s"
Feb 13 19:02:15.218300 containerd[1945]: time="2025-02-13T19:02:15.218196491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-173,Uid:9fe94b7ea8574fdcd8ff46b52dbb703b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96\""
Feb 13 19:02:15.226265 containerd[1945]: time="2025-02-13T19:02:15.225777719Z" level=info msg="CreateContainer within sandbox \"fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 19:02:15.267983 containerd[1945]: time="2025-02-13T19:02:15.267319451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-173,Uid:794cd58122b8301acebde41290be6fd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0606a001fb5eb388a5b4bbb6d27800784df878e8702d035ee4a154895aeadf86\""
Feb 13 19:02:15.275937 containerd[1945]: time="2025-02-13T19:02:15.275811995Z" level=info msg="CreateContainer within sandbox \"0606a001fb5eb388a5b4bbb6d27800784df878e8702d035ee4a154895aeadf86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 19:02:15.276701 containerd[1945]: time="2025-02-13T19:02:15.276644387Z" level=info msg="CreateContainer within sandbox \"fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f\""
Feb 13 19:02:15.279929 containerd[1945]: time="2025-02-13T19:02:15.279145680Z" level=info msg="StartContainer for \"9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f\""
Feb 13 19:02:15.288440 containerd[1945]: time="2025-02-13T19:02:15.288351612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-173,Uid:7c16f7b72c1eaab5ee9365a6371b4b0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc\""
Feb 13 19:02:15.294401 containerd[1945]: time="2025-02-13T19:02:15.294350232Z" level=info msg="CreateContainer within sandbox \"2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 19:02:15.304493 containerd[1945]: time="2025-02-13T19:02:15.304428216Z" level=info msg="CreateContainer within sandbox \"0606a001fb5eb388a5b4bbb6d27800784df878e8702d035ee4a154895aeadf86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3404e6f6bdb451b86f39e2d251ba021969dd46abbe6d6925b65a7c8f3a089104\""
Feb 13 19:02:15.307000 containerd[1945]: time="2025-02-13T19:02:15.306953340Z" level=info msg="StartContainer for \"3404e6f6bdb451b86f39e2d251ba021969dd46abbe6d6925b65a7c8f3a089104\""
Feb 13 19:02:15.339065 containerd[1945]: time="2025-02-13T19:02:15.337921296Z" level=info msg="CreateContainer within sandbox \"2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0\""
Feb 13 19:02:15.341990 containerd[1945]: time="2025-02-13T19:02:15.341406708Z" level=info msg="StartContainer for \"c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0\""
Feb 13 19:02:15.346765 systemd[1]: Started cri-containerd-9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f.scope - libcontainer container 9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f.
Feb 13 19:02:15.387268 systemd[1]: Started cri-containerd-3404e6f6bdb451b86f39e2d251ba021969dd46abbe6d6925b65a7c8f3a089104.scope - libcontainer container 3404e6f6bdb451b86f39e2d251ba021969dd46abbe6d6925b65a7c8f3a089104.
Feb 13 19:02:15.419780 kubelet[2809]: I0213 19:02:15.419202 2809 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:15.419780 kubelet[2809]: E0213 19:02:15.419716 2809 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.173:6443/api/v1/nodes\": dial tcp 172.31.22.173:6443: connect: connection refused" node="ip-172-31-22-173"
Feb 13 19:02:15.435179 systemd[1]: Started cri-containerd-c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0.scope - libcontainer container c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0.
Feb 13 19:02:15.484999 containerd[1945]: time="2025-02-13T19:02:15.484926133Z" level=info msg="StartContainer for \"9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f\" returns successfully"
Feb 13 19:02:15.543757 containerd[1945]: time="2025-02-13T19:02:15.542585017Z" level=info msg="StartContainer for \"3404e6f6bdb451b86f39e2d251ba021969dd46abbe6d6925b65a7c8f3a089104\" returns successfully"
Feb 13 19:02:15.598000 containerd[1945]: time="2025-02-13T19:02:15.597917149Z" level=info msg="StartContainer for \"c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0\" returns successfully"
Feb 13 19:02:17.023251 kubelet[2809]: I0213 19:02:17.023215 2809 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:21.033321 kubelet[2809]: E0213 19:02:21.033253 2809 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-173\" not found" node="ip-172-31-22-173"
Feb 13 19:02:21.056558 kubelet[2809]: I0213 19:02:21.056310 2809 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-173"
Feb 13 19:02:21.056558 kubelet[2809]: E0213 19:02:21.056375 2809 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-173\": node \"ip-172-31-22-173\" not found"
Feb 13 19:02:21.101114 kubelet[2809]: E0213 19:02:21.100660 2809 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-173.1823d9c7eb7b3518 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-173,UID:ip-172-31-22-173,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-173,},FirstTimestamp:2025-02-13 19:02:13.767583 +0000 UTC m=+1.380158312,LastTimestamp:2025-02-13 19:02:13.767583 +0000 UTC m=+1.380158312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-173,}"
Feb 13 19:02:21.170903 kubelet[2809]: E0213 19:02:21.169034 2809 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-173.1823d9c7ecc52c24 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-173,UID:ip-172-31-22-173,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-22-173,},FirstTimestamp:2025-02-13 19:02:13.789207588 +0000 UTC m=+1.401782888,LastTimestamp:2025-02-13 19:02:13.789207588 +0000 UTC m=+1.401782888,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-173,}"
Feb 13 19:02:21.231572 kubelet[2809]: E0213 19:02:21.231192 2809 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-173.1823d9c7f0135db0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-173,UID:ip-172-31-22-173,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-22-173 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-22-173,},FirstTimestamp:2025-02-13 19:02:13.844663728 +0000 UTC m=+1.457239016,LastTimestamp:2025-02-13 19:02:13.844663728 +0000 UTC m=+1.457239016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-173,}"
Feb 13 19:02:21.762577 kubelet[2809]: I0213 19:02:21.762498 2809 apiserver.go:52] "Watching apiserver"
Feb 13 19:02:21.783676 kubelet[2809]: I0213 19:02:21.783627 2809 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 19:02:23.375256 systemd[1]: Reloading requested from client PID 3079 ('systemctl') (unit session-7.scope)...
Feb 13 19:02:23.375293 systemd[1]: Reloading...
Feb 13 19:02:23.540265 zram_generator::config[3119]: No configuration found.
Feb 13 19:02:23.847379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:23.893575 kubelet[2809]: I0213 19:02:23.893450 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-173" podStartSLOduration=1.893428954 podStartE2EDuration="1.893428954s" podCreationTimestamp="2025-02-13 19:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:23.89320759 +0000 UTC m=+11.505782926" watchObservedRunningTime="2025-02-13 19:02:23.893428954 +0000 UTC m=+11.506004254"
Feb 13 19:02:23.894191 kubelet[2809]: I0213 19:02:23.893701 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-173" podStartSLOduration=1.893689174 podStartE2EDuration="1.893689174s" podCreationTimestamp="2025-02-13 19:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:23.872555134 +0000 UTC m=+11.485130446" watchObservedRunningTime="2025-02-13 19:02:23.893689174 +0000 UTC m=+11.506264486"
Feb 13 19:02:24.048594 systemd[1]: Reloading finished in 672 ms.
Feb 13 19:02:24.123217 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:24.137712 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:02:24.138158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:24.138241 systemd[1]: kubelet.service: Consumed 2.099s CPU time, 116.8M memory peak, 0B memory swap peak.
Feb 13 19:02:24.152678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:24.457799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:24.475476 (kubelet)[3181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:02:24.553568 kubelet[3181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:24.555950 kubelet[3181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:02:24.555950 kubelet[3181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:24.555950 kubelet[3181]: I0213 19:02:24.554466 3181 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:02:24.576243 kubelet[3181]: I0213 19:02:24.575996 3181 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 19:02:24.576243 kubelet[3181]: I0213 19:02:24.576069 3181 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:02:24.577235 kubelet[3181]: I0213 19:02:24.577172 3181 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 19:02:24.581593 kubelet[3181]: I0213 19:02:24.581549 3181 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:02:24.586847 kubelet[3181]: I0213 19:02:24.586287 3181 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:02:24.598780 kubelet[3181]: E0213 19:02:24.598729 3181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:02:24.600068 kubelet[3181]: I0213 19:02:24.600031 3181 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:02:24.606412 kubelet[3181]: I0213 19:02:24.606343 3181 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:02:24.607022 kubelet[3181]: I0213 19:02:24.606748 3181 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 19:02:24.607326 kubelet[3181]: I0213 19:02:24.607275 3181 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:02:24.607700 kubelet[3181]: I0213 19:02:24.607411 3181 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-173","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:02:24.608511 kubelet[3181]: I0213 19:02:24.608276 3181 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:02:24.608511 kubelet[3181]: I0213 19:02:24.608311 3181 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 19:02:24.608511 kubelet[3181]: I0213 19:02:24.608372 3181 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:24.613426 kubelet[3181]: I0213 19:02:24.613386 3181 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 19:02:24.614752 kubelet[3181]: I0213 19:02:24.613576 3181 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:02:24.614752 kubelet[3181]: I0213 19:02:24.613629 3181 kubelet.go:314] "Adding apiserver pod source"
Feb 13 19:02:24.614752 kubelet[3181]: I0213 19:02:24.613651 3181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:02:24.622527 kubelet[3181]: I0213 19:02:24.621996 3181 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:02:24.623606 kubelet[3181]: I0213 19:02:24.623491 3181 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:02:24.626418 kubelet[3181]: I0213 19:02:24.625448 3181 server.go:1269] "Started kubelet"
Feb 13 19:02:24.640179 sudo[3194]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 19:02:24.640819 sudo[3194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 19:02:24.643905 kubelet[3181]: I0213 19:02:24.643456 3181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:02:24.653999 kubelet[3181]: I0213 19:02:24.652458 3181 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:02:24.656902 kubelet[3181]: I0213 19:02:24.654856 3181 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 19:02:24.656902 kubelet[3181]: I0213 19:02:24.656468 3181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:02:24.656902 kubelet[3181]: I0213 19:02:24.656780 3181 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:02:24.668905 kubelet[3181]: I0213 19:02:24.667320 3181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:02:24.673031 kubelet[3181]: I0213 19:02:24.671048 3181 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 19:02:24.673031 kubelet[3181]: E0213 19:02:24.671377 3181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-173\" not found"
Feb 13 19:02:24.673031 kubelet[3181]: I0213 19:02:24.672324 3181 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 19:02:24.673031 kubelet[3181]: I0213 19:02:24.672566 3181 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:02:24.677976 kubelet[3181]: I0213 19:02:24.677486 3181 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:02:24.677976 kubelet[3181]: I0213 19:02:24.677667 3181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:02:24.691341 kubelet[3181]: I0213 19:02:24.691274 3181 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:02:24.747288 kubelet[3181]: I0213 19:02:24.744738 3181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:02:24.763299 kubelet[3181]: I0213 19:02:24.763043 3181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:02:24.766529 kubelet[3181]: I0213 19:02:24.766476 3181 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:02:24.766529 kubelet[3181]: I0213 19:02:24.766528 3181 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 19:02:24.766851 kubelet[3181]: E0213 19:02:24.766593 3181 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:02:24.868014 kubelet[3181]: E0213 19:02:24.867950 3181 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 19:02:24.889710 kubelet[3181]: I0213 19:02:24.889080 3181 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:02:24.889710 kubelet[3181]: I0213 19:02:24.889109 3181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:02:24.889710 kubelet[3181]: I0213 19:02:24.889143 3181 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:24.890552 kubelet[3181]: I0213 19:02:24.890240 3181 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:02:24.890552 kubelet[3181]: I0213 19:02:24.890276 3181 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:02:24.890552 kubelet[3181]: I0213 19:02:24.890318 3181 policy_none.go:49] "None policy: Start"
Feb 13 19:02:24.895286 kubelet[3181]: I0213 19:02:24.895178 3181 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:02:24.895286 kubelet[3181]: I0213 19:02:24.895242 3181 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:02:24.895585 kubelet[3181]: I0213 19:02:24.895555 3181 state_mem.go:75] "Updated machine memory state"
Feb 13 19:02:24.910792 kubelet[3181]: I0213 19:02:24.910755 3181 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:02:24.911500 kubelet[3181]: I0213 19:02:24.911301 3181 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:02:24.911500 kubelet[3181]: I0213 19:02:24.911352 3181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:02:24.912303 kubelet[3181]: I0213 19:02:24.912165 3181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:02:25.038857 kubelet[3181]: I0213 19:02:25.038674 3181 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-173"
Feb 13 19:02:25.056838 kubelet[3181]: I0213 19:02:25.056783 3181 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-173"
Feb 13 19:02:25.057022 kubelet[3181]: I0213 19:02:25.056931 3181 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-173"
Feb 13 19:02:25.081903 kubelet[3181]: I0213 19:02:25.078008 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.081903 kubelet[3181]: E0213 19:02:25.081483 3181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-173\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-173"
Feb 13 19:02:25.083490 kubelet[3181]: I0213 19:02:25.082283 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c16f7b72c1eaab5ee9365a6371b4b0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-173\" (UID: \"7c16f7b72c1eaab5ee9365a6371b4b0f\") " pod="kube-system/kube-scheduler-ip-172-31-22-173"
Feb 13 19:02:25.083490 kubelet[3181]: I0213 19:02:25.082360 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-ca-certs\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:25.083490 kubelet[3181]: I0213 19:02:25.082397 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:25.083490 kubelet[3181]: I0213 19:02:25.082433 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.083490 kubelet[3181]: I0213 19:02:25.082470 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.084340 kubelet[3181]: I0213 19:02:25.082562 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/794cd58122b8301acebde41290be6fd8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-173\" (UID: \"794cd58122b8301acebde41290be6fd8\") " pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:25.084340 kubelet[3181]: I0213 19:02:25.082607 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.084340 kubelet[3181]: I0213 19:02:25.082643 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fe94b7ea8574fdcd8ff46b52dbb703b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-173\" (UID: \"9fe94b7ea8574fdcd8ff46b52dbb703b\") " pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.088308 kubelet[3181]: E0213 19:02:25.088253 3181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-173\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-173"
Feb 13 19:02:25.596142 sudo[3194]: pam_unix(sudo:session): session closed for user root
Feb 13 19:02:25.629332 kubelet[3181]: I0213 19:02:25.629033 3181 apiserver.go:52] "Watching apiserver"
Feb 13 19:02:25.673431 kubelet[3181]: I0213 19:02:25.673321 3181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 19:02:25.836327 kubelet[3181]: E0213 19:02:25.836123 3181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-173\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-173"
Feb 13 19:02:25.928905 kubelet[3181]: I0213 19:02:25.927502 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-173" podStartSLOduration=0.927479532 podStartE2EDuration="927.479532ms" podCreationTimestamp="2025-02-13 19:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:25.889936224 +0000 UTC m=+1.408246664" watchObservedRunningTime="2025-02-13 19:02:25.927479532 +0000 UTC m=+1.445789984"
Feb 13 19:02:25.988190 update_engine[1924]: I20250213 19:02:25.987125 1924 update_attempter.cc:509] Updating boot flags...
Feb 13 19:02:26.094535 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3235)
Feb 13 19:02:28.264090 kubelet[3181]: I0213 19:02:28.264042 3181 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:02:28.265155 containerd[1945]: time="2025-02-13T19:02:28.265102872Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:02:28.265831 kubelet[3181]: I0213 19:02:28.265479 3181 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:02:28.924959 systemd[1]: Created slice kubepods-besteffort-pod3432b032_db22_4cfa_8435_ee6244be4984.slice - libcontainer container kubepods-besteffort-pod3432b032_db22_4cfa_8435_ee6244be4984.slice.
Feb 13 19:02:28.968129 systemd[1]: Created slice kubepods-burstable-pod68ab4ee2_4ed0_4fea_84f5_437f5293bfe6.slice - libcontainer container kubepods-burstable-pod68ab4ee2_4ed0_4fea_84f5_437f5293bfe6.slice.
Feb 13 19:02:29.012933 kubelet[3181]: I0213 19:02:29.012486 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-run\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013014 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-bpf-maps\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013067 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-config-path\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013109 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-net\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013145 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-clustermesh-secrets\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013179 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-cgroup\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013325 kubelet[3181]: I0213 19:02:29.013214 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cni-path\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013250 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-etc-cni-netd\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013284 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hubble-tls\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013321 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-xtables-lock\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013358 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hostproc\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013394 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3432b032-db22-4cfa-8435-ee6244be4984-kube-proxy\") pod \"kube-proxy-9zk58\" (UID: \"3432b032-db22-4cfa-8435-ee6244be4984\") " pod="kube-system/kube-proxy-9zk58" Feb 13 19:02:29.013645 kubelet[3181]: I0213 19:02:29.013438 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3432b032-db22-4cfa-8435-ee6244be4984-xtables-lock\") pod \"kube-proxy-9zk58\" (UID: \"3432b032-db22-4cfa-8435-ee6244be4984\") " pod="kube-system/kube-proxy-9zk58" Feb 13 19:02:29.013971 kubelet[3181]: I0213 19:02:29.013474 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9x62\" (UniqueName: \"kubernetes.io/projected/3432b032-db22-4cfa-8435-ee6244be4984-kube-api-access-j9x62\") pod \"kube-proxy-9zk58\" (UID: \"3432b032-db22-4cfa-8435-ee6244be4984\") " pod="kube-system/kube-proxy-9zk58" Feb 13 19:02:29.013971 kubelet[3181]: I0213 19:02:29.013513 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-kernel\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.015909 kubelet[3181]: I0213 19:02:29.013552 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntpkf\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.015909 kubelet[3181]: I0213 19:02:29.014745 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3432b032-db22-4cfa-8435-ee6244be4984-lib-modules\") pod \"kube-proxy-9zk58\" (UID: \"3432b032-db22-4cfa-8435-ee6244be4984\") " pod="kube-system/kube-proxy-9zk58" Feb 13 19:02:29.015909 kubelet[3181]: I0213 19:02:29.014836 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-lib-modules\") pod \"cilium-9t8m2\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " pod="kube-system/cilium-9t8m2" Feb 13 19:02:29.262185 kubelet[3181]: E0213 19:02:29.261044 3181 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:29.262185 kubelet[3181]: E0213 19:02:29.261096 3181 projected.go:194] Error preparing data for projected volume kube-api-access-ntpkf for pod 
kube-system/cilium-9t8m2: configmap "kube-root-ca.crt" not found Feb 13 19:02:29.266396 kubelet[3181]: E0213 19:02:29.264077 3181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf podName:68ab4ee2-4ed0-4fea-84f5-437f5293bfe6 nodeName:}" failed. No retries permitted until 2025-02-13 19:02:29.764036685 +0000 UTC m=+5.282347125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ntpkf" (UniqueName: "kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf") pod "cilium-9t8m2" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6") : configmap "kube-root-ca.crt" not found Feb 13 19:02:29.266396 kubelet[3181]: E0213 19:02:29.265563 3181 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:29.266396 kubelet[3181]: E0213 19:02:29.265603 3181 projected.go:194] Error preparing data for projected volume kube-api-access-j9x62 for pod kube-system/kube-proxy-9zk58: configmap "kube-root-ca.crt" not found Feb 13 19:02:29.266396 kubelet[3181]: E0213 19:02:29.265696 3181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3432b032-db22-4cfa-8435-ee6244be4984-kube-api-access-j9x62 podName:3432b032-db22-4cfa-8435-ee6244be4984 nodeName:}" failed. No retries permitted until 2025-02-13 19:02:29.765667977 +0000 UTC m=+5.283978417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j9x62" (UniqueName: "kubernetes.io/projected/3432b032-db22-4cfa-8435-ee6244be4984-kube-api-access-j9x62") pod "kube-proxy-9zk58" (UID: "3432b032-db22-4cfa-8435-ee6244be4984") : configmap "kube-root-ca.crt" not found Feb 13 19:02:29.583368 systemd[1]: Created slice kubepods-besteffort-podf29da52e_11fb_4b73_a8ba_613c7ab48164.slice - libcontainer container kubepods-besteffort-podf29da52e_11fb_4b73_a8ba_613c7ab48164.slice. Feb 13 19:02:29.620828 kubelet[3181]: I0213 19:02:29.620753 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f29da52e-11fb-4b73-a8ba-613c7ab48164-cilium-config-path\") pod \"cilium-operator-5d85765b45-jm8xg\" (UID: \"f29da52e-11fb-4b73-a8ba-613c7ab48164\") " pod="kube-system/cilium-operator-5d85765b45-jm8xg" Feb 13 19:02:29.620828 kubelet[3181]: I0213 19:02:29.620827 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmhr\" (UniqueName: \"kubernetes.io/projected/f29da52e-11fb-4b73-a8ba-613c7ab48164-kube-api-access-psmhr\") pod \"cilium-operator-5d85765b45-jm8xg\" (UID: \"f29da52e-11fb-4b73-a8ba-613c7ab48164\") " pod="kube-system/cilium-operator-5d85765b45-jm8xg" Feb 13 19:02:29.636266 sudo[2265]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:29.661931 sshd[2264]: Connection closed by 147.75.109.163 port 34276 Feb 13 19:02:29.662226 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:29.667063 systemd[1]: sshd@6-172.31.22.173:22-147.75.109.163:34276.service: Deactivated successfully. Feb 13 19:02:29.670790 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:02:29.671553 systemd[1]: session-7.scope: Consumed 13.387s CPU time, 154.0M memory peak, 0B memory swap peak. Feb 13 19:02:29.673008 systemd-logind[1923]: Session 7 logged out. Waiting for processes to exit. 
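The MountVolume.SetUp failures above are the usual first-boot race: every service-account token volume is projected together with the cluster CA from the kube-root-ca.crt ConfigMap, and kube-controller-manager has not yet published that ConfigMap into the namespace, so the kubelet schedules the retry 500ms out ("durationBeforeRetry 500ms") and backs off on repeated failures. A hedged sketch of that pacing (the doubling factor and the cap are assumptions about the general pattern, not values taken from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond  // first retry, as logged above
        const maxDelay = 2 * time.Minute // assumed cap, not from the log
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }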
Feb 13 19:02:29.676793 systemd-logind[1923]: Removed session 7. Feb 13 19:02:29.848245 containerd[1945]: time="2025-02-13T19:02:29.847390672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zk58,Uid:3432b032-db22-4cfa-8435-ee6244be4984,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:29.879650 containerd[1945]: time="2025-02-13T19:02:29.878825920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9t8m2,Uid:68ab4ee2-4ed0-4fea-84f5-437f5293bfe6,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:29.883922 containerd[1945]: time="2025-02-13T19:02:29.883496020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:29.885030 containerd[1945]: time="2025-02-13T19:02:29.884855980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:29.885220 containerd[1945]: time="2025-02-13T19:02:29.884950888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.885448 containerd[1945]: time="2025-02-13T19:02:29.885369616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.894023 containerd[1945]: time="2025-02-13T19:02:29.893853496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jm8xg,Uid:f29da52e-11fb-4b73-a8ba-613c7ab48164,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:29.928413 containerd[1945]: time="2025-02-13T19:02:29.923747548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:29.928413 containerd[1945]: time="2025-02-13T19:02:29.923856508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:29.928413 containerd[1945]: time="2025-02-13T19:02:29.924206668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.928413 containerd[1945]: time="2025-02-13T19:02:29.925221232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.933246 systemd[1]: Started cri-containerd-5cbdf5feb9dca2be21600f339c4df18c94625f2e24fb85bc9884e52b96104597.scope - libcontainer container 5cbdf5feb9dca2be21600f339c4df18c94625f2e24fb85bc9884e52b96104597. Feb 13 19:02:29.991988 systemd[1]: Started cri-containerd-f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316.scope - libcontainer container f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316. Feb 13 19:02:30.009108 containerd[1945]: time="2025-02-13T19:02:30.007160569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:30.009108 containerd[1945]: time="2025-02-13T19:02:30.007254133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:30.009108 containerd[1945]: time="2025-02-13T19:02:30.007296313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:30.009354 containerd[1945]: time="2025-02-13T19:02:30.008983009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:30.012682 containerd[1945]: time="2025-02-13T19:02:30.012615841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zk58,Uid:3432b032-db22-4cfa-8435-ee6244be4984,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cbdf5feb9dca2be21600f339c4df18c94625f2e24fb85bc9884e52b96104597\"" Feb 13 19:02:30.025645 containerd[1945]: time="2025-02-13T19:02:30.025563529Z" level=info msg="CreateContainer within sandbox \"5cbdf5feb9dca2be21600f339c4df18c94625f2e24fb85bc9884e52b96104597\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:02:30.068209 systemd[1]: Started cri-containerd-22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7.scope - libcontainer container 22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7. Feb 13 19:02:30.071420 containerd[1945]: time="2025-02-13T19:02:30.071220601Z" level=info msg="CreateContainer within sandbox \"5cbdf5feb9dca2be21600f339c4df18c94625f2e24fb85bc9884e52b96104597\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4722ab90c851cfa1c66c0e89dd51ce3766f61d546822f75dbbdf249e534295d0\"" Feb 13 19:02:30.075235 containerd[1945]: time="2025-02-13T19:02:30.074756893Z" level=info msg="StartContainer for \"4722ab90c851cfa1c66c0e89dd51ce3766f61d546822f75dbbdf249e534295d0\"" Feb 13 19:02:30.085201 containerd[1945]: time="2025-02-13T19:02:30.084680941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9t8m2,Uid:68ab4ee2-4ed0-4fea-84f5-437f5293bfe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\"" Feb 13 19:02:30.092227 containerd[1945]: time="2025-02-13T19:02:30.091448953Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:02:30.178209 systemd[1]: Started cri-containerd-4722ab90c851cfa1c66c0e89dd51ce3766f61d546822f75dbbdf249e534295d0.scope - libcontainer container 4722ab90c851cfa1c66c0e89dd51ce3766f61d546822f75dbbdf249e534295d0. Feb 13 19:02:30.204986 containerd[1945]: time="2025-02-13T19:02:30.204846314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jm8xg,Uid:f29da52e-11fb-4b73-a8ba-613c7ab48164,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\"" Feb 13 19:02:30.251642 containerd[1945]: time="2025-02-13T19:02:30.251524562Z" level=info msg="StartContainer for \"4722ab90c851cfa1c66c0e89dd51ce3766f61d546822f75dbbdf249e534295d0\" returns successfully" Feb 13 19:02:30.946963 kubelet[3181]: I0213 19:02:30.946691 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zk58" podStartSLOduration=2.946665473 podStartE2EDuration="2.946665473s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:30.863079797 +0000 UTC m=+6.381390261" watchObservedRunningTime="2025-02-13 19:02:30.946665473 +0000 UTC m=+6.464975913" Feb 13 19:02:36.857144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571115846.mount: Deactivated successfully. 
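The sandbox entries above trace the CRI call order the kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and StartContainer runs the result. A toy stand-in for that sequence (method names mirror the CRI RuntimeService, but this is a simplified sketch, not the real k8s.io/cri-api client):

    package main

    import "fmt"

    type fakeRuntime struct{ n int }

    // RunPodSandbox hands back an opaque sandbox id, as in the log above.
    func (r *fakeRuntime) RunPodSandbox(pod string) string {
        r.n++
        return fmt.Sprintf("sandbox-%d", r.n)
    }

    // CreateContainer places a named container inside an existing sandbox.
    func (r *fakeRuntime) CreateContainer(sandboxID, name string) string {
        return sandboxID + "/" + name
    }

    func (r *fakeRuntime) StartContainer(id string) {
        fmt.Println("started", id)
    }

    func main() {
        rt := &fakeRuntime{}
        sb := rt.RunPodSandbox("kube-proxy-9zk58")
        id := rt.CreateContainer(sb, "kube-proxy")
        rt.StartContainer(id) // "StartContainer ... returns successfully"
    }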
Feb 13 19:02:39.562906 containerd[1945]: time="2025-02-13T19:02:39.561535044Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.563974 containerd[1945]: time="2025-02-13T19:02:39.563909316Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:02:39.565179 containerd[1945]: time="2025-02-13T19:02:39.565137000Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:39.568546 containerd[1945]: time="2025-02-13T19:02:39.568486308Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.476975783s" Feb 13 19:02:39.568764 containerd[1945]: time="2025-02-13T19:02:39.568732620Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:02:39.572821 containerd[1945]: time="2025-02-13T19:02:39.572753496Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:02:39.576820 containerd[1945]: time="2025-02-13T19:02:39.576512640Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:02:39.597149 containerd[1945]: time="2025-02-13T19:02:39.597073320Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\"" Feb 13 19:02:39.597952 containerd[1945]: time="2025-02-13T19:02:39.597863172Z" level=info msg="StartContainer for \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\"" Feb 13 19:02:39.650233 systemd[1]: Started cri-containerd-c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3.scope - libcontainer container c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3. Feb 13 19:02:39.711626 containerd[1945]: time="2025-02-13T19:02:39.711545089Z" level=info msg="StartContainer for \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\" returns successfully" Feb 13 19:02:39.734859 systemd[1]: cri-containerd-c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3.scope: Deactivated successfully. 
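The cilium image pull above moved 157,646,710 bytes in 9.476975783s, which works out to roughly 16.6 MB/s; a one-liner to check the arithmetic:

    package main

    import "fmt"

    func main() {
        const bytesRead, seconds = 157646710.0, 9.476975783 // from the entries above
        fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6)    // ≈ 16.6 MB/s
    }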
Feb 13 19:02:40.477048 containerd[1945]: time="2025-02-13T19:02:40.476663701Z" level=info msg="shim disconnected" id=c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3 namespace=k8s.io Feb 13 19:02:40.477048 containerd[1945]: time="2025-02-13T19:02:40.476740681Z" level=warning msg="cleaning up after shim disconnected" id=c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3 namespace=k8s.io Feb 13 19:02:40.477048 containerd[1945]: time="2025-02-13T19:02:40.476760481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:40.588451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3-rootfs.mount: Deactivated successfully. Feb 13 19:02:40.893723 containerd[1945]: time="2025-02-13T19:02:40.893486415Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:02:40.929910 containerd[1945]: time="2025-02-13T19:02:40.929261427Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\"" Feb 13 19:02:40.938228 containerd[1945]: time="2025-02-13T19:02:40.937413219Z" level=info msg="StartContainer for \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\"" Feb 13 19:02:40.991271 systemd[1]: Started cri-containerd-f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13.scope - libcontainer container f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13. Feb 13 19:02:41.040703 containerd[1945]: time="2025-02-13T19:02:41.040634807Z" level=info msg="StartContainer for \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\" returns successfully" Feb 13 19:02:41.065632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:02:41.067271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:41.067403 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:41.079072 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:41.079542 systemd[1]: cri-containerd-f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13.scope: Deactivated successfully. Feb 13 19:02:41.124367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:41.134040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13-rootfs.mount: Deactivated successfully. 
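The apply-sysctl-overwrites step above adjusts kernel parameters for the Cilium datapath, and the surrounding entries show the host re-running systemd-sysctl.service around it so that the host and the agent end up agreeing on the final values. Sysctls are plain files under /proc/sys; a minimal sketch (the specific key and value below are illustrative assumptions, not read from this log):

    package main

    import (
        "fmt"
        "os"
    )

    // setSysctl writes a value to a kernel parameter under /proc/sys.
    func setSysctl(key, value string) error {
        return os.WriteFile("/proc/sys/"+key, []byte(value), 0o644)
    }

    func main() {
        // Illustrative only: Cilium-style datapath tuning; requires root.
        if err := setSysctl("net/ipv4/conf/all/rp_filter", "0"); err != nil {
            fmt.Println("sysctl write failed:", err)
        }
    }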
Feb 13 19:02:41.145613 containerd[1945]: time="2025-02-13T19:02:41.144953604Z" level=info msg="shim disconnected" id=f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13 namespace=k8s.io Feb 13 19:02:41.146293 containerd[1945]: time="2025-02-13T19:02:41.146218476Z" level=warning msg="cleaning up after shim disconnected" id=f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13 namespace=k8s.io Feb 13 19:02:41.146406 containerd[1945]: time="2025-02-13T19:02:41.146308680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:41.910361 containerd[1945]: time="2025-02-13T19:02:41.910299472Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:02:41.967116 containerd[1945]: time="2025-02-13T19:02:41.967023748Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\"" Feb 13 19:02:41.968849 containerd[1945]: time="2025-02-13T19:02:41.968687488Z" level=info msg="StartContainer for \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\"" Feb 13 19:02:42.052358 systemd[1]: Started cri-containerd-50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306.scope - libcontainer container 50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306. Feb 13 19:02:42.088913 containerd[1945]: time="2025-02-13T19:02:42.088795429Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.090345 containerd[1945]: time="2025-02-13T19:02:42.090237229Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:02:42.093108 containerd[1945]: time="2025-02-13T19:02:42.093022909Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.098244 containerd[1945]: time="2025-02-13T19:02:42.098186089Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.525337745s" Feb 13 19:02:42.098644 containerd[1945]: time="2025-02-13T19:02:42.098607289Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:02:42.104725 containerd[1945]: time="2025-02-13T19:02:42.104672725Z" level=info msg="CreateContainer within sandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:02:42.130753 containerd[1945]: time="2025-02-13T19:02:42.130639225Z" level=info msg="CreateContainer within sandbox 
\"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\"" Feb 13 19:02:42.133240 containerd[1945]: time="2025-02-13T19:02:42.133175125Z" level=info msg="StartContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\"" Feb 13 19:02:42.139653 containerd[1945]: time="2025-02-13T19:02:42.138566245Z" level=info msg="StartContainer for \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\" returns successfully" Feb 13 19:02:42.147159 systemd[1]: cri-containerd-50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306.scope: Deactivated successfully. Feb 13 19:02:42.228331 systemd[1]: Started cri-containerd-8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a.scope - libcontainer container 8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a. Feb 13 19:02:42.301565 containerd[1945]: time="2025-02-13T19:02:42.300795266Z" level=info msg="StartContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" returns successfully" Feb 13 19:02:42.321363 containerd[1945]: time="2025-02-13T19:02:42.320051450Z" level=info msg="shim disconnected" id=50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306 namespace=k8s.io Feb 13 19:02:42.321363 containerd[1945]: time="2025-02-13T19:02:42.321121154Z" level=warning msg="cleaning up after shim disconnected" id=50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306 namespace=k8s.io Feb 13 19:02:42.321363 containerd[1945]: time="2025-02-13T19:02:42.321150218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:42.592691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306-rootfs.mount: Deactivated successfully. Feb 13 19:02:42.927571 containerd[1945]: time="2025-02-13T19:02:42.927259289Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:02:42.957495 containerd[1945]: time="2025-02-13T19:02:42.957299573Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\"" Feb 13 19:02:42.959079 containerd[1945]: time="2025-02-13T19:02:42.958783661Z" level=info msg="StartContainer for \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\"" Feb 13 19:02:43.042423 systemd[1]: run-containerd-runc-k8s.io-60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39-runc.3bV6Ws.mount: Deactivated successfully. Feb 13 19:02:43.059167 systemd[1]: Started cri-containerd-60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39.scope - libcontainer container 60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39. Feb 13 19:02:43.170213 containerd[1945]: time="2025-02-13T19:02:43.170138918Z" level=info msg="StartContainer for \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\" returns successfully" Feb 13 19:02:43.173301 systemd[1]: cri-containerd-60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39.scope: Deactivated successfully. 
Feb 13 19:02:43.236577 containerd[1945]: time="2025-02-13T19:02:43.236132006Z" level=info msg="shim disconnected" id=60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39 namespace=k8s.io Feb 13 19:02:43.236577 containerd[1945]: time="2025-02-13T19:02:43.236210438Z" level=warning msg="cleaning up after shim disconnected" id=60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39 namespace=k8s.io Feb 13 19:02:43.236577 containerd[1945]: time="2025-02-13T19:02:43.236229494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:43.312768 kubelet[3181]: I0213 19:02:43.312665 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jm8xg" podStartSLOduration=2.418895308 podStartE2EDuration="14.312641691s" podCreationTimestamp="2025-02-13 19:02:29 +0000 UTC" firstStartedPulling="2025-02-13 19:02:30.207589826 +0000 UTC m=+5.725900266" lastFinishedPulling="2025-02-13 19:02:42.101336221 +0000 UTC m=+17.619646649" observedRunningTime="2025-02-13 19:02:42.977916425 +0000 UTC m=+18.496226865" watchObservedRunningTime="2025-02-13 19:02:43.312641691 +0000 UTC m=+18.830952131" Feb 13 19:02:43.590152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39-rootfs.mount: Deactivated successfully. Feb 13 19:02:43.930782 containerd[1945]: time="2025-02-13T19:02:43.929359518Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:02:43.964867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486986011.mount: Deactivated successfully. Feb 13 19:02:43.966561 containerd[1945]: time="2025-02-13T19:02:43.965846646Z" level=info msg="CreateContainer within sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\"" Feb 13 19:02:43.969427 containerd[1945]: time="2025-02-13T19:02:43.969364614Z" level=info msg="StartContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\"" Feb 13 19:02:44.033662 systemd[1]: Started cri-containerd-c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20.scope - libcontainer container c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20. Feb 13 19:02:44.097133 containerd[1945]: time="2025-02-13T19:02:44.096275799Z" level=info msg="StartContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" returns successfully" Feb 13 19:02:44.324630 kubelet[3181]: I0213 19:02:44.323503 3181 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:02:44.399603 systemd[1]: Created slice kubepods-burstable-pod576b239c_726a_40f8_9f49_80d0568dc587.slice - libcontainer container kubepods-burstable-pod576b239c_726a_40f8_9f49_80d0568dc587.slice. Feb 13 19:02:44.422698 systemd[1]: Created slice kubepods-burstable-pod6be37e89_39a2_4f4e_b98d_70b0ba7c60fb.slice - libcontainer container kubepods-burstable-pod6be37e89_39a2_4f4e_b98d_70b0ba7c60fb.slice. 
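The cilium-operator startup entry above decomposes exactly if you work on the monotonic clock (the "m=+" offsets) rather than the wall-clock timestamps: the SLO duration is the end-to-end duration minus the image-pull window. Reproducing it from the logged values:

    package main

    import "fmt"

    func main() {
        // Monotonic "m=+" offsets from the cilium-operator entry above.
        const (
            firstStartedPulling = 5.725900266
            lastFinishedPulling = 17.619646649
            podStartE2E         = 14.312641691
        )
        pullWindow := lastFinishedPulling - firstStartedPulling // 11.893746383s
        fmt.Printf("podStartSLOduration = %.9fs\n", podStartE2E-pullWindow) // 2.418895308s
    }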
Feb 13 19:02:44.434199 kubelet[3181]: I0213 19:02:44.432980 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/576b239c-726a-40f8-9f49-80d0568dc587-config-volume\") pod \"coredns-6f6b679f8f-92bt2\" (UID: \"576b239c-726a-40f8-9f49-80d0568dc587\") " pod="kube-system/coredns-6f6b679f8f-92bt2" Feb 13 19:02:44.434651 kubelet[3181]: I0213 19:02:44.434392 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtfs\" (UniqueName: \"kubernetes.io/projected/6be37e89-39a2-4f4e-b98d-70b0ba7c60fb-kube-api-access-5xtfs\") pod \"coredns-6f6b679f8f-mnhqh\" (UID: \"6be37e89-39a2-4f4e-b98d-70b0ba7c60fb\") " pod="kube-system/coredns-6f6b679f8f-mnhqh" Feb 13 19:02:44.437473 kubelet[3181]: I0213 19:02:44.437161 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5ljc\" (UniqueName: \"kubernetes.io/projected/576b239c-726a-40f8-9f49-80d0568dc587-kube-api-access-w5ljc\") pod \"coredns-6f6b679f8f-92bt2\" (UID: \"576b239c-726a-40f8-9f49-80d0568dc587\") " pod="kube-system/coredns-6f6b679f8f-92bt2" Feb 13 19:02:44.437847 kubelet[3181]: I0213 19:02:44.437698 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6be37e89-39a2-4f4e-b98d-70b0ba7c60fb-config-volume\") pod \"coredns-6f6b679f8f-mnhqh\" (UID: \"6be37e89-39a2-4f4e-b98d-70b0ba7c60fb\") " pod="kube-system/coredns-6f6b679f8f-mnhqh" Feb 13 19:02:44.590053 systemd[1]: run-containerd-runc-k8s.io-c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20-runc.NRrPs0.mount: Deactivated successfully. Feb 13 19:02:44.745147 containerd[1945]: time="2025-02-13T19:02:44.744684750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-92bt2,Uid:576b239c-726a-40f8-9f49-80d0568dc587,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:44.757200 containerd[1945]: time="2025-02-13T19:02:44.757144146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mnhqh,Uid:6be37e89-39a2-4f4e-b98d-70b0ba7c60fb,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:44.981209 kubelet[3181]: I0213 19:02:44.981015 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9t8m2" podStartSLOduration=7.5000356759999995 podStartE2EDuration="16.980990695s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="2025-02-13 19:02:30.089398645 +0000 UTC m=+5.607709073" lastFinishedPulling="2025-02-13 19:02:39.570353664 +0000 UTC m=+15.088664092" observedRunningTime="2025-02-13 19:02:44.977650915 +0000 UTC m=+20.495961379" watchObservedRunningTime="2025-02-13 19:02:44.980990695 +0000 UTC m=+20.499301123" Feb 13 19:02:47.182925 systemd-networkd[1837]: cilium_host: Link UP Feb 13 19:02:47.184855 (udev-worker)[4067]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:47.186029 systemd-networkd[1837]: cilium_net: Link UP Feb 13 19:02:47.188248 systemd-networkd[1837]: cilium_net: Gained carrier Feb 13 19:02:47.189621 systemd-networkd[1837]: cilium_host: Gained carrier Feb 13 19:02:47.190718 (udev-worker)[4101]: Network interface NamePolicy= disabled on kernel command line. 
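cilium_host and cilium_net gain carrier back-to-back above because Cilium creates them as the two ends of a single veth pair anchoring the node side of the pod network. The equivalent manual operation, sketched from Go via iproute2 (illustrative; the agent does this over netlink rather than by shelling out):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent of: ip link add cilium_host type veth peer name cilium_net
        cmd := exec.Command("ip", "link", "add", "cilium_host",
            "type", "veth", "peer", "name", "cilium_net")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("needs root and iproute2: %v (%s)\n", err, out)
        }
    }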
Feb 13 19:02:47.192544 systemd-networkd[1837]: cilium_net: Gained IPv6LL Feb 13 19:02:47.192967 systemd-networkd[1837]: cilium_host: Gained IPv6LL Feb 13 19:02:47.356793 (udev-worker)[4108]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:47.370167 systemd-networkd[1837]: cilium_vxlan: Link UP Feb 13 19:02:47.370186 systemd-networkd[1837]: cilium_vxlan: Gained carrier Feb 13 19:02:47.842986 kernel: NET: Registered PF_ALG protocol family Feb 13 19:02:48.986568 systemd-networkd[1837]: cilium_vxlan: Gained IPv6LL Feb 13 19:02:49.130116 systemd-networkd[1837]: lxc_health: Link UP Feb 13 19:02:49.138677 systemd-networkd[1837]: lxc_health: Gained carrier Feb 13 19:02:49.871835 systemd-networkd[1837]: lxcf91bd20a11df: Link UP Feb 13 19:02:49.885207 kernel: eth0: renamed from tmp7e673 Feb 13 19:02:49.891189 systemd-networkd[1837]: lxcf91bd20a11df: Gained carrier Feb 13 19:02:49.926016 systemd-networkd[1837]: lxc838d09980493: Link UP Feb 13 19:02:49.937045 kernel: eth0: renamed from tmp4edbf Feb 13 19:02:49.942683 systemd-networkd[1837]: lxc838d09980493: Gained carrier Feb 13 19:02:49.948182 (udev-worker)[4113]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:50.329241 systemd-networkd[1837]: lxc_health: Gained IPv6LL Feb 13 19:02:51.161299 systemd-networkd[1837]: lxc838d09980493: Gained IPv6LL Feb 13 19:02:51.865174 systemd-networkd[1837]: lxcf91bd20a11df: Gained IPv6LL Feb 13 19:02:53.896949 ntpd[1915]: Listen normally on 8 cilium_host 192.168.0.117:123 Feb 13 19:02:53.897090 ntpd[1915]: Listen normally on 9 cilium_net [fe80::2445:27ff:febe:f365%4]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 8 cilium_host 192.168.0.117:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 9 cilium_net [fe80::2445:27ff:febe:f365%4]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 10 cilium_host [fe80::18d2:76ff:fe93:9408%5]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 11 cilium_vxlan [fe80::3812:17ff:fec9:2c95%6]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 12 lxc_health [fe80::241c:7eff:fef3:2996%8]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 13 lxcf91bd20a11df [fe80::88b6:cbff:fed5:21e0%10]:123 Feb 13 19:02:53.897540 ntpd[1915]: 13 Feb 19:02:53 ntpd[1915]: Listen normally on 14 lxc838d09980493 [fe80::8bc:11ff:fedc:74dc%12]:123 Feb 13 19:02:53.897170 ntpd[1915]: Listen normally on 10 cilium_host [fe80::18d2:76ff:fe93:9408%5]:123 Feb 13 19:02:53.897245 ntpd[1915]: Listen normally on 11 cilium_vxlan [fe80::3812:17ff:fec9:2c95%6]:123 Feb 13 19:02:53.897314 ntpd[1915]: Listen normally on 12 lxc_health [fe80::241c:7eff:fef3:2996%8]:123 Feb 13 19:02:53.897383 ntpd[1915]: Listen normally on 13 lxcf91bd20a11df [fe80::88b6:cbff:fed5:21e0%10]:123 Feb 13 19:02:53.897449 ntpd[1915]: Listen normally on 14 lxc838d09980493 [fe80::8bc:11ff:fedc:74dc%12]:123 Feb 13 19:02:58.001697 containerd[1945]: time="2025-02-13T19:02:58.001443544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:58.001697 containerd[1945]: time="2025-02-13T19:02:58.001555432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:58.001697 containerd[1945]: time="2025-02-13T19:02:58.001594312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:58.003984 containerd[1945]: time="2025-02-13T19:02:58.003008128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:58.073195 systemd[1]: Started cri-containerd-7e67360b3a9ca5619baccc12b82407a101d367366aa22704321923dbcdeeb682.scope - libcontainer container 7e67360b3a9ca5619baccc12b82407a101d367366aa22704321923dbcdeeb682. Feb 13 19:02:58.102905 containerd[1945]: time="2025-02-13T19:02:58.102448252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:58.102905 containerd[1945]: time="2025-02-13T19:02:58.102575836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:58.102905 containerd[1945]: time="2025-02-13T19:02:58.102614728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:58.104826 containerd[1945]: time="2025-02-13T19:02:58.103327828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:58.179224 systemd[1]: Started cri-containerd-4edbfa864df6d1dbcb575235157b3a04bd019acab486d1ea5dd0bddefa5fdf7f.scope - libcontainer container 4edbfa864df6d1dbcb575235157b3a04bd019acab486d1ea5dd0bddefa5fdf7f. Feb 13 19:02:58.219022 containerd[1945]: time="2025-02-13T19:02:58.218469257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-92bt2,Uid:576b239c-726a-40f8-9f49-80d0568dc587,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e67360b3a9ca5619baccc12b82407a101d367366aa22704321923dbcdeeb682\"" Feb 13 19:02:58.233948 containerd[1945]: time="2025-02-13T19:02:58.231228773Z" level=info msg="CreateContainer within sandbox \"7e67360b3a9ca5619baccc12b82407a101d367366aa22704321923dbcdeeb682\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:02:58.266559 containerd[1945]: time="2025-02-13T19:02:58.266384177Z" level=info msg="CreateContainer within sandbox \"7e67360b3a9ca5619baccc12b82407a101d367366aa22704321923dbcdeeb682\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d72e79fd7425873ed21ef36e22d57c8dbd03e84ae9a990c177fbfbb680b45cc1\"" Feb 13 19:02:58.270153 containerd[1945]: time="2025-02-13T19:02:58.269971505Z" level=info msg="StartContainer for \"d72e79fd7425873ed21ef36e22d57c8dbd03e84ae9a990c177fbfbb680b45cc1\"" Feb 13 19:02:58.366266 containerd[1945]: time="2025-02-13T19:02:58.365588418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mnhqh,Uid:6be37e89-39a2-4f4e-b98d-70b0ba7c60fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4edbfa864df6d1dbcb575235157b3a04bd019acab486d1ea5dd0bddefa5fdf7f\"" Feb 13 19:02:58.368678 systemd[1]: Started cri-containerd-d72e79fd7425873ed21ef36e22d57c8dbd03e84ae9a990c177fbfbb680b45cc1.scope - libcontainer container d72e79fd7425873ed21ef36e22d57c8dbd03e84ae9a990c177fbfbb680b45cc1. 
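The fe80:: addresses that ntpd starts listening on further up are not random: each is the interface's EUI-64 link-local, derived from its MAC by flipping the universal/local bit in the first byte and splicing ff:fe into the middle. For cilium_net the log shows fe80::2445:27ff:febe:f365, which maps back to MAC 26:45:27:be:f3:65; a sketch of the forward derivation:

    package main

    import (
        "fmt"
        "net"
    )

    // linkLocal derives the EUI-64 IPv6 link-local address for a 48-bit MAC.
    func linkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, 16)
        ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 prefix
        copy(ip[8:], []byte{mac[0] ^ 0x02, mac[1], mac[2], 0xff, 0xfe, mac[3], mac[4], mac[5]})
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("26:45:27:be:f3:65") // cilium_net, inferred from the log
        fmt.Println(linkLocal(mac))                 // fe80::2445:27ff:febe:f365
    }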
Feb 13 19:02:58.378658 containerd[1945]: time="2025-02-13T19:02:58.378283074Z" level=info msg="CreateContainer within sandbox \"4edbfa864df6d1dbcb575235157b3a04bd019acab486d1ea5dd0bddefa5fdf7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:02:58.404758 containerd[1945]: time="2025-02-13T19:02:58.404666826Z" level=info msg="CreateContainer within sandbox \"4edbfa864df6d1dbcb575235157b3a04bd019acab486d1ea5dd0bddefa5fdf7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f8bfb5aeaa5790cff532cb9ece59ba5229f55ebafa6502e651fd14518c69afc\"" Feb 13 19:02:58.406838 containerd[1945]: time="2025-02-13T19:02:58.406768218Z" level=info msg="StartContainer for \"2f8bfb5aeaa5790cff532cb9ece59ba5229f55ebafa6502e651fd14518c69afc\"" Feb 13 19:02:58.495187 systemd[1]: Started cri-containerd-2f8bfb5aeaa5790cff532cb9ece59ba5229f55ebafa6502e651fd14518c69afc.scope - libcontainer container 2f8bfb5aeaa5790cff532cb9ece59ba5229f55ebafa6502e651fd14518c69afc. Feb 13 19:02:58.512403 containerd[1945]: time="2025-02-13T19:02:58.512320746Z" level=info msg="StartContainer for \"d72e79fd7425873ed21ef36e22d57c8dbd03e84ae9a990c177fbfbb680b45cc1\" returns successfully" Feb 13 19:02:58.614759 containerd[1945]: time="2025-02-13T19:02:58.614577655Z" level=info msg="StartContainer for \"2f8bfb5aeaa5790cff532cb9ece59ba5229f55ebafa6502e651fd14518c69afc\" returns successfully" Feb 13 19:02:59.030421 kubelet[3181]: I0213 19:02:59.027831 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-92bt2" podStartSLOduration=30.027806681 podStartE2EDuration="30.027806681s" podCreationTimestamp="2025-02-13 19:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:59.022995305 +0000 UTC m=+34.541305769" watchObservedRunningTime="2025-02-13 19:02:59.027806681 +0000 UTC m=+34.546117133" Feb 13 19:02:59.084817 kubelet[3181]: I0213 19:02:59.084710 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mnhqh" podStartSLOduration=30.084684665 podStartE2EDuration="30.084684665s" podCreationTimestamp="2025-02-13 19:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:59.083273069 +0000 UTC m=+34.601583569" watchObservedRunningTime="2025-02-13 19:02:59.084684665 +0000 UTC m=+34.602995105" Feb 13 19:03:11.467552 systemd[1]: Started sshd@7-172.31.22.173:22-147.75.109.163:42006.service - OpenSSH per-connection server daemon (147.75.109.163:42006). Feb 13 19:03:11.658277 sshd[4646]: Accepted publickey for core from 147.75.109.163 port 42006 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:11.661145 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:11.670046 systemd-logind[1923]: New session 8 of user core. Feb 13 19:03:11.686185 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:11.949807 sshd[4648]: Connection closed by 147.75.109.163 port 42006 Feb 13 19:03:11.949675 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:11.955334 systemd[1]: sshd@7-172.31.22.173:22-147.75.109.163:42006.service: Deactivated successfully. Feb 13 19:03:11.961418 systemd[1]: session-8.scope: Deactivated successfully. 
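In the coredns startup entries above, firstStartedPulling and lastFinishedPulling read "0001-01-01 00:00:00 +0000 UTC": the image was already on disk, so both fields stay at Go's zero time.Time and the SLO duration simply equals the end-to-end duration. A quick check of what that zero value looks like:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var never time.Time         // Go's zero time
        fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(never.IsZero()) // true
    }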
Feb 13 19:03:11.965926 systemd-logind[1923]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:11.968134 systemd-logind[1923]: Removed session 8. Feb 13 19:03:16.989404 systemd[1]: Started sshd@8-172.31.22.173:22-147.75.109.163:42010.service - OpenSSH per-connection server daemon (147.75.109.163:42010). Feb 13 19:03:17.190331 sshd[4662]: Accepted publickey for core from 147.75.109.163 port 42010 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:17.192823 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:17.201808 systemd-logind[1923]: New session 9 of user core. Feb 13 19:03:17.212189 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:17.458625 sshd[4664]: Connection closed by 147.75.109.163 port 42010 Feb 13 19:03:17.457942 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:17.462946 systemd[1]: sshd@8-172.31.22.173:22-147.75.109.163:42010.service: Deactivated successfully. Feb 13 19:03:17.467859 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:03:17.471742 systemd-logind[1923]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:03:17.474381 systemd-logind[1923]: Removed session 9. Feb 13 19:03:22.499436 systemd[1]: Started sshd@9-172.31.22.173:22-147.75.109.163:59028.service - OpenSSH per-connection server daemon (147.75.109.163:59028). Feb 13 19:03:22.689344 sshd[4676]: Accepted publickey for core from 147.75.109.163 port 59028 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:22.693470 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:22.702982 systemd-logind[1923]: New session 10 of user core. Feb 13 19:03:22.711209 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:03:22.956648 sshd[4678]: Connection closed by 147.75.109.163 port 59028 Feb 13 19:03:22.956598 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:22.962773 systemd[1]: sshd@9-172.31.22.173:22-147.75.109.163:59028.service: Deactivated successfully. Feb 13 19:03:22.968010 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:22.969401 systemd-logind[1923]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:22.971659 systemd-logind[1923]: Removed session 10. Feb 13 19:03:28.000382 systemd[1]: Started sshd@10-172.31.22.173:22-147.75.109.163:59044.service - OpenSSH per-connection server daemon (147.75.109.163:59044). Feb 13 19:03:28.199110 sshd[4692]: Accepted publickey for core from 147.75.109.163 port 59044 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:28.201930 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:28.208970 systemd-logind[1923]: New session 11 of user core. Feb 13 19:03:28.217161 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:03:28.468179 sshd[4695]: Connection closed by 147.75.109.163 port 59044 Feb 13 19:03:28.469088 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:28.475692 systemd[1]: sshd@10-172.31.22.173:22-147.75.109.163:59044.service: Deactivated successfully. Feb 13 19:03:28.480012 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:03:28.482586 systemd-logind[1923]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:03:28.484780 systemd-logind[1923]: Removed session 11. 
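The repeating shape of these SSH entries comes from socket activation: each inbound connection gets its own transient unit named sshd@<instance>-<local addr>:<port>-<peer addr>:<port>.service, plus a session-<N>.scope once PAM opens the session. A formatting sketch of the naming only (systemd generates the real names; the values below are copied from the session-9 entry above):

    package main

    import "fmt"

    func main() {
        // Instance counter, local addr:port, peer addr:port.
        fmt.Printf("sshd@%d-%s-%s.service\n", 8, "172.31.22.173:22", "147.75.109.163:42010")
        // -> sshd@8-172.31.22.173:22-147.75.109.163:42010.service
    }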
Feb 13 19:03:28.508389 systemd[1]: Started sshd@11-172.31.22.173:22-147.75.109.163:59060.service - OpenSSH per-connection server daemon (147.75.109.163:59060). Feb 13 19:03:28.692830 sshd[4707]: Accepted publickey for core from 147.75.109.163 port 59060 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:28.695642 sshd-session[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:28.703807 systemd-logind[1923]: New session 12 of user core. Feb 13 19:03:28.713142 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:03:29.028919 sshd[4709]: Connection closed by 147.75.109.163 port 59060 Feb 13 19:03:29.027610 sshd-session[4707]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:29.035385 systemd[1]: sshd@11-172.31.22.173:22-147.75.109.163:59060.service: Deactivated successfully. Feb 13 19:03:29.040767 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:03:29.045253 systemd-logind[1923]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:03:29.072528 systemd[1]: Started sshd@12-172.31.22.173:22-147.75.109.163:59070.service - OpenSSH per-connection server daemon (147.75.109.163:59070). Feb 13 19:03:29.076707 systemd-logind[1923]: Removed session 12. Feb 13 19:03:29.272476 sshd[4718]: Accepted publickey for core from 147.75.109.163 port 59070 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:29.274291 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:29.282939 systemd-logind[1923]: New session 13 of user core. Feb 13 19:03:29.290168 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:03:29.524923 sshd[4720]: Connection closed by 147.75.109.163 port 59070 Feb 13 19:03:29.525742 sshd-session[4718]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:29.532606 systemd-logind[1923]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:03:29.534208 systemd[1]: sshd@12-172.31.22.173:22-147.75.109.163:59070.service: Deactivated successfully. Feb 13 19:03:29.539232 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:03:29.541151 systemd-logind[1923]: Removed session 13. Feb 13 19:03:34.565418 systemd[1]: Started sshd@13-172.31.22.173:22-147.75.109.163:54122.service - OpenSSH per-connection server daemon (147.75.109.163:54122). Feb 13 19:03:34.750174 sshd[4733]: Accepted publickey for core from 147.75.109.163 port 54122 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:34.752841 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:34.761412 systemd-logind[1923]: New session 14 of user core. Feb 13 19:03:34.770198 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:03:35.024797 sshd[4735]: Connection closed by 147.75.109.163 port 54122 Feb 13 19:03:35.025826 sshd-session[4733]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:35.032761 systemd[1]: sshd@13-172.31.22.173:22-147.75.109.163:54122.service: Deactivated successfully. Feb 13 19:03:35.036765 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:03:35.039343 systemd-logind[1923]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:03:35.041686 systemd-logind[1923]: Removed session 14. 
Feb 13 19:03:40.063383 systemd[1]: Started sshd@14-172.31.22.173:22-147.75.109.163:45040.service - OpenSSH per-connection server daemon (147.75.109.163:45040). Feb 13 19:03:40.251115 sshd[4746]: Accepted publickey for core from 147.75.109.163 port 45040 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:40.253640 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:40.262990 systemd-logind[1923]: New session 15 of user core. Feb 13 19:03:40.272125 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:03:40.519512 sshd[4748]: Connection closed by 147.75.109.163 port 45040 Feb 13 19:03:40.520435 sshd-session[4746]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:40.527231 systemd[1]: sshd@14-172.31.22.173:22-147.75.109.163:45040.service: Deactivated successfully. Feb 13 19:03:40.533171 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:03:40.535601 systemd-logind[1923]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:03:40.537422 systemd-logind[1923]: Removed session 15. Feb 13 19:03:43.989891 update_engine[1924]: I20250213 19:03:43.989784 1924 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:03:43.989891 update_engine[1924]: I20250213 19:03:43.989863 1924 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:03:43.990537 update_engine[1924]: I20250213 19:03:43.990181 1924 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:03:43.991382 update_engine[1924]: I20250213 19:03:43.991055 1924 omaha_request_params.cc:62] Current group set to stable Feb 13 19:03:43.991382 update_engine[1924]: I20250213 19:03:43.991216 1924 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:03:43.991382 update_engine[1924]: I20250213 19:03:43.991240 1924 update_attempter.cc:643] Scheduling an action processor start. Feb 13 19:03:43.991382 update_engine[1924]: I20250213 19:03:43.991273 1924 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:03:43.991382 update_engine[1924]: I20250213 19:03:43.991328 1924 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:03:43.991682 locksmithd[1955]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:03:43.992111 update_engine[1924]: I20250213 19:03:43.991791 1924 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:03:43.992111 update_engine[1924]: I20250213 19:03:43.991819 1924 omaha_request_action.cc:272] Request: Feb 13 19:03:43.992111 update_engine[1924]: [Omaha request XML body not captured in this log] Feb 13 19:03:43.992111 update_engine[1924]: I20250213 19:03:43.991837 1924 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:03:43.993923 update_engine[1924]: I20250213 19:03:43.993808 1924 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:03:43.994392 update_engine[1924]: I20250213 19:03:43.994334 1924 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:03:44.017494 update_engine[1924]: E20250213 19:03:44.017422 1924 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:03:44.017609 update_engine[1924]: I20250213 19:03:44.017542 1924 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:03:45.562388 systemd[1]: Started sshd@15-172.31.22.173:22-147.75.109.163:45052.service - OpenSSH per-connection server daemon (147.75.109.163:45052). Feb 13 19:03:45.744215 sshd[4759]: Accepted publickey for core from 147.75.109.163 port 45052 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:45.746944 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:45.754506 systemd-logind[1923]: New session 16 of user core. Feb 13 19:03:45.761161 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:03:46.000454 sshd[4761]: Connection closed by 147.75.109.163 port 45052 Feb 13 19:03:46.001610 sshd-session[4759]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:46.008010 systemd-logind[1923]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:03:46.008854 systemd[1]: sshd@15-172.31.22.173:22-147.75.109.163:45052.service: Deactivated successfully. Feb 13 19:03:46.013254 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:03:46.017964 systemd-logind[1923]: Removed session 16. Feb 13 19:03:46.038539 systemd[1]: Started sshd@16-172.31.22.173:22-147.75.109.163:45054.service - OpenSSH per-connection server daemon (147.75.109.163:45054). Feb 13 19:03:46.223519 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 45054 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:46.226082 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:46.233413 systemd-logind[1923]: New session 17 of user core. Feb 13 19:03:46.244156 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:03:46.543952 sshd[4774]: Connection closed by 147.75.109.163 port 45054 Feb 13 19:03:46.544796 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:46.549486 systemd[1]: sshd@16-172.31.22.173:22-147.75.109.163:45054.service: Deactivated successfully. Feb 13 19:03:46.554436 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:03:46.558767 systemd-logind[1923]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:03:46.562091 systemd-logind[1923]: Removed session 17. Feb 13 19:03:46.579221 systemd[1]: Started sshd@17-172.31.22.173:22-147.75.109.163:45058.service - OpenSSH per-connection server daemon (147.75.109.163:45058). Feb 13 19:03:46.780172 sshd[4784]: Accepted publickey for core from 147.75.109.163 port 45058 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:46.783173 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:46.791370 systemd-logind[1923]: New session 18 of user core. Feb 13 19:03:46.794138 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:03:49.288159 sshd[4786]: Connection closed by 147.75.109.163 port 45058 Feb 13 19:03:49.290393 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:49.300727 systemd[1]: sshd@17-172.31.22.173:22-147.75.109.163:45058.service: Deactivated successfully. Feb 13 19:03:49.307919 systemd[1]: session-18.scope: Deactivated successfully. 
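"Could not resolve host: disabled" is the giveaway that the Omaha endpoint here is the literal string "disabled" (the conventional way to switch off update checks on Flatcar, e.g. SERVER=disabled in /etc/flatcar/update.conf), so libcurl attempts a DNS lookup of a host named "disabled" and fails before any HTTP traffic happens. The failure mode is easy to reproduce with the standard library:

import socket

def can_resolve(host: str) -> bool:
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror as err:
        # curl surfaces this same condition as "Could not resolve host"
        print(f"Unable to resolve {host!r}: {err}")
        return False

can_resolve("disabled")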
Feb 13 19:03:49.315182 systemd-logind[1923]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:03:49.341277 systemd[1]: Started sshd@18-172.31.22.173:22-147.75.109.163:50202.service - OpenSSH per-connection server daemon (147.75.109.163:50202). Feb 13 19:03:49.345561 systemd-logind[1923]: Removed session 18. Feb 13 19:03:49.543382 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 50202 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:49.546487 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:49.560201 systemd-logind[1923]: New session 19 of user core. Feb 13 19:03:49.569160 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:03:50.054481 sshd[4805]: Connection closed by 147.75.109.163 port 50202 Feb 13 19:03:50.055617 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:50.060611 systemd-logind[1923]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:03:50.061579 systemd[1]: sshd@18-172.31.22.173:22-147.75.109.163:50202.service: Deactivated successfully. Feb 13 19:03:50.065923 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:03:50.069825 systemd-logind[1923]: Removed session 19. Feb 13 19:03:50.092464 systemd[1]: Started sshd@19-172.31.22.173:22-147.75.109.163:50216.service - OpenSSH per-connection server daemon (147.75.109.163:50216). Feb 13 19:03:50.297606 sshd[4814]: Accepted publickey for core from 147.75.109.163 port 50216 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:50.300150 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:50.309147 systemd-logind[1923]: New session 20 of user core. Feb 13 19:03:50.316149 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:03:50.562513 sshd[4816]: Connection closed by 147.75.109.163 port 50216 Feb 13 19:03:50.561593 sshd-session[4814]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:50.566809 systemd[1]: sshd@19-172.31.22.173:22-147.75.109.163:50216.service: Deactivated successfully. Feb 13 19:03:50.570865 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:03:50.574504 systemd-logind[1923]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:03:50.576695 systemd-logind[1923]: Removed session 20. Feb 13 19:03:53.990697 update_engine[1924]: I20250213 19:03:53.989969 1924 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:03:53.990697 update_engine[1924]: I20250213 19:03:53.990321 1924 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:03:53.990697 update_engine[1924]: I20250213 19:03:53.990636 1924 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:03:53.991770 update_engine[1924]: E20250213 19:03:53.991729 1924 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:03:53.991949 update_engine[1924]: I20250213 19:03:53.991918 1924 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:03:55.601394 systemd[1]: Started sshd@20-172.31.22.173:22-147.75.109.163:50224.service - OpenSSH per-connection server daemon (147.75.109.163:50224). 
Feb 13 19:03:55.787238 sshd[4827]: Accepted publickey for core from 147.75.109.163 port 50224 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:55.789749 sshd-session[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:55.798248 systemd-logind[1923]: New session 21 of user core. Feb 13 19:03:55.802149 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:03:56.039426 sshd[4829]: Connection closed by 147.75.109.163 port 50224 Feb 13 19:03:56.040664 sshd-session[4827]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:56.046942 systemd[1]: sshd@20-172.31.22.173:22-147.75.109.163:50224.service: Deactivated successfully. Feb 13 19:03:56.051249 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:03:56.053262 systemd-logind[1923]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:03:56.055756 systemd-logind[1923]: Removed session 21. Feb 13 19:04:01.080432 systemd[1]: Started sshd@21-172.31.22.173:22-147.75.109.163:39080.service - OpenSSH per-connection server daemon (147.75.109.163:39080). Feb 13 19:04:01.269099 sshd[4845]: Accepted publickey for core from 147.75.109.163 port 39080 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:01.272204 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:01.281346 systemd-logind[1923]: New session 22 of user core. Feb 13 19:04:01.287163 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:04:01.527502 sshd[4847]: Connection closed by 147.75.109.163 port 39080 Feb 13 19:04:01.527298 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:01.532169 systemd-logind[1923]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:04:01.532862 systemd[1]: sshd@21-172.31.22.173:22-147.75.109.163:39080.service: Deactivated successfully. Feb 13 19:04:01.537451 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:04:01.540829 systemd-logind[1923]: Removed session 22. Feb 13 19:04:03.989469 update_engine[1924]: I20250213 19:04:03.988710 1924 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:04:03.989469 update_engine[1924]: I20250213 19:04:03.989106 1924 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:04:03.989469 update_engine[1924]: I20250213 19:04:03.989401 1924 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:04:03.990461 update_engine[1924]: E20250213 19:04:03.990418 1924 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:04:03.990617 update_engine[1924]: I20250213 19:04:03.990585 1924 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:04:06.574348 systemd[1]: Started sshd@22-172.31.22.173:22-147.75.109.163:39084.service - OpenSSH per-connection server daemon (147.75.109.163:39084). Feb 13 19:04:06.758922 sshd[4858]: Accepted publickey for core from 147.75.109.163 port 39084 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:06.761391 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:06.771140 systemd-logind[1923]: New session 23 of user core. Feb 13 19:04:06.780181 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 19:04:07.018664 sshd[4860]: Connection closed by 147.75.109.163 port 39084 Feb 13 19:04:07.019605 sshd-session[4858]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:07.025827 systemd[1]: sshd@22-172.31.22.173:22-147.75.109.163:39084.service: Deactivated successfully. Feb 13 19:04:07.030966 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:04:07.032757 systemd-logind[1923]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:04:07.034708 systemd-logind[1923]: Removed session 23. Feb 13 19:04:12.065431 systemd[1]: Started sshd@23-172.31.22.173:22-147.75.109.163:52288.service - OpenSSH per-connection server daemon (147.75.109.163:52288). Feb 13 19:04:12.254152 sshd[4871]: Accepted publickey for core from 147.75.109.163 port 52288 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:12.256719 sshd-session[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:12.264411 systemd-logind[1923]: New session 24 of user core. Feb 13 19:04:12.274152 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:04:12.516127 sshd[4873]: Connection closed by 147.75.109.163 port 52288 Feb 13 19:04:12.517093 sshd-session[4871]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:12.524087 systemd[1]: sshd@23-172.31.22.173:22-147.75.109.163:52288.service: Deactivated successfully. Feb 13 19:04:12.528151 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:04:12.530039 systemd-logind[1923]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:04:12.532386 systemd-logind[1923]: Removed session 24. Feb 13 19:04:12.555421 systemd[1]: Started sshd@24-172.31.22.173:22-147.75.109.163:52304.service - OpenSSH per-connection server daemon (147.75.109.163:52304). Feb 13 19:04:12.755626 sshd[4883]: Accepted publickey for core from 147.75.109.163 port 52304 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:12.758106 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:12.766522 systemd-logind[1923]: New session 25 of user core. Feb 13 19:04:12.772253 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:04:13.987898 update_engine[1924]: I20250213 19:04:13.987201 1924 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:04:13.987898 update_engine[1924]: I20250213 19:04:13.987550 1924 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:04:13.987898 update_engine[1924]: I20250213 19:04:13.987841 1924 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:04:13.990760 update_engine[1924]: E20250213 19:04:13.988915 1924 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989017 1924 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989041 1924 omaha_request_action.cc:617] Omaha request response: Feb 13 19:04:13.990760 update_engine[1924]: E20250213 19:04:13.989171 1924 omaha_request_action.cc:636] Omaha request network transfer failed. Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989208 1924 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989224 1924 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989239 1924 update_attempter.cc:306] Processing Done. Feb 13 19:04:13.990760 update_engine[1924]: E20250213 19:04:13.989266 1924 update_attempter.cc:619] Update failed. Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989282 1924 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989297 1924 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989313 1924 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989423 1924 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989459 1924 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:04:13.990760 update_engine[1924]: I20250213 19:04:13.989476 1924 omaha_request_action.cc:272] Request: Feb 13 19:04:13.990760 update_engine[1924]: [Omaha error-event XML body not captured in this log] Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.989495 1924 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.989754 1924 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990051 1924 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:04:13.991635 update_engine[1924]: E20250213 19:04:13.990803 1924 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990919 1924 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990942 1924 omaha_request_action.cc:617] Omaha request response: Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990962 1924 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990978 1924 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.990993 1924 update_attempter.cc:306] Processing Done. Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.991009 1924 update_attempter.cc:310] Error event sent.
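Taken together with the earlier attempts at 19:03:43, 19:03:53 and 19:04:03, the pattern is: one fetch roughly every ten seconds, three "No HTTP response, retry N" failures, then the attempt is abandoned, an error event is posted (to the same unreachable host), and a fuzzed next check is scheduled (43m2s here). A sketch of that control flow, with the intervals inferred from the log timestamps rather than taken from update_engine itself, and all names illustrative:

import random
import time

def attempt_fetch(url: str) -> bool:
    # Stand-in for libcurl_http_fetcher: the real client resolves the host
    # and POSTs the Omaha XML; here it always fails, as resolving
    # "disabled" does above.
    return False

def omaha_check(url="https://disabled/v1/update/",   # illustrative URL
                max_retries=3, retry_delay=10):
    """Retry a failed transfer a few times, then give up and schedule the
    next periodic check (fuzzed around ~45 min, per the log)."""
    for attempt in range(1, max_retries + 1):
        if attempt_fetch(url):
            return "UPDATE_STATUS_CHECKING_FOR_UPDATE"
        print(f"No HTTP response, retry {attempt}")
        time.sleep(retry_delay)
    next_check = 45 * 60 + random.randint(-180, 180)  # fuzzed interval
    print(f"Next update check in {next_check // 60}m{next_check % 60}s")
    return "UPDATE_STATUS_IDLE"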
Feb 13 19:04:13.991635 update_engine[1924]: I20250213 19:04:13.991029 1924 update_check_scheduler.cc:74] Next update check in 43m2s Feb 13 19:04:13.992311 locksmithd[1955]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 13 19:04:13.992311 locksmithd[1955]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 13 19:04:15.733276 containerd[1945]: time="2025-02-13T19:04:15.733200214Z" level=info msg="StopContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" with timeout 30 (s)" Feb 13 19:04:15.738770 containerd[1945]: time="2025-02-13T19:04:15.738714646Z" level=info msg="Stop container \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" with signal terminated" Feb 13 19:04:15.772899 systemd[1]: cri-containerd-8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a.scope: Deactivated successfully. Feb 13 19:04:15.806427 containerd[1945]: time="2025-02-13T19:04:15.806329150Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:04:15.826811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a-rootfs.mount: Deactivated successfully. Feb 13 19:04:15.834073 containerd[1945]: time="2025-02-13T19:04:15.834010654Z" level=info msg="StopContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" with timeout 2 (s)" Feb 13 19:04:15.834651 containerd[1945]: time="2025-02-13T19:04:15.834593422Z" level=info msg="Stop container \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" with signal terminated" Feb 13 19:04:15.844640 containerd[1945]: time="2025-02-13T19:04:15.844488322Z" level=info msg="shim disconnected" id=8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a namespace=k8s.io Feb 13 19:04:15.845808 containerd[1945]: time="2025-02-13T19:04:15.844741570Z" level=warning msg="cleaning up after shim disconnected" id=8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a namespace=k8s.io Feb 13 19:04:15.845808 containerd[1945]: time="2025-02-13T19:04:15.844769578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:15.851534 systemd-networkd[1837]: lxc_health: Link DOWN Feb 13 19:04:15.853392 systemd-networkd[1837]: lxc_health: Lost carrier Feb 13 19:04:15.886385 systemd[1]: cri-containerd-c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20.scope: Deactivated successfully. Feb 13 19:04:15.886841 systemd[1]: cri-containerd-c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20.scope: Consumed 14.109s CPU time. 
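The paired containerd entries just above spell out the CRI stop contract: "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated", i.e. SIGTERM first, escalating to SIGKILL only if the container outlives the grace period. A generic sketch of those semantics for a plain process (not containerd's actual code path):

import os
import signal
import time

def stop_process(pid: int, timeout: float = 30.0) -> None:
    """Mimic the stop semantics logged above: SIGTERM first, SIGKILL if
    the process is still alive when the timeout expires."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)          # probe only: raises if pid is gone
        except ProcessLookupError:
            return                   # exited within the grace period
        time.sleep(0.2)
    os.kill(pid, signal.SIGKILL)     # grace period exhausted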
Feb 13 19:04:15.901344 containerd[1945]: time="2025-02-13T19:04:15.900018131Z" level=info msg="StopContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" returns successfully" Feb 13 19:04:15.901344 containerd[1945]: time="2025-02-13T19:04:15.901024655Z" level=info msg="StopPodSandbox for \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\"" Feb 13 19:04:15.901344 containerd[1945]: time="2025-02-13T19:04:15.901087799Z" level=info msg="Container to stop \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:15.909118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7-shm.mount: Deactivated successfully. Feb 13 19:04:15.921504 systemd[1]: cri-containerd-22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7.scope: Deactivated successfully. Feb 13 19:04:15.941999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20-rootfs.mount: Deactivated successfully. Feb 13 19:04:15.955029 containerd[1945]: time="2025-02-13T19:04:15.954711623Z" level=info msg="shim disconnected" id=c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20 namespace=k8s.io Feb 13 19:04:15.955312 containerd[1945]: time="2025-02-13T19:04:15.955020167Z" level=warning msg="cleaning up after shim disconnected" id=c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20 namespace=k8s.io Feb 13 19:04:15.955312 containerd[1945]: time="2025-02-13T19:04:15.955293275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:15.978615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7-rootfs.mount: Deactivated successfully. 
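The reload error logged earlier ("failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")") arises because removing 05-cilium.conf leaves /etc/cni/net.d without any config, so the plugin stays uninitialized until a new file appears. A rough sketch of that check, polling the directory instead of the fsnotify watch containerd actually uses:

import glob
import os

def load_cni_config(confdir="/etc/cni/net.d"):
    """Return the highest-priority CNI config file, or None, the state
    containerd reports as 'no network config found in /etc/cni/net.d'."""
    files = sorted(glob.glob(os.path.join(confdir, "*.conf")) +
                   glob.glob(os.path.join(confdir, "*.conflist")))
    return files[0] if files else None

conf = load_cni_config()
if conf is None:
    print("cni config load failed: no network config found in /etc/cni/net.d")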
Feb 13 19:04:15.986564 containerd[1945]: time="2025-02-13T19:04:15.985249043Z" level=info msg="shim disconnected" id=22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7 namespace=k8s.io Feb 13 19:04:15.988227 containerd[1945]: time="2025-02-13T19:04:15.988092971Z" level=warning msg="cleaning up after shim disconnected" id=22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7 namespace=k8s.io Feb 13 19:04:15.988227 containerd[1945]: time="2025-02-13T19:04:15.988165199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:15.994223 containerd[1945]: time="2025-02-13T19:04:15.994151723Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:04:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:04:16.001245 containerd[1945]: time="2025-02-13T19:04:16.001076911Z" level=info msg="StopContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" returns successfully" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002561851Z" level=info msg="StopPodSandbox for \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\"" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002637115Z" level=info msg="Container to stop \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002661139Z" level=info msg="Container to stop \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002684491Z" level=info msg="Container to stop \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002707855Z" level=info msg="Container to stop \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:16.002861 containerd[1945]: time="2025-02-13T19:04:16.002728003Z" level=info msg="Container to stop \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:16.021839 systemd[1]: cri-containerd-f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316.scope: Deactivated successfully. 
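Each "Container to stop ... must be in running or unknown state, current state \"CONTAINER_EXITED\"" line is StopPodSandbox noting that a container needs no stop signal because it has already exited; only running or unknown containers are actually signalled. A small model of that guard (the enum and function are illustrative, not containerd's types):

from enum import Enum

class ContainerState(Enum):
    CREATED = "CONTAINER_CREATED"
    RUNNING = "CONTAINER_RUNNING"
    EXITED = "CONTAINER_EXITED"
    UNKNOWN = "CONTAINER_UNKNOWN"

STOPPABLE = {ContainerState.RUNNING, ContainerState.UNKNOWN}

def maybe_stop(container_id: str, state: ContainerState, stop) -> bool:
    """Send a stop signal only to running/unknown containers; skip the
    rest, as the 'must be in running or unknown state' messages show."""
    if state not in STOPPABLE:
        print(f'Container to stop "{container_id}" must be in running or '
              f'unknown state, current state "{state.value}"')
        return False
    stop(container_id)
    return True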
Feb 13 19:04:16.024423 containerd[1945]: time="2025-02-13T19:04:16.024287587Z" level=info msg="TearDown network for sandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" successfully" Feb 13 19:04:16.024423 containerd[1945]: time="2025-02-13T19:04:16.024344119Z" level=info msg="StopPodSandbox for \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" returns successfully" Feb 13 19:04:16.077094 containerd[1945]: time="2025-02-13T19:04:16.076993868Z" level=info msg="shim disconnected" id=f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316 namespace=k8s.io Feb 13 19:04:16.077094 containerd[1945]: time="2025-02-13T19:04:16.077072072Z" level=warning msg="cleaning up after shim disconnected" id=f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316 namespace=k8s.io Feb 13 19:04:16.077094 containerd[1945]: time="2025-02-13T19:04:16.077093684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:16.100057 containerd[1945]: time="2025-02-13T19:04:16.099789956Z" level=info msg="TearDown network for sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" successfully" Feb 13 19:04:16.100057 containerd[1945]: time="2025-02-13T19:04:16.099837428Z" level=info msg="StopPodSandbox for \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" returns successfully" Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146687 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f29da52e-11fb-4b73-a8ba-613c7ab48164-cilium-config-path\") pod \"f29da52e-11fb-4b73-a8ba-613c7ab48164\" (UID: \"f29da52e-11fb-4b73-a8ba-613c7ab48164\") " Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146750 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-lib-modules\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146786 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-kernel\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146823 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-xtables-lock\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146861 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-config-path\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.148911 kubelet[3181]: I0213 19:04:16.146935 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hostproc\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.146975 3181 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntpkf\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.147015 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-clustermesh-secrets\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.147049 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-net\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.147082 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-etc-cni-netd\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.147116 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-run\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.149838 kubelet[3181]: I0213 19:04:16.147146 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-bpf-maps\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147178 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-cgroup\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147219 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hubble-tls\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147255 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psmhr\" (UniqueName: \"kubernetes.io/projected/f29da52e-11fb-4b73-a8ba-613c7ab48164-kube-api-access-psmhr\") pod \"f29da52e-11fb-4b73-a8ba-613c7ab48164\" (UID: \"f29da52e-11fb-4b73-a8ba-613c7ab48164\") " Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147292 3181 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cni-path\") pod \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\" (UID: \"68ab4ee2-4ed0-4fea-84f5-437f5293bfe6\") " Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147384 3181 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cni-path" (OuterVolumeSpecName: "cni-path") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.150285 kubelet[3181]: I0213 19:04:16.147442 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.150596 kubelet[3181]: I0213 19:04:16.147478 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.150596 kubelet[3181]: I0213 19:04:16.147515 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.150859 kubelet[3181]: I0213 19:04:16.150816 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hostproc" (OuterVolumeSpecName: "hostproc") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.156101 kubelet[3181]: I0213 19:04:16.153332 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.156101 kubelet[3181]: I0213 19:04:16.153393 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.156337 kubelet[3181]: I0213 19:04:16.153419 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.156337 kubelet[3181]: I0213 19:04:16.153466 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.156337 kubelet[3181]: I0213 19:04:16.153495 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:04:16.159291 kubelet[3181]: I0213 19:04:16.159231 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f29da52e-11fb-4b73-a8ba-613c7ab48164-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f29da52e-11fb-4b73-a8ba-613c7ab48164" (UID: "f29da52e-11fb-4b73-a8ba-613c7ab48164"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:04:16.161301 kubelet[3181]: I0213 19:04:16.161248 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:04:16.163259 kubelet[3181]: I0213 19:04:16.163188 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:04:16.165366 kubelet[3181]: I0213 19:04:16.165290 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29da52e-11fb-4b73-a8ba-613c7ab48164-kube-api-access-psmhr" (OuterVolumeSpecName: "kube-api-access-psmhr") pod "f29da52e-11fb-4b73-a8ba-613c7ab48164" (UID: "f29da52e-11fb-4b73-a8ba-613c7ab48164"). InnerVolumeSpecName "kube-api-access-psmhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:04:16.166077 kubelet[3181]: I0213 19:04:16.166012 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf" (OuterVolumeSpecName: "kube-api-access-ntpkf") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "kube-api-access-ntpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:04:16.166404 kubelet[3181]: I0213 19:04:16.166350 3181 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" (UID: "68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:04:16.201665 kubelet[3181]: I0213 19:04:16.201627 3181 scope.go:117] "RemoveContainer" containerID="8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a" Feb 13 19:04:16.207925 containerd[1945]: time="2025-02-13T19:04:16.206859584Z" level=info msg="RemoveContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\"" Feb 13 19:04:16.218152 systemd[1]: Removed slice kubepods-besteffort-podf29da52e_11fb_4b73_a8ba_613c7ab48164.slice - libcontainer container kubepods-besteffort-podf29da52e_11fb_4b73_a8ba_613c7ab48164.slice. Feb 13 19:04:16.220925 containerd[1945]: time="2025-02-13T19:04:16.219580100Z" level=info msg="RemoveContainer for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" returns successfully" Feb 13 19:04:16.224335 kubelet[3181]: I0213 19:04:16.224279 3181 scope.go:117] "RemoveContainer" containerID="8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a" Feb 13 19:04:16.225024 containerd[1945]: time="2025-02-13T19:04:16.224867648Z" level=error msg="ContainerStatus for \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\": not found" Feb 13 19:04:16.225214 kubelet[3181]: E0213 19:04:16.225156 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\": not found" containerID="8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a" Feb 13 19:04:16.225442 kubelet[3181]: I0213 19:04:16.225216 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a"} err="failed to get container status \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cdcf83b15e2ada982a85a8e7f8d242f4da471191ee79ffdb1ad76d6f2e0056a\": not found" Feb 13 19:04:16.225442 kubelet[3181]: I0213 19:04:16.225347 3181 scope.go:117] "RemoveContainer" containerID="c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20" Feb 13 19:04:16.232226 containerd[1945]: time="2025-02-13T19:04:16.232163096Z" level=info msg="RemoveContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\"" Feb 13 19:04:16.233039 systemd[1]: Removed slice kubepods-burstable-pod68ab4ee2_4ed0_4fea_84f5_437f5293bfe6.slice - libcontainer container kubepods-burstable-pod68ab4ee2_4ed0_4fea_84f5_437f5293bfe6.slice. Feb 13 19:04:16.233794 systemd[1]: kubepods-burstable-pod68ab4ee2_4ed0_4fea_84f5_437f5293bfe6.slice: Consumed 14.266s CPU time. 
Feb 13 19:04:16.241659 containerd[1945]: time="2025-02-13T19:04:16.241434296Z" level=info msg="RemoveContainer for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" returns successfully" Feb 13 19:04:16.244472 kubelet[3181]: I0213 19:04:16.244289 3181 scope.go:117] "RemoveContainer" containerID="60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249167 3181 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cni-path\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249210 3181 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f29da52e-11fb-4b73-a8ba-613c7ab48164-cilium-config-path\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249232 3181 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-lib-modules\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249254 3181 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-kernel\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249274 3181 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-xtables-lock\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249294 3181 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-config-path\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249314 3181 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hostproc\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.249669 kubelet[3181]: I0213 19:04:16.249333 3181 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ntpkf\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-kube-api-access-ntpkf\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249357 3181 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-clustermesh-secrets\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249379 3181 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-host-proc-sys-net\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249399 3181 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-etc-cni-netd\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249418 3181 reconciler_common.go:288] 
"Volume detached for volume \"kube-api-access-psmhr\" (UniqueName: \"kubernetes.io/projected/f29da52e-11fb-4b73-a8ba-613c7ab48164-kube-api-access-psmhr\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249438 3181 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-run\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249460 3181 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-bpf-maps\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249479 3181 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-cilium-cgroup\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.250704 kubelet[3181]: I0213 19:04:16.249504 3181 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6-hubble-tls\") on node \"ip-172-31-22-173\" DevicePath \"\"" Feb 13 19:04:16.254046 containerd[1945]: time="2025-02-13T19:04:16.253958504Z" level=info msg="RemoveContainer for \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\"" Feb 13 19:04:16.263928 containerd[1945]: time="2025-02-13T19:04:16.261690056Z" level=info msg="RemoveContainer for \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\" returns successfully" Feb 13 19:04:16.264461 kubelet[3181]: I0213 19:04:16.264236 3181 scope.go:117] "RemoveContainer" containerID="50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306" Feb 13 19:04:16.267914 containerd[1945]: time="2025-02-13T19:04:16.267832508Z" level=info msg="RemoveContainer for \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\"" Feb 13 19:04:16.289561 containerd[1945]: time="2025-02-13T19:04:16.287921133Z" level=info msg="RemoveContainer for \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\" returns successfully" Feb 13 19:04:16.289696 kubelet[3181]: I0213 19:04:16.289352 3181 scope.go:117] "RemoveContainer" containerID="f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13" Feb 13 19:04:16.307014 containerd[1945]: time="2025-02-13T19:04:16.306920589Z" level=info msg="RemoveContainer for \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\"" Feb 13 19:04:16.322311 containerd[1945]: time="2025-02-13T19:04:16.322239561Z" level=info msg="RemoveContainer for \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\" returns successfully" Feb 13 19:04:16.323957 kubelet[3181]: I0213 19:04:16.322590 3181 scope.go:117] "RemoveContainer" containerID="c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3" Feb 13 19:04:16.326522 containerd[1945]: time="2025-02-13T19:04:16.326473689Z" level=info msg="RemoveContainer for \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\"" Feb 13 19:04:16.333067 containerd[1945]: time="2025-02-13T19:04:16.333016941Z" level=info msg="RemoveContainer for \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\" returns successfully" Feb 13 19:04:16.333734 kubelet[3181]: I0213 19:04:16.333684 3181 scope.go:117] "RemoveContainer" containerID="c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20" Feb 
13 19:04:16.334385 containerd[1945]: time="2025-02-13T19:04:16.334331037Z" level=error msg="ContainerStatus for \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\": not found" Feb 13 19:04:16.334934 kubelet[3181]: E0213 19:04:16.334669 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\": not found" containerID="c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20" Feb 13 19:04:16.334934 kubelet[3181]: I0213 19:04:16.334723 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20"} err="failed to get container status \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2ef571eee27ce6aceb509342945093a06006d73aee176eb8d3a23fd58746a20\": not found" Feb 13 19:04:16.334934 kubelet[3181]: I0213 19:04:16.334761 3181 scope.go:117] "RemoveContainer" containerID="60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39" Feb 13 19:04:16.335213 containerd[1945]: time="2025-02-13T19:04:16.335125689Z" level=error msg="ContainerStatus for \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\": not found" Feb 13 19:04:16.335399 kubelet[3181]: E0213 19:04:16.335333 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\": not found" containerID="60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39" Feb 13 19:04:16.335399 kubelet[3181]: I0213 19:04:16.335381 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39"} err="failed to get container status \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\": rpc error: code = NotFound desc = an error occurred when try to find container \"60e04f3cdb186a41bbcd40d650a3b3732c235a1af327a63010f90c75839dcc39\": not found" Feb 13 19:04:16.335509 kubelet[3181]: I0213 19:04:16.335415 3181 scope.go:117] "RemoveContainer" containerID="50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306" Feb 13 19:04:16.335837 containerd[1945]: time="2025-02-13T19:04:16.335794653Z" level=error msg="ContainerStatus for \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\": not found" Feb 13 19:04:16.336260 kubelet[3181]: E0213 19:04:16.336218 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\": not found" containerID="50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306" Feb 13 19:04:16.336345 
kubelet[3181]: I0213 19:04:16.336268 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306"} err="failed to get container status \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\": rpc error: code = NotFound desc = an error occurred when try to find container \"50ead7aeb5b34e923495bb995f9303af31f80d6ac1da591203c4238cf0fcf306\": not found" Feb 13 19:04:16.336345 kubelet[3181]: I0213 19:04:16.336303 3181 scope.go:117] "RemoveContainer" containerID="f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13" Feb 13 19:04:16.336698 containerd[1945]: time="2025-02-13T19:04:16.336623349Z" level=error msg="ContainerStatus for \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\": not found" Feb 13 19:04:16.337146 kubelet[3181]: E0213 19:04:16.336936 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\": not found" containerID="f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13" Feb 13 19:04:16.337146 kubelet[3181]: I0213 19:04:16.336981 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13"} err="failed to get container status \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6ca86dbede3240ebd908b7d0f6576f97eca886f25687be832b58c16a8c2ae13\": not found" Feb 13 19:04:16.337146 kubelet[3181]: I0213 19:04:16.337011 3181 scope.go:117] "RemoveContainer" containerID="c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3" Feb 13 19:04:16.337506 containerd[1945]: time="2025-02-13T19:04:16.337379013Z" level=error msg="ContainerStatus for \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\": not found" Feb 13 19:04:16.337750 kubelet[3181]: E0213 19:04:16.337711 3181 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\": not found" containerID="c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3" Feb 13 19:04:16.337829 kubelet[3181]: I0213 19:04:16.337779 3181 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3"} err="failed to get container status \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2ea922d4065739ce661e2d2eaf606c2c7f358fbe190b8c1f824ed4ea87e97b3\": not found" Feb 13 19:04:16.768965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316-rootfs.mount: Deactivated successfully. 
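The RemoveContainer / ContainerStatus interleaving above is a benign race: once a container has been deleted, status queries for its old ID come back NotFound, which kubelet logs ("DeleteContainer returned error") but treats as success. The tolerant pattern looks roughly like this (the NotFound type and dict-backed runtime are hypothetical stand-ins; the real signal is a gRPC NotFound status):

class NotFound(Exception):
    pass

def container_status(runtime: dict, cid: str):
    if cid not in runtime:
        raise NotFound(
            f'an error occurred when try to find container "{cid}": not found')
    return runtime[cid]

def status_or_none(runtime: dict, cid: str):
    """Tolerate the post-deletion race: NotFound after RemoveContainer
    just means the removal already won, so report it and move on."""
    try:
        return container_status(runtime, cid)
    except NotFound as err:
        print(f"DeleteContainer returned error: {err}")
        return None

status_or_none({}, "8cdcf83b15e2")  # queried after the container was removed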
Feb 13 19:04:16.769369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316-shm.mount: Deactivated successfully. Feb 13 19:04:16.769622 systemd[1]: var-lib-kubelet-pods-68ab4ee2\x2d4ed0\x2d4fea\x2d84f5\x2d437f5293bfe6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dntpkf.mount: Deactivated successfully. Feb 13 19:04:16.770000 systemd[1]: var-lib-kubelet-pods-f29da52e\x2d11fb\x2d4b73\x2da8ba\x2d613c7ab48164-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsmhr.mount: Deactivated successfully. Feb 13 19:04:16.770680 systemd[1]: var-lib-kubelet-pods-68ab4ee2\x2d4ed0\x2d4fea\x2d84f5\x2d437f5293bfe6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:04:16.771092 systemd[1]: var-lib-kubelet-pods-68ab4ee2\x2d4ed0\x2d4fea\x2d84f5\x2d437f5293bfe6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:04:16.771926 kubelet[3181]: I0213 19:04:16.771827 3181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" path="/var/lib/kubelet/pods/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6/volumes" Feb 13 19:04:16.775817 kubelet[3181]: I0213 19:04:16.775649 3181 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29da52e-11fb-4b73-a8ba-613c7ab48164" path="/var/lib/kubelet/pods/f29da52e-11fb-4b73-a8ba-613c7ab48164/volumes" Feb 13 19:04:17.680957 sshd[4885]: Connection closed by 147.75.109.163 port 52304 Feb 13 19:04:17.681809 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:17.688326 systemd[1]: sshd@24-172.31.22.173:22-147.75.109.163:52304.service: Deactivated successfully. Feb 13 19:04:17.693676 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:04:17.694525 systemd[1]: session-25.scope: Consumed 2.227s CPU time. Feb 13 19:04:17.696311 systemd-logind[1923]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:04:17.698555 systemd-logind[1923]: Removed session 25. Feb 13 19:04:17.722401 systemd[1]: Started sshd@25-172.31.22.173:22-147.75.109.163:52318.service - OpenSSH per-connection server daemon (147.75.109.163:52318). Feb 13 19:04:17.896849 ntpd[1915]: Deleting interface #12 lxc_health, fe80::241c:7eff:fef3:2996%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Feb 13 19:04:17.897378 ntpd[1915]: 13 Feb 19:04:17 ntpd[1915]: Deleting interface #12 lxc_health, fe80::241c:7eff:fef3:2996%8#123, interface stats: received=0, sent=0, dropped=0, active_time=84 secs Feb 13 19:04:17.920804 sshd[5047]: Accepted publickey for core from 147.75.109.163 port 52318 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:17.923385 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:17.930728 systemd-logind[1923]: New session 26 of user core. Feb 13 19:04:17.944143 systemd[1]: Started session-26.scope - Session 26 of User core. 
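The \x2d and \x7e sequences in the mount unit names above are systemd's path escaping: the leading "/" is dropped, "/" becomes "-", and characters outside [A-Za-z0-9_.] are hex-escaped, so the "-" in a pod UID turns into \x2d and the "~" in kubernetes.io~projected into \x7e. An approximation of `systemd-escape --path` (ignoring corner cases such as a leading dot):

def systemd_escape_path(path: str) -> str:
    """Drop the leading '/', map '/' to '-', and hex-escape anything
    outside [A-Za-z0-9_.], as in the mount units above."""
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "._":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

p = ("/var/lib/kubelet/pods/68ab4ee2-4ed0-4fea-84f5-437f5293bfe6"
     "/volumes/kubernetes.io~projected/kube-api-access-ntpkf")
print(systemd_escape_path(p) + ".mount")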
Feb 13 19:04:19.955263 kubelet[3181]: E0213 19:04:19.955195 3181 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:04:20.149919 sshd[5049]: Connection closed by 147.75.109.163 port 52318 Feb 13 19:04:20.150706 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:20.158298 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:04:20.158602 systemd[1]: session-26.scope: Consumed 1.998s CPU time. Feb 13 19:04:20.160097 systemd[1]: sshd@25-172.31.22.173:22-147.75.109.163:52318.service: Deactivated successfully. Feb 13 19:04:20.170359 systemd-logind[1923]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:04:20.203023 systemd[1]: Started sshd@26-172.31.22.173:22-147.75.109.163:59384.service - OpenSSH per-connection server daemon (147.75.109.163:59384). Feb 13 19:04:20.206079 systemd-logind[1923]: Removed session 26. Feb 13 19:04:20.235104 kubelet[3181]: E0213 19:04:20.235035 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="apply-sysctl-overwrites" Feb 13 19:04:20.235104 kubelet[3181]: E0213 19:04:20.235113 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f29da52e-11fb-4b73-a8ba-613c7ab48164" containerName="cilium-operator" Feb 13 19:04:20.235317 kubelet[3181]: E0213 19:04:20.235133 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="mount-cgroup" Feb 13 19:04:20.235317 kubelet[3181]: E0213 19:04:20.235150 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="mount-bpf-fs" Feb 13 19:04:20.235317 kubelet[3181]: E0213 19:04:20.235168 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="clean-cilium-state" Feb 13 19:04:20.235317 kubelet[3181]: E0213 19:04:20.235210 3181 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="cilium-agent" Feb 13 19:04:20.236830 kubelet[3181]: I0213 19:04:20.235605 3181 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ab4ee2-4ed0-4fea-84f5-437f5293bfe6" containerName="cilium-agent" Feb 13 19:04:20.236830 kubelet[3181]: I0213 19:04:20.235646 3181 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29da52e-11fb-4b73-a8ba-613c7ab48164" containerName="cilium-operator" Feb 13 19:04:20.258025 systemd[1]: Created slice kubepods-burstable-podc437eeba_9dd6_4a3b_97e7_47c6fa0edf54.slice - libcontainer container kubepods-burstable-podc437eeba_9dd6_4a3b_97e7_47c6fa0edf54.slice. 
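The "Created slice kubepods-burstable-pod..." entry shows kubelet's systemd cgroup naming convention: QoS class plus pod UID, with the UID's dashes rewritten to underscores because "-" is systemd's slice hierarchy separator. An illustrative reconstruction of that name; kubelet's real implementation lives in its container manager, this just mirrors the convention visible in the log:

```go
// Rebuild the systemd slice name kubelet used above for a burstable pod.
// Dashes in the pod UID become underscores so the UID survives systemd's
// use of "-" as a hierarchy separator.
package main

import (
	"fmt"
	"strings"
)

func burstablePodSlice(podUID string) string {
	return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(burstablePodSlice("c437eeba-9dd6-4a3b-97e7-47c6fa0edf54"))
	// kubepods-burstable-podc437eeba_9dd6_4a3b_97e7_47c6fa0edf54.slice
}
```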
Feb 13 19:04:20.283902 kubelet[3181]: I0213 19:04:20.282266 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-cni-path\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.283902 kubelet[3181]: I0213 19:04:20.282384 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-cilium-config-path\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.283902 kubelet[3181]: I0213 19:04:20.282606 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-cilium-ipsec-secrets\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.283902 kubelet[3181]: I0213 19:04:20.283083 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8dwb\" (UniqueName: \"kubernetes.io/projected/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-kube-api-access-q8dwb\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.283902 kubelet[3181]: I0213 19:04:20.283390 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-etc-cni-netd\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.284245 kubelet[3181]: I0213 19:04:20.283656 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-clustermesh-secrets\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.284245 kubelet[3181]: I0213 19:04:20.283826 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-hostproc\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.284245 kubelet[3181]: I0213 19:04:20.284018 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-host-proc-sys-kernel\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.284404 kubelet[3181]: I0213 19:04:20.284301 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-cilium-run\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.284551 kubelet[3181]: I0213 19:04:20.284508 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-bpf-maps\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl"
Feb 13 19:04:20.284959 kubelet[3181]: I0213 19:04:20.284921 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-hubble-tls\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.286454 kubelet[3181]: I0213 19:04:20.285279 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-cilium-cgroup\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.286863 kubelet[3181]: I0213 19:04:20.285505 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-lib-modules\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.287087 kubelet[3181]: I0213 19:04:20.286957 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-xtables-lock\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.287087 kubelet[3181]: I0213 19:04:20.287037 3181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c437eeba-9dd6-4a3b-97e7-47c6fa0edf54-host-proc-sys-net\") pod \"cilium-mxfpl\" (UID: \"c437eeba-9dd6-4a3b-97e7-47c6fa0edf54\") " pod="kube-system/cilium-mxfpl" Feb 13 19:04:20.485774 sshd[5058]: Accepted publickey for core from 147.75.109.163 port 59384 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:20.492302 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:20.509202 systemd-logind[1923]: New session 27 of user core. Feb 13 19:04:20.521740 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:04:20.566816 containerd[1945]: time="2025-02-13T19:04:20.566758694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxfpl,Uid:c437eeba-9dd6-4a3b-97e7-47c6fa0edf54,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:20.610228 containerd[1945]: time="2025-02-13T19:04:20.609742082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:20.610228 containerd[1945]: time="2025-02-13T19:04:20.609849902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:20.610423 containerd[1945]: time="2025-02-13T19:04:20.609933602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:20.611495 containerd[1945]: time="2025-02-13T19:04:20.611231066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:20.650289 sshd[5064]: Connection closed by 147.75.109.163 port 59384 Feb 13 19:04:20.650950 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:20.652951 systemd[1]: Started cri-containerd-dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d.scope - libcontainer container dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d. Feb 13 19:04:20.659579 systemd[1]: sshd@26-172.31.22.173:22-147.75.109.163:59384.service: Deactivated successfully. Feb 13 19:04:20.668175 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:04:20.671862 systemd-logind[1923]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:04:20.691424 systemd[1]: Started sshd@27-172.31.22.173:22-147.75.109.163:59398.service - OpenSSH per-connection server daemon (147.75.109.163:59398). Feb 13 19:04:20.696052 systemd-logind[1923]: Removed session 27. Feb 13 19:04:20.739801 containerd[1945]: time="2025-02-13T19:04:20.739161303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxfpl,Uid:c437eeba-9dd6-4a3b-97e7-47c6fa0edf54,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\"" Feb 13 19:04:20.749544 containerd[1945]: time="2025-02-13T19:04:20.749484459Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:04:20.767971 kubelet[3181]: E0213 19:04:20.767590 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mnhqh" podUID="6be37e89-39a2-4f4e-b98d-70b0ba7c60fb" Feb 13 19:04:20.778370 containerd[1945]: time="2025-02-13T19:04:20.778258779Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59\"" Feb 13 19:04:20.780528 containerd[1945]: time="2025-02-13T19:04:20.779119671Z" level=info msg="StartContainer for \"97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59\"" Feb 13 19:04:20.830223 systemd[1]: Started cri-containerd-97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59.scope - libcontainer container 97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59. Feb 13 19:04:20.879927 containerd[1945]: time="2025-02-13T19:04:20.879086163Z" level=info msg="StartContainer for \"97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59\" returns successfully" Feb 13 19:04:20.894986 systemd[1]: cri-containerd-97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59.scope: Deactivated successfully. Feb 13 19:04:20.899898 sshd[5104]: Accepted publickey for core from 147.75.109.163 port 59398 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:20.904153 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:20.917958 systemd-logind[1923]: New session 28 of user core. Feb 13 19:04:20.922172 systemd[1]: Started session-28.scope - Session 28 of User core.
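The VerifyControllerAttachedVolume entries above enumerate every volume of the new cilium-mxfpl pod before its sandbox is created: mostly hostPath mounts into the node, plus configmap, secret, and projected volumes. A sketch of how the hostPath ones are commonly declared with the Kubernetes API types; the host paths here are assumptions inferred from the volume names, since the log records only names and the pod UID:

```go
// Construct hostPath volume definitions matching the names in the log above.
// The paths are typical Cilium defaults, assumed for illustration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	for _, v := range []corev1.Volume{
		hostPathVolume("cni-path", "/opt/cni/bin"),          // assumed path
		hostPathVolume("etc-cni-netd", "/etc/cni/net.d"),    // assumed path
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),           // assumed path
		hostPathVolume("hostproc", "/proc"),                 // assumed path
		hostPathVolume("lib-modules", "/lib/modules"),       // assumed path
		hostPathVolume("xtables-lock", "/run/xtables.lock"), // assumed path
	} {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```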
Feb 13 19:04:20.960050 containerd[1945]: time="2025-02-13T19:04:20.959975908Z" level=info msg="shim disconnected" id=97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59 namespace=k8s.io Feb 13 19:04:20.960445 containerd[1945]: time="2025-02-13T19:04:20.960412276Z" level=warning msg="cleaning up after shim disconnected" id=97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59 namespace=k8s.io Feb 13 19:04:20.960574 containerd[1945]: time="2025-02-13T19:04:20.960547240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:20.980165 containerd[1945]: time="2025-02-13T19:04:20.980079808Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:04:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:04:21.240156 containerd[1945]: time="2025-02-13T19:04:21.239978425Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:04:21.267040 containerd[1945]: time="2025-02-13T19:04:21.266979325Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751\"" Feb 13 19:04:21.268141 containerd[1945]: time="2025-02-13T19:04:21.268082905Z" level=info msg="StartContainer for \"eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751\"" Feb 13 19:04:21.318180 systemd[1]: Started cri-containerd-eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751.scope - libcontainer container eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751. Feb 13 19:04:21.364569 containerd[1945]: time="2025-02-13T19:04:21.364488434Z" level=info msg="StartContainer for \"eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751\" returns successfully" Feb 13 19:04:21.377481 systemd[1]: cri-containerd-eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751.scope: Deactivated successfully. Feb 13 19:04:21.430508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751-rootfs.mount: Deactivated successfully. 
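The "shim disconnected" sequence above is routine for run-once containers like mount-cgroup: the process exits, systemd deactivates its scope, and containerd reaps the shim, sometimes logging the harmless "failed to remove runc container" cleanup warning seen here. A sketch of the equivalent teardown through the containerd Go client, assuming the default socket path (kubelet drives this via the CRI rather than this client; the container ID is taken from the log):

```go
// Load an exited CRI-managed container directly from containerd and delete
// its task, which is the phase the shim-disconnect messages above record.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace, which is the
	// namespace=k8s.io field repeated in the log entries.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	c, err := client.LoadContainer(ctx, "97308979b927270e521ec9cd5f561b20bb8bd4f536f86d5423d28629256b6c59")
	if err != nil {
		panic(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		panic(err)
	}
	// Deleting the task reaps the exited shim; the cleanup warnings above
	// come from this phase when runc has already removed part of the state.
	exitStatus, err := task.Delete(ctx)
	fmt.Println(exitStatus, err)
}
```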
Feb 13 19:04:21.439761 containerd[1945]: time="2025-02-13T19:04:21.439645778Z" level=info msg="shim disconnected" id=eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751 namespace=k8s.io Feb 13 19:04:21.440262 containerd[1945]: time="2025-02-13T19:04:21.439934666Z" level=warning msg="cleaning up after shim disconnected" id=eb8e9d12e9b449b37fe2628bcb5a543510ad4be6756d9fadd491aa2fbccde751 namespace=k8s.io Feb 13 19:04:21.440262 containerd[1945]: time="2025-02-13T19:04:21.439958174Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:22.245804 containerd[1945]: time="2025-02-13T19:04:22.245673050Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:04:22.278691 containerd[1945]: time="2025-02-13T19:04:22.278570930Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846\"" Feb 13 19:04:22.280965 containerd[1945]: time="2025-02-13T19:04:22.280506566Z" level=info msg="StartContainer for \"1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846\"" Feb 13 19:04:22.340211 systemd[1]: Started cri-containerd-1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846.scope - libcontainer container 1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846. Feb 13 19:04:22.400850 containerd[1945]: time="2025-02-13T19:04:22.400264755Z" level=info msg="StartContainer for \"1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846\" returns successfully" Feb 13 19:04:22.403368 systemd[1]: cri-containerd-1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846.scope: Deactivated successfully. Feb 13 19:04:22.453314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:22.457116 containerd[1945]: time="2025-02-13T19:04:22.457023111Z" level=info msg="shim disconnected" id=1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846 namespace=k8s.io Feb 13 19:04:22.457116 containerd[1945]: time="2025-02-13T19:04:22.457106847Z" level=warning msg="cleaning up after shim disconnected" id=1cfbc01fe7c3bc94c02ba224f0402f26044ac55733a944f49acad1745e938846 namespace=k8s.io Feb 13 19:04:22.457354 containerd[1945]: time="2025-02-13T19:04:22.457128099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:22.769577 kubelet[3181]: E0213 19:04:22.767797 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mnhqh" podUID="6be37e89-39a2-4f4e-b98d-70b0ba7c60fb" Feb 13 19:04:23.253281 containerd[1945]: time="2025-02-13T19:04:23.253206207Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:04:23.282055 containerd[1945]: time="2025-02-13T19:04:23.280616847Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4\"" Feb 13 19:04:23.283325 containerd[1945]: time="2025-02-13T19:04:23.283098771Z" level=info msg="StartContainer for \"fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4\"" Feb 13 19:04:23.345232 systemd[1]: Started cri-containerd-fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4.scope - libcontainer container fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4. Feb 13 19:04:23.389630 systemd[1]: cri-containerd-fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4.scope: Deactivated successfully. Feb 13 19:04:23.395344 containerd[1945]: time="2025-02-13T19:04:23.395269348Z" level=info msg="StartContainer for \"fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4\" returns successfully" Feb 13 19:04:23.444399 containerd[1945]: time="2025-02-13T19:04:23.444098812Z" level=info msg="shim disconnected" id=fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4 namespace=k8s.io Feb 13 19:04:23.444399 containerd[1945]: time="2025-02-13T19:04:23.444231460Z" level=warning msg="cleaning up after shim disconnected" id=fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4 namespace=k8s.io Feb 13 19:04:23.444399 containerd[1945]: time="2025-02-13T19:04:23.444254140Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:23.450115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb0f5e07be473d39a7ac2c9095690f9ad35cc170bef2a56ecd54a0f2f4da51f4-rootfs.mount: Deactivated successfully. 
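The repeating "network is not ready ... cni plugin not initialized" errors for coredns are expected in this window: the old Cilium agent was removed and its replacement has not yet written a CNI configuration, so the runtime keeps reporting NetworkReady=false. Kubelet reads that flag from the runtime's Status response; a minimal sketch of the same query over CRI, with client wiring as in the ContainerStatus sketch earlier (package name illustrative):

```go
// Read the runtime's NetworkReady condition, the source of the
// "NetworkReady=false reason:NetworkPluginNotReady" errors above.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func networkReady(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (bool, string, error) {
	resp, err := rt.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		return false, "", err
	}
	for _, cond := range resp.Status.Conditions {
		if cond.Type == "NetworkReady" {
			// Reason is "NetworkPluginNotReady" while the CNI config is absent.
			return cond.Status, cond.Reason, nil
		}
	}
	return false, "NetworkReady condition not reported", nil
}
```

Once the new cilium-agent installs its CNI config, this condition should flip to true and the pending coredns pod sync succeeds, which matches the errors stopping later in the log.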
Feb 13 19:04:23.474517 containerd[1945]: time="2025-02-13T19:04:23.474345904Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:04:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:04:24.259345 containerd[1945]: time="2025-02-13T19:04:24.257364004Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:04:24.294243 containerd[1945]: time="2025-02-13T19:04:24.294146812Z" level=info msg="CreateContainer within sandbox \"dd249bfb2cacdcd8fb81bd548fa1c0bb5c0546e8ee4eeccbafe818755f0d709d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"158740169906c55a881a4c3f0dd53d2212297cab498839a05b17da066477bf1f\"" Feb 13 19:04:24.298866 containerd[1945]: time="2025-02-13T19:04:24.297383104Z" level=info msg="StartContainer for \"158740169906c55a881a4c3f0dd53d2212297cab498839a05b17da066477bf1f\"" Feb 13 19:04:24.351269 systemd[1]: Started cri-containerd-158740169906c55a881a4c3f0dd53d2212297cab498839a05b17da066477bf1f.scope - libcontainer container 158740169906c55a881a4c3f0dd53d2212297cab498839a05b17da066477bf1f. Feb 13 19:04:24.404549 containerd[1945]: time="2025-02-13T19:04:24.404478821Z" level=info msg="StartContainer for \"158740169906c55a881a4c3f0dd53d2212297cab498839a05b17da066477bf1f\" returns successfully" Feb 13 19:04:24.688008 containerd[1945]: time="2025-02-13T19:04:24.687912294Z" level=info msg="StopPodSandbox for \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\"" Feb 13 19:04:24.688173 containerd[1945]: time="2025-02-13T19:04:24.688146126Z" level=info msg="TearDown network for sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" successfully" Feb 13 19:04:24.688432 containerd[1945]: time="2025-02-13T19:04:24.688171506Z" level=info msg="StopPodSandbox for \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" returns successfully" Feb 13 19:04:24.689424 containerd[1945]: time="2025-02-13T19:04:24.689361330Z" level=info msg="RemovePodSandbox for \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\"" Feb 13 19:04:24.689771 containerd[1945]: time="2025-02-13T19:04:24.689420022Z" level=info msg="Forcibly stopping sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\"" Feb 13 19:04:24.689937 containerd[1945]: time="2025-02-13T19:04:24.689891634Z" level=info msg="TearDown network for sandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" successfully" Feb 13 19:04:24.697315 containerd[1945]: time="2025-02-13T19:04:24.697165314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:04:24.697507 containerd[1945]: time="2025-02-13T19:04:24.697338174Z" level=info msg="RemovePodSandbox \"f2667820856f2353ae0e3b3ab275e0ecff12231fa0b40cf15c804772b2ce3316\" returns successfully" Feb 13 19:04:24.698805 containerd[1945]: time="2025-02-13T19:04:24.698747874Z" level=info msg="StopPodSandbox for \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\"" Feb 13 19:04:24.699075 containerd[1945]: time="2025-02-13T19:04:24.699015618Z" level=info msg="TearDown network for sandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" successfully" Feb 13 19:04:24.699152 containerd[1945]: time="2025-02-13T19:04:24.699069630Z" level=info msg="StopPodSandbox for \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" returns successfully" Feb 13 19:04:24.699846 containerd[1945]: time="2025-02-13T19:04:24.699795558Z" level=info msg="RemovePodSandbox for \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\"" Feb 13 19:04:24.699995 containerd[1945]: time="2025-02-13T19:04:24.699849162Z" level=info msg="Forcibly stopping sandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\"" Feb 13 19:04:24.699995 containerd[1945]: time="2025-02-13T19:04:24.699972186Z" level=info msg="TearDown network for sandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" successfully" Feb 13 19:04:24.706583 containerd[1945]: time="2025-02-13T19:04:24.706462986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:04:24.706736 containerd[1945]: time="2025-02-13T19:04:24.706607478Z" level=info msg="RemovePodSandbox \"22ba0bf0394a4a2863d3d4b146bd2c5d5efde8950931ae0e2e9c6657d655bfc7\" returns successfully" Feb 13 19:04:24.770017 kubelet[3181]: E0213 19:04:24.769474 3181 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mnhqh" podUID="6be37e89-39a2-4f4e-b98d-70b0ba7c60fb" Feb 13 19:04:25.273574 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 19:04:27.506437 kubelet[3181]: E0213 19:04:27.506355 3181 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42400->127.0.0.1:43497: write tcp 127.0.0.1:42400->127.0.0.1:43497: write: broken pipe Feb 13 19:04:29.450638 systemd-networkd[1837]: lxc_health: Link UP Feb 13 19:04:29.463706 (udev-worker)[5889]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:04:29.465358 systemd-networkd[1837]: lxc_health: Gained carrier Feb 13 19:04:30.633048 kubelet[3181]: I0213 19:04:30.632959 3181 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mxfpl" podStartSLOduration=10.632936112 podStartE2EDuration="10.632936112s" podCreationTimestamp="2025-02-13 19:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:25.311540537 +0000 UTC m=+120.829851001" watchObservedRunningTime="2025-02-13 19:04:30.632936112 +0000 UTC m=+126.151246564" Feb 13 19:04:31.385132 systemd-networkd[1837]: lxc_health: Gained IPv6LL Feb 13 19:04:33.896965 ntpd[1915]: Listen normally on 15 lxc_health [fe80::c49d:58ff:fec9:a53e%14]:123 Feb 13 19:04:33.897572 ntpd[1915]: 13 Feb 19:04:33 ntpd[1915]: Listen normally on 15 lxc_health [fe80::c49d:58ff:fec9:a53e%14]:123 Feb 13 19:04:36.731387 sshd[5161]: Connection closed by 147.75.109.163 port 59398 Feb 13 19:04:36.732005 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:36.742565 systemd[1]: sshd@27-172.31.22.173:22-147.75.109.163:59398.service: Deactivated successfully. Feb 13 19:04:36.751775 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:04:36.757788 systemd-logind[1923]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:04:36.760499 systemd-logind[1923]: Removed session 28. Feb 13 19:04:50.114430 systemd[1]: cri-containerd-9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f.scope: Deactivated successfully. Feb 13 19:04:50.114945 systemd[1]: cri-containerd-9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f.scope: Consumed 5.706s CPU time, 19.7M memory peak, 0B memory swap peak. Feb 13 19:04:50.155067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:50.180428 containerd[1945]: time="2025-02-13T19:04:50.180129929Z" level=info msg="shim disconnected" id=9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f namespace=k8s.io Feb 13 19:04:50.180428 containerd[1945]: time="2025-02-13T19:04:50.180233981Z" level=warning msg="cleaning up after shim disconnected" id=9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f namespace=k8s.io Feb 13 19:04:50.180428 containerd[1945]: time="2025-02-13T19:04:50.180254945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:50.342707 kubelet[3181]: I0213 19:04:50.342605 3181 scope.go:117] "RemoveContainer" containerID="9bcf051ade2f1447f87d5a0ff362bcb6067cbd28c8e90803aa00dda0e0f2545f" Feb 13 19:04:50.346361 containerd[1945]: time="2025-02-13T19:04:50.346312146Z" level=info msg="CreateContainer within sandbox \"fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:04:50.372493 containerd[1945]: time="2025-02-13T19:04:50.372118038Z" level=info msg="CreateContainer within sandbox \"fe4fdca819cc47d6b155185a4c9a61dc3cb11c588322dc5d3862d9168be08b96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0a57c716ce5f6bbe07679c104c3a9f3c903dc763672b2a199e9c7010a73f13f4\"" Feb 13 19:04:50.373366 containerd[1945]: time="2025-02-13T19:04:50.372821922Z" level=info msg="StartContainer for \"0a57c716ce5f6bbe07679c104c3a9f3c903dc763672b2a199e9c7010a73f13f4\"" Feb 13 19:04:50.426190 systemd[1]: Started cri-containerd-0a57c716ce5f6bbe07679c104c3a9f3c903dc763672b2a199e9c7010a73f13f4.scope - libcontainer container 0a57c716ce5f6bbe07679c104c3a9f3c903dc763672b2a199e9c7010a73f13f4. Feb 13 19:04:50.499586 containerd[1945]: time="2025-02-13T19:04:50.499403358Z" level=info msg="StartContainer for \"0a57c716ce5f6bbe07679c104c3a9f3c903dc763672b2a199e9c7010a73f13f4\" returns successfully" Feb 13 19:04:56.948457 systemd[1]: cri-containerd-c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0.scope: Deactivated successfully. Feb 13 19:04:56.949725 systemd[1]: cri-containerd-c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0.scope: Consumed 3.435s CPU time, 16.3M memory peak, 0B memory swap peak. Feb 13 19:04:56.985300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0-rootfs.mount: Deactivated successfully. 
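Note the Attempt:1 in the CreateContainer entry above: the CRI keys containers by sandbox, name, and attempt, so after the kube-controller-manager container died, its replacement is created in the same sandbox with the attempt counter bumped from 0 to 1. A sketch of the create-then-start pair kubelet issues; only the name and attempt are taken from the log, image and config fields are placeholders, and the package and helper names are illustrative:

```go
// Recreate a named container inside an existing sandbox with a bumped
// attempt counter, then start it, mirroring the restart in the log above.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func restartContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID, name, image string, attempt uint32,
	sandboxConfig *runtimeapi.PodSandboxConfig) (string, error) {

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			// For the restart above: name "kube-controller-manager", attempt 1.
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: attempt},
			Image:    &runtimeapi.ImageSpec{Image: image},
		},
		SandboxConfig: sandboxConfig,
	})
	if err != nil {
		return "", err
	}
	// This corresponds to the "StartContainer ... returns successfully" entry.
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```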
Feb 13 19:04:57.000017 containerd[1945]: time="2025-02-13T19:04:56.999856467Z" level=info msg="shim disconnected" id=c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0 namespace=k8s.io Feb 13 19:04:57.000017 containerd[1945]: time="2025-02-13T19:04:56.999975267Z" level=warning msg="cleaning up after shim disconnected" id=c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0 namespace=k8s.io Feb 13 19:04:57.000017 containerd[1945]: time="2025-02-13T19:04:56.999995787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:57.366060 kubelet[3181]: I0213 19:04:57.366016 3181 scope.go:117] "RemoveContainer" containerID="c51faad6a7fae1e5ca6b965837256395c7295b8e7644a4a3a80fd05ac12c64b0" Feb 13 19:04:57.369590 containerd[1945]: time="2025-02-13T19:04:57.369219481Z" level=info msg="CreateContainer within sandbox \"2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:04:57.399624 containerd[1945]: time="2025-02-13T19:04:57.399441097Z" level=info msg="CreateContainer within sandbox \"2f7e2177fa57eb34c3bd9faf3e8ca2343fc210441be2f4b389fea58edfa65ffc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4081a30ba1eef37a6d57e4ed991f1861cffa056acb248afb125d516def626cba\"" Feb 13 19:04:57.400445 containerd[1945]: time="2025-02-13T19:04:57.400405321Z" level=info msg="StartContainer for \"4081a30ba1eef37a6d57e4ed991f1861cffa056acb248afb125d516def626cba\"" Feb 13 19:04:57.455205 systemd[1]: Started cri-containerd-4081a30ba1eef37a6d57e4ed991f1861cffa056acb248afb125d516def626cba.scope - libcontainer container 4081a30ba1eef37a6d57e4ed991f1861cffa056acb248afb125d516def626cba. Feb 13 19:04:57.518061 containerd[1945]: time="2025-02-13T19:04:57.517768057Z" level=info msg="StartContainer for \"4081a30ba1eef37a6d57e4ed991f1861cffa056acb248afb125d516def626cba\" returns successfully" Feb 13 19:04:57.543183 kubelet[3181]: E0213 19:04:57.543117 3181 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": context deadline exceeded" Feb 13 19:05:07.544775 kubelet[3181]: E0213 19:05:07.544688 3181 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.173:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-173?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
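The two "Failed to update lease" errors at the end are kubelet's node heartbeat failing: each node renews a Lease object named after itself in the kube-node-lease namespace roughly every ten seconds, with the 10s timeout visible in the request URL. Timeouts here usually mean the API server is briefly unreachable, plausible given that both control-plane containers on this node were just restarted. A client-go sketch of that renewal; the in-cluster config and node name are illustrative assumptions:

```go
// Renew this node's Lease in kube-node-lease, the PUT that times out in the
// log entries above.
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 10s budget as the "?timeout=10s" in the failing requests.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(ctx, "ip-172-31-22-173", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	// The Update is the heartbeat; if it keeps failing past the node's
	// grace period, the node is eventually marked NotReady.
	if _, err := cs.CoordinationV1().Leases("kube-node-lease").
		Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```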