Feb 13 18:52:34.227908 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 18:52:34.227961 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025 Feb 13 18:52:34.228010 kernel: KASLR disabled due to lack of seed Feb 13 18:52:34.229133 kernel: efi: EFI v2.7 by EDK II Feb 13 18:52:34.229177 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Feb 13 18:52:34.229194 kernel: secureboot: Secure boot disabled Feb 13 18:52:34.229212 kernel: ACPI: Early table checksum verification disabled Feb 13 18:52:34.229229 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 18:52:34.229246 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 18:52:34.229262 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 18:52:34.229290 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 18:52:34.229306 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 18:52:34.229322 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 18:52:34.229338 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 18:52:34.229356 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 18:52:34.229379 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 18:52:34.229396 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 18:52:34.229412 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 18:52:34.229429 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 18:52:34.229446 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 18:52:34.229463 kernel: printk: bootconsole [uart0] enabled Feb 13 18:52:34.229479 kernel: NUMA: Failed to initialise from firmware Feb 13 18:52:34.229495 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 18:52:34.229513 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 18:52:34.229530 kernel: Zone ranges: Feb 13 18:52:34.229548 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 18:52:34.229570 kernel: DMA32 empty Feb 13 18:52:34.229587 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 18:52:34.229605 kernel: Movable zone start for each node Feb 13 18:52:34.229622 kernel: Early memory node ranges Feb 13 18:52:34.229638 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 18:52:34.229655 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 18:52:34.229671 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 18:52:34.229688 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 18:52:34.229705 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 18:52:34.229722 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 18:52:34.229739 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 18:52:34.229756 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 18:52:34.229777 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Feb 13 18:52:34.229795 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 18:52:34.229820 kernel: psci: probing for conduit method from ACPI. Feb 13 18:52:34.229838 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 18:52:34.229857 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 18:52:34.229879 kernel: psci: Trusted OS migration not required Feb 13 18:52:34.229897 kernel: psci: SMC Calling Convention v1.1 Feb 13 18:52:34.229915 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 18:52:34.229932 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 18:52:34.229951 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 18:52:34.229968 kernel: Detected PIPT I-cache on CPU0 Feb 13 18:52:34.229986 kernel: CPU features: detected: GIC system register CPU interface Feb 13 18:52:34.230003 kernel: CPU features: detected: Spectre-v2 Feb 13 18:52:34.230021 kernel: CPU features: detected: Spectre-v3a Feb 13 18:52:34.230140 kernel: CPU features: detected: Spectre-BHB Feb 13 18:52:34.230160 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 18:52:34.230178 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 18:52:34.230205 kernel: alternatives: applying boot alternatives Feb 13 18:52:34.230225 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:52:34.230245 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 18:52:34.230263 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 18:52:34.230281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 18:52:34.230302 kernel: Fallback order for Node 0: 0 Feb 13 18:52:34.230320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 18:52:34.230337 kernel: Policy zone: Normal Feb 13 18:52:34.230354 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 18:52:34.230371 kernel: software IO TLB: area num 2. Feb 13 18:52:34.230394 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 18:52:34.230413 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Feb 13 18:52:34.230430 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 18:52:34.230448 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 18:52:34.230466 kernel: rcu: RCU event tracing is enabled. Feb 13 18:52:34.230483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 18:52:34.230501 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 18:52:34.230518 kernel: Tracing variant of Tasks RCU enabled. Feb 13 18:52:34.230536 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 18:52:34.230554 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 18:52:34.230571 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 18:52:34.230593 kernel: GICv3: 96 SPIs implemented Feb 13 18:52:34.230611 kernel: GICv3: 0 Extended SPIs implemented Feb 13 18:52:34.230627 kernel: Root IRQ handler: gic_handle_irq Feb 13 18:52:34.230644 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 18:52:34.230661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 18:52:34.230679 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 18:52:34.230696 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 18:52:34.230713 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 18:52:34.230731 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 18:52:34.230748 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 18:52:34.230766 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 18:52:34.230783 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 18:52:34.230806 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 18:52:34.230824 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 18:52:34.230841 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 18:52:34.230858 kernel: Console: colour dummy device 80x25 Feb 13 18:52:34.230876 kernel: printk: console [tty1] enabled Feb 13 18:52:34.230894 kernel: ACPI: Core revision 20230628 Feb 13 18:52:34.230912 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 18:52:34.230930 kernel: pid_max: default: 32768 minimum: 301 Feb 13 18:52:34.230947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 18:52:34.230965 kernel: landlock: Up and running. Feb 13 18:52:34.230988 kernel: SELinux: Initializing. Feb 13 18:52:34.231006 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:52:34.232187 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 18:52:34.232278 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 18:52:34.232299 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 18:52:34.232319 kernel: rcu: Hierarchical SRCU implementation. Feb 13 18:52:34.232340 kernel: rcu: Max phase no-delay instances is 400. Feb 13 18:52:34.232359 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 18:52:34.232392 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 18:52:34.232411 kernel: Remapping and enabling EFI services. Feb 13 18:52:34.232429 kernel: smp: Bringing up secondary CPUs ... Feb 13 18:52:34.232446 kernel: Detected PIPT I-cache on CPU1 Feb 13 18:52:34.232464 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 18:52:34.232482 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 18:52:34.232501 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 18:52:34.232518 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 18:52:34.232536 kernel: SMP: Total of 2 processors activated. 
Feb 13 18:52:34.232555 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 18:52:34.232579 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 18:52:34.232598 kernel: CPU features: detected: CRC32 instructions Feb 13 18:52:34.232628 kernel: CPU: All CPU(s) started at EL1 Feb 13 18:52:34.232652 kernel: alternatives: applying system-wide alternatives Feb 13 18:52:34.232671 kernel: devtmpfs: initialized Feb 13 18:52:34.232690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 18:52:34.232709 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 18:52:34.232728 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 18:52:34.232746 kernel: SMBIOS 3.0.0 present. Feb 13 18:52:34.232770 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 18:52:34.232789 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 18:52:34.232808 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 18:52:34.232827 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 18:52:34.232847 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 18:52:34.232865 kernel: audit: initializing netlink subsys (disabled) Feb 13 18:52:34.232885 kernel: audit: type=2000 audit(0.222:1): state=initialized audit_enabled=0 res=1 Feb 13 18:52:34.232910 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 18:52:34.232930 kernel: cpuidle: using governor menu Feb 13 18:52:34.232948 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 18:52:34.232967 kernel: ASID allocator initialised with 65536 entries Feb 13 18:52:34.232985 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 18:52:34.233004 kernel: Serial: AMBA PL011 UART driver Feb 13 18:52:34.233022 kernel: Modules: 17360 pages in range for non-PLT usage Feb 13 18:52:34.233091 kernel: Modules: 508880 pages in range for PLT usage Feb 13 18:52:34.233144 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 18:52:34.233177 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 18:52:34.233197 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 18:52:34.233216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 18:52:34.233235 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 18:52:34.233253 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 18:52:34.233274 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 18:52:34.233292 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 18:52:34.233312 kernel: ACPI: Added _OSI(Module Device) Feb 13 18:52:34.233330 kernel: ACPI: Added _OSI(Processor Device) Feb 13 18:52:34.233355 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 18:52:34.233375 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 18:52:34.233394 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 18:52:34.233413 kernel: ACPI: Interpreter enabled Feb 13 18:52:34.233432 kernel: ACPI: Using GIC for interrupt routing Feb 13 18:52:34.233450 kernel: ACPI: MCFG table detected, 1 entries Feb 13 18:52:34.233470 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 18:52:34.233825 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 18:52:34.235387 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 18:52:34.235670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 18:52:34.235893 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 18:52:34.236168 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 18:52:34.236202 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 18:52:34.236221 kernel: acpiphp: Slot [1] registered Feb 13 18:52:34.236240 kernel: acpiphp: Slot [2] registered Feb 13 18:52:34.236259 kernel: acpiphp: Slot [3] registered Feb 13 18:52:34.236294 kernel: acpiphp: Slot [4] registered Feb 13 18:52:34.236314 kernel: acpiphp: Slot [5] registered Feb 13 18:52:34.236332 kernel: acpiphp: Slot [6] registered Feb 13 18:52:34.236350 kernel: acpiphp: Slot [7] registered Feb 13 18:52:34.236369 kernel: acpiphp: Slot [8] registered Feb 13 18:52:34.236387 kernel: acpiphp: Slot [9] registered Feb 13 18:52:34.236406 kernel: acpiphp: Slot [10] registered Feb 13 18:52:34.236425 kernel: acpiphp: Slot [11] registered Feb 13 18:52:34.236444 kernel: acpiphp: Slot [12] registered Feb 13 18:52:34.236463 kernel: acpiphp: Slot [13] registered Feb 13 18:52:34.236488 kernel: acpiphp: Slot [14] registered Feb 13 18:52:34.236507 kernel: acpiphp: Slot [15] registered Feb 13 18:52:34.236526 kernel: acpiphp: Slot [16] registered Feb 13 18:52:34.236549 kernel: acpiphp: Slot [17] registered Feb 13 18:52:34.236593 kernel: acpiphp: Slot [18] registered Feb 13 18:52:34.236658 kernel: acpiphp: Slot [19] registered Feb 13 18:52:34.236689 kernel: acpiphp: Slot [20] registered Feb 13 18:52:34.236709 kernel: acpiphp: Slot [21] registered Feb 13 18:52:34.236728 kernel: acpiphp: Slot [22] registered Feb 13 18:52:34.236755 kernel: acpiphp: Slot [23] registered Feb 13 18:52:34.236773 kernel: acpiphp: Slot [24] registered Feb 13 18:52:34.236792 kernel: acpiphp: Slot [25] registered Feb 13 18:52:34.236811 kernel: acpiphp: Slot [26] registered Feb 13 18:52:34.236830 kernel: acpiphp: Slot [27] registered Feb 13 18:52:34.236848 kernel: acpiphp: Slot [28] registered Feb 13 18:52:34.236866 kernel: acpiphp: Slot [29] registered Feb 13 18:52:34.236884 kernel: acpiphp: Slot [30] registered Feb 13 18:52:34.236902 kernel: acpiphp: Slot [31] registered Feb 13 18:52:34.236920 kernel: PCI host bridge to bus 0000:00 Feb 13 18:52:34.239327 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 18:52:34.239549 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 18:52:34.239734 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 18:52:34.239934 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 18:52:34.240294 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 18:52:34.240571 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 18:52:34.240977 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 18:52:34.241371 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 18:52:34.241628 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 18:52:34.241863 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 18:52:34.243678 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 18:52:34.243952 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 18:52:34.244278 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 18:52:34.244553 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 18:52:34.244800 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 18:52:34.246472 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 18:52:34.246783 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 18:52:34.247155 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 18:52:34.247453 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 18:52:34.247763 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 18:52:34.250266 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 18:52:34.250547 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 18:52:34.250765 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 18:52:34.250796 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 18:52:34.250815 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 18:52:34.250835 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 18:52:34.250854 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 18:52:34.250875 kernel: iommu: Default domain type: Translated Feb 13 18:52:34.250909 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 18:52:34.250928 kernel: efivars: Registered efivars operations Feb 13 18:52:34.250947 kernel: vgaarb: loaded Feb 13 18:52:34.250966 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 18:52:34.250985 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 18:52:34.251004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 18:52:34.251053 kernel: pnp: PnP ACPI init Feb 13 18:52:34.251347 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 18:52:34.251398 kernel: pnp: PnP ACPI: found 1 devices Feb 13 18:52:34.251418 kernel: NET: Registered PF_INET protocol family Feb 13 18:52:34.251437 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 18:52:34.251457 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 18:52:34.251476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 18:52:34.251495 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 18:52:34.251514 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 18:52:34.251532 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 18:52:34.251552 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:52:34.251577 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 18:52:34.251596 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 18:52:34.251615 kernel: PCI: CLS 0 bytes, default 64 Feb 13 18:52:34.251635 kernel: kvm [1]: HYP mode not available Feb 13 18:52:34.251654 kernel: Initialise system trusted keyrings Feb 13 18:52:34.251674 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 18:52:34.251694 kernel: Key type asymmetric registered Feb 13 18:52:34.251716 kernel: Asymmetric key parser 'x509' registered Feb 13 18:52:34.251739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 18:52:34.251764 kernel: io scheduler mq-deadline registered Feb 13 
18:52:34.251783 kernel: io scheduler kyber registered Feb 13 18:52:34.251802 kernel: io scheduler bfq registered Feb 13 18:52:34.252514 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 18:52:34.252561 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 18:52:34.252581 kernel: ACPI: button: Power Button [PWRB] Feb 13 18:52:34.252600 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 18:52:34.252620 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 18:52:34.252650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 18:52:34.252670 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 18:52:34.252939 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 18:52:34.252978 kernel: printk: console [ttyS0] disabled Feb 13 18:52:34.252999 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 18:52:34.253019 kernel: printk: console [ttyS0] enabled Feb 13 18:52:34.253103 kernel: printk: bootconsole [uart0] disabled Feb 13 18:52:34.253124 kernel: thunder_xcv, ver 1.0 Feb 13 18:52:34.253143 kernel: thunder_bgx, ver 1.0 Feb 13 18:52:34.253162 kernel: nicpf, ver 1.0 Feb 13 18:52:34.253194 kernel: nicvf, ver 1.0 Feb 13 18:52:34.253511 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 18:52:34.253729 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:52:33 UTC (1739472753) Feb 13 18:52:34.253757 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 18:52:34.253777 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 18:52:34.253797 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 18:52:34.253815 kernel: watchdog: Hard watchdog permanently disabled Feb 13 18:52:34.253844 kernel: NET: Registered PF_INET6 protocol family Feb 13 18:52:34.253863 kernel: Segment Routing with IPv6 Feb 13 18:52:34.253881 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 18:52:34.253900 kernel: NET: Registered PF_PACKET protocol family Feb 13 18:52:34.253918 kernel: Key type dns_resolver registered Feb 13 18:52:34.253937 kernel: registered taskstats version 1 Feb 13 18:52:34.253956 kernel: Loading compiled-in X.509 certificates Feb 13 18:52:34.253975 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3' Feb 13 18:52:34.253994 kernel: Key type .fscrypt registered Feb 13 18:52:34.254013 kernel: Key type fscrypt-provisioning registered Feb 13 18:52:34.254194 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 18:52:34.254214 kernel: ima: Allocated hash algorithm: sha1 Feb 13 18:52:34.254233 kernel: ima: No architecture policies found Feb 13 18:52:34.254251 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 18:52:34.254271 kernel: clk: Disabling unused clocks Feb 13 18:52:34.254290 kernel: Freeing unused kernel memory: 39936K Feb 13 18:52:34.254309 kernel: Run /init as init process Feb 13 18:52:34.254328 kernel: with arguments: Feb 13 18:52:34.254346 kernel: /init Feb 13 18:52:34.254374 kernel: with environment: Feb 13 18:52:34.254393 kernel: HOME=/ Feb 13 18:52:34.254411 kernel: TERM=linux Feb 13 18:52:34.254429 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 18:52:34.254453 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:52:34.254477 systemd[1]: Detected virtualization amazon. Feb 13 18:52:34.254498 systemd[1]: Detected architecture arm64. Feb 13 18:52:34.254523 systemd[1]: Running in initrd. Feb 13 18:52:34.254543 systemd[1]: No hostname configured, using default hostname. Feb 13 18:52:34.254562 systemd[1]: Hostname set to . Feb 13 18:52:34.254583 systemd[1]: Initializing machine ID from VM UUID. Feb 13 18:52:34.254603 systemd[1]: Queued start job for default target initrd.target. Feb 13 18:52:34.254623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:52:34.254643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:52:34.254665 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 18:52:34.254690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:52:34.254711 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 18:52:34.254732 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 18:52:34.254755 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 18:52:34.254776 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 18:52:34.254796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:52:34.254816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:52:34.254840 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:52:34.254860 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:52:34.254880 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:52:34.254900 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:52:34.254920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:52:34.254941 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:52:34.254963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 18:52:34.254983 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 18:52:34.255002 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 18:52:34.255162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:52:34.255198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:52:34.255219 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:52:34.255240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 18:52:34.255261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 18:52:34.255282 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 18:52:34.255302 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 18:52:34.255322 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:52:34.255355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:52:34.255377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:52:34.255398 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 18:52:34.255419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:52:34.255504 systemd-journald[251]: Collecting audit messages is disabled. Feb 13 18:52:34.255561 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 18:52:34.255584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 18:52:34.255605 systemd-journald[251]: Journal started Feb 13 18:52:34.255659 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2dd273d4cc1ac6744b8eb90be7a160) is 8.0M, max 75.3M, 67.3M free. Feb 13 18:52:34.239153 systemd-modules-load[252]: Inserted module 'overlay' Feb 13 18:52:34.263093 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:52:34.274073 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 18:52:34.277537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:52:34.283350 kernel: Bridge firewalling registered Feb 13 18:52:34.277997 systemd-modules-load[252]: Inserted module 'br_netfilter' Feb 13 18:52:34.285842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:52:34.288728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 18:52:34.308753 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:52:34.315163 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:52:34.319371 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 18:52:34.330599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:52:34.357696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:52:34.381971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:52:34.388365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:52:34.414430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:52:34.422877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:52:34.441441 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Feb 13 18:52:34.471901 dracut-cmdline[290]: dracut-dracut-053 Feb 13 18:52:34.485221 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b Feb 13 18:52:34.494383 systemd-resolved[287]: Positive Trust Anchors: Feb 13 18:52:34.494407 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:52:34.494468 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:52:34.697109 kernel: SCSI subsystem initialized Feb 13 18:52:34.707070 kernel: Loading iSCSI transport class v2.0-870. Feb 13 18:52:34.719106 kernel: iscsi: registered transport (tcp) Feb 13 18:52:34.743064 kernel: iscsi: registered transport (qla4xxx) Feb 13 18:52:34.743147 kernel: QLogic iSCSI HBA Driver Feb 13 18:52:34.746096 kernel: random: crng init done Feb 13 18:52:34.746507 systemd-resolved[287]: Defaulting to hostname 'linux'. Feb 13 18:52:34.749794 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:52:34.752307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:52:34.841975 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 18:52:34.851340 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 18:52:34.898146 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 18:52:34.898231 kernel: device-mapper: uevent: version 1.0.3 Feb 13 18:52:34.900158 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 18:52:34.972189 kernel: raid6: neonx8 gen() 6486 MB/s Feb 13 18:52:34.989092 kernel: raid6: neonx4 gen() 6405 MB/s Feb 13 18:52:35.006088 kernel: raid6: neonx2 gen() 5349 MB/s Feb 13 18:52:35.023092 kernel: raid6: neonx1 gen() 3899 MB/s Feb 13 18:52:35.040091 kernel: raid6: int64x8 gen() 3606 MB/s Feb 13 18:52:35.057131 kernel: raid6: int64x4 gen() 3670 MB/s Feb 13 18:52:35.074118 kernel: raid6: int64x2 gen() 3561 MB/s Feb 13 18:52:35.091942 kernel: raid6: int64x1 gen() 2717 MB/s Feb 13 18:52:35.092055 kernel: raid6: using algorithm neonx8 gen() 6486 MB/s Feb 13 18:52:35.110019 kernel: raid6: .... 
xor() 4616 MB/s, rmw enabled Feb 13 18:52:35.110176 kernel: raid6: using neon recovery algorithm Feb 13 18:52:35.119515 kernel: xor: measuring software checksum speed Feb 13 18:52:35.119596 kernel: 8regs : 12768 MB/sec Feb 13 18:52:35.120674 kernel: 32regs : 12958 MB/sec Feb 13 18:52:35.121937 kernel: arm64_neon : 9469 MB/sec Feb 13 18:52:35.122002 kernel: xor: using function: 32regs (12958 MB/sec) Feb 13 18:52:35.211094 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 18:52:35.234391 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:52:35.245563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:52:35.288732 systemd-udevd[471]: Using default interface naming scheme 'v255'. Feb 13 18:52:35.297985 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:52:35.310344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 18:52:35.352712 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Feb 13 18:52:35.425158 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:52:35.440351 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:52:35.558307 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:52:35.572920 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 18:52:35.637332 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 18:52:35.638776 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:52:35.644417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:52:35.646937 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:52:35.672530 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 18:52:35.705895 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:52:35.791152 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 18:52:35.791220 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 18:52:35.823540 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 18:52:35.823839 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 18:52:35.824164 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a6:5e:e7:c4:c5 Feb 13 18:52:35.810668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:52:35.810911 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:52:35.814776 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:52:35.816993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:52:35.817312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:52:35.819630 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:52:35.822583 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:52:35.849732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 18:52:35.873486 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 18:52:35.873556 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 18:52:35.883282 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 18:52:35.885668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:52:35.895366 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 18:52:35.906244 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 18:52:35.906292 kernel: GPT:9289727 != 16777215 Feb 13 18:52:35.906318 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 18:52:35.906343 kernel: GPT:9289727 != 16777215 Feb 13 18:52:35.906367 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 18:52:35.906392 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 18:52:35.941402 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:52:36.006347 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (518) Feb 13 18:52:36.006510 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (545) Feb 13 18:52:36.063570 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 18:52:36.159787 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 18:52:36.176455 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 18:52:36.181979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 18:52:36.200932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 18:52:36.217725 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 18:52:36.230789 disk-uuid[662]: Primary Header is updated. Feb 13 18:52:36.230789 disk-uuid[662]: Secondary Entries is updated. Feb 13 18:52:36.230789 disk-uuid[662]: Secondary Header is updated. Feb 13 18:52:36.243090 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 18:52:37.261149 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 18:52:37.263898 disk-uuid[663]: The operation has completed successfully. Feb 13 18:52:37.483145 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 18:52:37.483793 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 18:52:37.547403 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 18:52:37.572230 sh[924]: Success Feb 13 18:52:37.600277 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 18:52:37.740953 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 18:52:37.748279 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 18:52:37.759293 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 18:52:37.809349 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8 Feb 13 18:52:37.809499 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:52:37.809553 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 18:52:37.812454 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 18:52:37.812550 kernel: BTRFS info (device dm-0): using free space tree Feb 13 18:52:37.841098 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 18:52:37.856210 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 18:52:37.860647 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 18:52:37.871349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 18:52:37.882303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 18:52:37.919340 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:52:37.920133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:52:37.920175 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 18:52:37.931708 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 18:52:37.945316 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 18:52:37.947450 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:52:37.958746 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 18:52:37.971424 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 18:52:38.125435 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:52:38.145711 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 18:52:38.180522 ignition[1035]: Ignition 2.20.0 Feb 13 18:52:38.180560 ignition[1035]: Stage: fetch-offline Feb 13 18:52:38.181471 ignition[1035]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:38.181497 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:38.181980 ignition[1035]: Ignition finished successfully Feb 13 18:52:38.193767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:52:38.209818 systemd-networkd[1123]: lo: Link UP Feb 13 18:52:38.209848 systemd-networkd[1123]: lo: Gained carrier Feb 13 18:52:38.214422 systemd-networkd[1123]: Enumeration completed Feb 13 18:52:38.216189 systemd-networkd[1123]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:52:38.216197 systemd-networkd[1123]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:52:38.216784 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:52:38.224340 systemd-networkd[1123]: eth0: Link UP Feb 13 18:52:38.224347 systemd-networkd[1123]: eth0: Gained carrier Feb 13 18:52:38.224366 systemd-networkd[1123]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:52:38.236289 systemd[1]: Reached target network.target - Network. 
Feb 13 18:52:38.244319 systemd-networkd[1123]: eth0: DHCPv4 address 172.31.27.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 18:52:38.249486 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 18:52:38.276600 ignition[1126]: Ignition 2.20.0 Feb 13 18:52:38.276630 ignition[1126]: Stage: fetch Feb 13 18:52:38.278286 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:38.278314 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:38.279408 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:38.292422 ignition[1126]: PUT result: OK Feb 13 18:52:38.295474 ignition[1126]: parsed url from cmdline: "" Feb 13 18:52:38.295497 ignition[1126]: no config URL provided Feb 13 18:52:38.295514 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 18:52:38.295542 ignition[1126]: no config at "/usr/lib/ignition/user.ign" Feb 13 18:52:38.295577 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:38.297341 ignition[1126]: PUT result: OK Feb 13 18:52:38.297425 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 18:52:38.299957 ignition[1126]: GET result: OK Feb 13 18:52:38.301706 ignition[1126]: parsing config with SHA512: da2a1667ae006c107023c1fe2c79f958eb820a2a0073e9960cace8965bfefd38ecae3ea435eab09bf1fb03975b183b21771ccd2814e55d0f47fa67c95d42c51c Feb 13 18:52:38.313619 unknown[1126]: fetched base config from "system" Feb 13 18:52:38.314155 ignition[1126]: fetch: fetch complete Feb 13 18:52:38.313656 unknown[1126]: fetched base config from "system" Feb 13 18:52:38.314168 ignition[1126]: fetch: fetch passed Feb 13 18:52:38.313671 unknown[1126]: fetched user config from "aws" Feb 13 18:52:38.314281 ignition[1126]: Ignition finished successfully Feb 13 18:52:38.326176 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 18:52:38.339646 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 18:52:38.366114 ignition[1134]: Ignition 2.20.0 Feb 13 18:52:38.366152 ignition[1134]: Stage: kargs Feb 13 18:52:38.367473 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:38.367505 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:38.367690 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:38.369631 ignition[1134]: PUT result: OK Feb 13 18:52:38.379184 ignition[1134]: kargs: kargs passed Feb 13 18:52:38.379316 ignition[1134]: Ignition finished successfully Feb 13 18:52:38.384753 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 18:52:38.397649 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 18:52:38.427151 ignition[1140]: Ignition 2.20.0 Feb 13 18:52:38.427181 ignition[1140]: Stage: disks Feb 13 18:52:38.428193 ignition[1140]: no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:38.428248 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:38.428525 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:38.430360 ignition[1140]: PUT result: OK Feb 13 18:52:38.441524 ignition[1140]: disks: disks passed Feb 13 18:52:38.441743 ignition[1140]: Ignition finished successfully Feb 13 18:52:38.446400 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 18:52:38.452362 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 18:52:38.454953 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 18:52:38.459233 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:52:38.461443 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 18:52:38.463409 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:52:38.481311 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 18:52:38.531143 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 18:52:38.540982 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 18:52:38.557320 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 18:52:38.638088 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none. Feb 13 18:52:38.639455 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 18:52:38.642292 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 18:52:38.660258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:52:38.670232 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 18:52:38.675605 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 18:52:38.675721 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 18:52:38.675779 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:52:38.702064 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1167) Feb 13 18:52:38.703508 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 18:52:38.710929 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:52:38.710970 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:52:38.710996 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 18:52:38.717162 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 18:52:38.720280 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 18:52:38.727136 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:52:38.811012 initrd-setup-root[1191]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 18:52:38.822622 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Feb 13 18:52:38.833207 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 18:52:38.841994 initrd-setup-root[1212]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 18:52:38.993182 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 18:52:39.003311 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 18:52:39.022540 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 18:52:39.037935 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 18:52:39.040287 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:52:39.081686 ignition[1281]: INFO : Ignition 2.20.0 Feb 13 18:52:39.082092 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 18:52:39.083006 ignition[1281]: INFO : Stage: mount Feb 13 18:52:39.084756 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:39.084756 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:39.084756 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:39.094306 ignition[1281]: INFO : PUT result: OK Feb 13 18:52:39.099383 ignition[1281]: INFO : mount: mount passed Feb 13 18:52:39.101323 ignition[1281]: INFO : Ignition finished successfully Feb 13 18:52:39.104067 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 18:52:39.123603 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 18:52:39.150227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 18:52:39.179055 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1294) Feb 13 18:52:39.183049 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0 Feb 13 18:52:39.183100 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 18:52:39.184219 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 18:52:39.190087 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 18:52:39.193972 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 18:52:39.234207 ignition[1311]: INFO : Ignition 2.20.0 Feb 13 18:52:39.234207 ignition[1311]: INFO : Stage: files Feb 13 18:52:39.234207 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:39.234207 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:39.234207 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:39.259293 ignition[1311]: INFO : PUT result: OK Feb 13 18:52:39.261231 ignition[1311]: DEBUG : files: compiled without relabeling support, skipping Feb 13 18:52:39.263427 ignition[1311]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 18:52:39.263427 ignition[1311]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 18:52:39.270722 ignition[1311]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 18:52:39.273858 ignition[1311]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 18:52:39.276796 unknown[1311]: wrote ssh authorized keys file for user: core Feb 13 18:52:39.280095 ignition[1311]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 18:52:39.283491 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 18:52:39.287378 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 18:52:39.798918 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 18:52:39.938013 systemd-networkd[1123]: eth0: Gained IPv6LL Feb 13 18:52:40.213672 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 18:52:40.219209 ignition[1311]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:52:40.219209 ignition[1311]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 18:52:40.219209 ignition[1311]: INFO : files: files passed Feb 13 18:52:40.219209 ignition[1311]: INFO : Ignition finished successfully Feb 13 18:52:40.220090 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 18:52:40.245601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 18:52:40.259475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 18:52:40.263331 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 18:52:40.263514 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 18:52:40.300499 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:52:40.300499 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:52:40.307792 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 18:52:40.313567 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:52:40.318622 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 18:52:40.330316 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 18:52:40.379975 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 18:52:40.380434 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 18:52:40.385508 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 18:52:40.388930 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 18:52:40.390918 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 18:52:40.409272 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 18:52:40.432861 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 18:52:40.445441 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 18:52:40.476445 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 18:52:40.479281 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:52:40.483537 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 18:52:40.485752 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 18:52:40.485984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 18:52:40.496047 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 18:52:40.498450 systemd[1]: Stopped target basic.target - Basic System. Feb 13 18:52:40.501753 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 18:52:40.504213 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 18:52:40.507814 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 18:52:40.510114 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 18:52:40.512224 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 18:52:40.514748 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 18:52:40.519221 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 18:52:40.519600 systemd[1]: Stopped target swap.target - Swaps. Feb 13 18:52:40.519835 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 18:52:40.520453 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 18:52:40.539704 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:52:40.542493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:52:40.548655 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 18:52:40.550875 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:52:40.553387 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 18:52:40.553635 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 18:52:40.562157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 18:52:40.562460 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 18:52:40.566921 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 18:52:40.567158 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 18:52:40.583585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 18:52:40.587441 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 18:52:40.587723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:52:40.609267 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 18:52:40.615326 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 18:52:40.617288 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Feb 13 18:52:40.628277 ignition[1363]: INFO : Ignition 2.20.0 Feb 13 18:52:40.628277 ignition[1363]: INFO : Stage: umount Feb 13 18:52:40.628277 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 18:52:40.634012 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 18:52:40.634012 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 18:52:40.634012 ignition[1363]: INFO : PUT result: OK Feb 13 18:52:40.642355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 18:52:40.642597 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 18:52:40.660948 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 18:52:40.663207 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 18:52:40.672773 ignition[1363]: INFO : umount: umount passed Feb 13 18:52:40.672773 ignition[1363]: INFO : Ignition finished successfully Feb 13 18:52:40.677942 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 18:52:40.678170 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 18:52:40.691731 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 18:52:40.692189 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 18:52:40.698930 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 18:52:40.701291 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 18:52:40.709873 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 18:52:40.709983 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 18:52:40.712602 systemd[1]: Stopped target network.target - Network. Feb 13 18:52:40.715618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 18:52:40.716302 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 18:52:40.719866 systemd[1]: Stopped target paths.target - Path Units. Feb 13 18:52:40.721942 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 18:52:40.727691 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:52:40.732966 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 18:52:40.735749 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 18:52:40.738019 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 18:52:40.738229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 18:52:40.753268 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 18:52:40.753392 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 18:52:40.755774 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 18:52:40.755990 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 18:52:40.758458 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 18:52:40.758615 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 18:52:40.768749 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 18:52:40.771315 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 18:52:40.775125 systemd-networkd[1123]: eth0: DHCPv6 lease lost Feb 13 18:52:40.781851 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 18:52:40.784526 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 18:52:40.785102 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 18:52:40.797682 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 18:52:40.800233 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 18:52:40.815404 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 18:52:40.815560 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:52:40.828723 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 18:52:40.834308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 18:52:40.834542 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 18:52:40.838252 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 18:52:40.838353 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:52:40.838716 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 18:52:40.838790 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 18:52:40.839011 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 18:52:40.839625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:52:40.875373 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:52:40.881832 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 18:52:40.882050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 18:52:40.901820 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 18:52:40.904278 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 18:52:40.910824 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 18:52:40.912762 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:52:40.921335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 18:52:40.921461 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 18:52:40.927661 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 18:52:40.927743 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:52:40.930762 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 18:52:40.932062 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 18:52:40.940951 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 18:52:40.941135 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 18:52:40.943704 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 18:52:40.943793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 18:52:40.959356 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 18:52:40.963936 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 18:52:40.964139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:52:40.967019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 18:52:40.967163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:52:40.970772 systemd[1]: network-cleanup.service: Deactivated successfully. 
Feb 13 18:52:40.971145 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 18:52:41.009767 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 18:52:41.010269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 18:52:41.018779 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 18:52:41.027369 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 18:52:41.047321 systemd[1]: Switching root. Feb 13 18:52:41.083977 systemd-journald[251]: Journal stopped Feb 13 18:52:42.997606 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 18:52:42.997778 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 18:52:42.997827 kernel: SELinux: policy capability open_perms=1 Feb 13 18:52:42.997860 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 18:52:42.997891 kernel: SELinux: policy capability always_check_network=0 Feb 13 18:52:42.997933 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 18:52:42.997968 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 18:52:42.998000 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 18:52:43.000662 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 18:52:43.000752 kernel: audit: type=1403 audit(1739472761.337:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 18:52:43.000804 systemd[1]: Successfully loaded SELinux policy in 48.438ms. Feb 13 18:52:43.000860 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.558ms. Feb 13 18:52:43.000891 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 18:52:43.004142 systemd[1]: Detected virtualization amazon. Feb 13 18:52:43.004203 systemd[1]: Detected architecture arm64. Feb 13 18:52:43.004236 systemd[1]: Detected first boot. Feb 13 18:52:43.004268 systemd[1]: Initializing machine ID from VM UUID. Feb 13 18:52:43.004302 zram_generator::config[1407]: No configuration found. Feb 13 18:52:43.004352 systemd[1]: Populated /etc with preset unit settings. Feb 13 18:52:43.004396 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 18:52:43.004433 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 18:52:43.004467 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 18:52:43.004501 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 18:52:43.004534 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 18:52:43.004572 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 18:52:43.004607 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 18:52:43.004637 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 18:52:43.004674 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 18:52:43.004712 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 18:52:43.004746 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 13 18:52:43.004775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 18:52:43.004805 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 18:52:43.004834 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 18:52:43.004887 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 18:52:43.004922 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 18:52:43.004957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 18:52:43.004990 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 18:52:43.005055 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 18:52:43.005091 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 18:52:43.005123 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 18:52:43.005155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 18:52:43.005186 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 18:52:43.005218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 18:52:43.005254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 18:52:43.005284 systemd[1]: Reached target slices.target - Slice Units. Feb 13 18:52:43.005316 systemd[1]: Reached target swap.target - Swaps. Feb 13 18:52:43.005345 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 18:52:43.005375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 18:52:43.005406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 18:52:43.005441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 18:52:43.005471 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 18:52:43.005501 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 18:52:43.005533 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 18:52:43.005582 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 18:52:43.005619 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 18:52:43.005649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 18:52:43.005681 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 18:52:43.005710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 18:52:43.005741 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 18:52:43.005771 systemd[1]: Reached target machines.target - Containers. Feb 13 18:52:43.005800 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 18:52:43.005836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:52:43.005869 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Feb 13 18:52:43.005898 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 18:52:43.005931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:52:43.005961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:52:43.005990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:52:43.006020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 18:52:43.009227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:52:43.009271 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 18:52:43.009313 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 18:52:43.009346 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 18:52:43.009378 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 18:52:43.009410 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 18:52:43.009439 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 18:52:43.009473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 18:52:43.009504 kernel: fuse: init (API version 7.39) Feb 13 18:52:43.009537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 18:52:43.009574 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 18:52:43.009615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 18:52:43.009645 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 18:52:43.009674 systemd[1]: Stopped verity-setup.service. Feb 13 18:52:43.009703 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 18:52:43.009734 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 18:52:43.009763 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 18:52:43.009800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 18:52:43.009829 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 18:52:43.009862 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 18:52:43.009892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 18:52:43.009921 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 18:52:43.009951 kernel: ACPI: bus type drm_connector registered Feb 13 18:52:43.009979 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 18:52:43.010012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:52:43.010064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:52:43.010098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:52:43.010128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:52:43.010160 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 18:52:43.010189 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 18:52:43.010220 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 18:52:43.010249 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 18:52:43.010279 kernel: loop: module loaded Feb 13 18:52:43.010313 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:52:43.010345 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 18:52:43.010379 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:52:43.010410 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:52:43.010439 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 18:52:43.010476 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 18:52:43.010507 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 18:52:43.010537 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 18:52:43.010566 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 18:52:43.010595 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 18:52:43.010624 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 18:52:43.010699 systemd-journald[1485]: Collecting audit messages is disabled. Feb 13 18:52:43.010752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 18:52:43.010786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:52:43.010814 systemd-journald[1485]: Journal started Feb 13 18:52:43.010868 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2dd273d4cc1ac6744b8eb90be7a160) is 8.0M, max 75.3M, 67.3M free. Feb 13 18:52:43.032020 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 18:52:43.032214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:52:42.325128 systemd[1]: Queued start job for default target multi-user.target. Feb 13 18:52:42.351685 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 18:52:43.041606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 18:52:43.041694 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 18:52:42.352536 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 18:52:43.065696 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 18:52:43.065778 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 18:52:43.074138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 18:52:43.077834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 18:52:43.080444 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 18:52:43.083482 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 18:52:43.087124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 18:52:43.139969 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 18:52:43.169976 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 18:52:43.172813 kernel: loop0: detected capacity change from 0 to 53784 Feb 13 18:52:43.186336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 18:52:43.192294 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 18:52:43.205368 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:52:43.214549 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 18:52:43.292396 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2dd273d4cc1ac6744b8eb90be7a160 is 132.815ms for 893 entries. Feb 13 18:52:43.292396 systemd-journald[1485]: System Journal (/var/log/journal/ec2dd273d4cc1ac6744b8eb90be7a160) is 8.0M, max 195.6M, 187.6M free. Feb 13 18:52:43.467280 systemd-journald[1485]: Received client request to flush runtime journal. Feb 13 18:52:43.467365 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 18:52:43.467400 kernel: loop1: detected capacity change from 0 to 194096 Feb 13 18:52:43.324836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 18:52:43.334462 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 18:52:43.401365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:52:43.422211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 18:52:43.428329 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 18:52:43.450010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 18:52:43.457366 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 18:52:43.481806 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 18:52:43.518462 udevadm[1553]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 18:52:43.559607 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Feb 13 18:52:43.559642 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Feb 13 18:52:43.565097 kernel: loop2: detected capacity change from 0 to 116784 Feb 13 18:52:43.586521 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 18:52:43.637088 kernel: loop3: detected capacity change from 0 to 113552 Feb 13 18:52:43.696090 kernel: loop4: detected capacity change from 0 to 53784 Feb 13 18:52:43.729096 kernel: loop5: detected capacity change from 0 to 194096 Feb 13 18:52:43.775066 kernel: loop6: detected capacity change from 0 to 116784 Feb 13 18:52:43.804069 kernel: loop7: detected capacity change from 0 to 113552 Feb 13 18:52:43.827448 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 18:52:43.829343 (sd-merge)[1561]: Merged extensions into '/usr'. Feb 13 18:52:43.841739 systemd[1]: Reloading requested from client PID 1517 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 18:52:43.841772 systemd[1]: Reloading... Feb 13 18:52:44.019075 zram_generator::config[1590]: No configuration found. Feb 13 18:52:44.033065 ldconfig[1511]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Feb 13 18:52:44.318522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:52:44.442004 systemd[1]: Reloading finished in 599 ms. Feb 13 18:52:44.479536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 18:52:44.482245 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 18:52:44.485064 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 18:52:44.501386 systemd[1]: Starting ensure-sysext.service... Feb 13 18:52:44.518299 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 18:52:44.526326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 18:52:44.541234 systemd[1]: Reloading requested from client PID 1640 ('systemctl') (unit ensure-sysext.service)... Feb 13 18:52:44.541274 systemd[1]: Reloading... Feb 13 18:52:44.599321 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 18:52:44.599892 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 18:52:44.606475 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 18:52:44.607353 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Feb 13 18:52:44.607527 systemd-tmpfiles[1641]: ACLs are not supported, ignoring. Feb 13 18:52:44.610805 systemd-udevd[1642]: Using default interface naming scheme 'v255'. Feb 13 18:52:44.617323 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:52:44.617356 systemd-tmpfiles[1641]: Skipping /boot Feb 13 18:52:44.666786 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 18:52:44.666819 systemd-tmpfiles[1641]: Skipping /boot Feb 13 18:52:44.739382 zram_generator::config[1669]: No configuration found. Feb 13 18:52:44.921277 (udev-worker)[1674]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:52:45.089330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:52:45.157122 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1721) Feb 13 18:52:45.366930 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 18:52:45.367585 systemd[1]: Reloading finished in 825 ms. Feb 13 18:52:45.399876 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 18:52:45.425133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 18:52:45.479790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:52:45.491320 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 18:52:45.500707 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 18:52:45.513253 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 18:52:45.532539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 18:52:45.542945 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 18:52:45.644914 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 18:52:45.660454 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:52:45.670541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:52:45.684533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:52:45.697542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:52:45.699868 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:52:45.711259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 18:52:45.721587 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 18:52:45.729515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 18:52:45.736169 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 18:52:45.740017 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 18:52:45.744842 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 18:52:45.750016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:52:45.751393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:52:45.759627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:52:45.760731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:52:45.778629 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:52:45.780016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:52:45.812676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 18:52:45.822317 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 18:52:45.827914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 18:52:45.832537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 18:52:45.836560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 18:52:45.841989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 18:52:45.842919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 18:52:45.843563 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 18:52:45.850581 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 18:52:45.864810 systemd[1]: Finished ensure-sysext.service. Feb 13 18:52:45.877512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Feb 13 18:52:45.878897 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 18:52:45.895736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 18:52:45.910097 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:52:45.912230 augenrules[1883]: No rules Feb 13 18:52:45.920929 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 18:52:45.929439 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:52:45.945821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 18:52:45.956392 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 18:52:45.957461 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 18:52:45.981011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 18:52:45.981556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 18:52:45.984322 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 18:52:45.986123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 18:52:45.990945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 18:52:45.994737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 18:52:45.996139 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 18:52:45.998927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 18:52:46.012478 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 18:52:46.013798 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 18:52:46.035153 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 18:52:46.046182 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 18:52:46.063091 lvm[1895]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 18:52:46.073001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 18:52:46.131649 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 18:52:46.177682 systemd-networkd[1836]: lo: Link UP Feb 13 18:52:46.178647 systemd-networkd[1836]: lo: Gained carrier Feb 13 18:52:46.181630 systemd-networkd[1836]: Enumeration completed Feb 13 18:52:46.181837 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 18:52:46.184364 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 18:52:46.184372 systemd-networkd[1836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 18:52:46.187372 systemd-networkd[1836]: eth0: Link UP Feb 13 18:52:46.188091 systemd-networkd[1836]: eth0: Gained carrier Feb 13 18:52:46.188323 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 18:52:46.196499 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 18:52:46.203206 systemd-networkd[1836]: eth0: DHCPv4 address 172.31.27.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 18:52:46.217237 systemd-resolved[1839]: Positive Trust Anchors: Feb 13 18:52:46.217755 systemd-resolved[1839]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 18:52:46.217917 systemd-resolved[1839]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 18:52:46.226110 systemd-resolved[1839]: Defaulting to hostname 'linux'. Feb 13 18:52:46.229671 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 18:52:46.232000 systemd[1]: Reached target network.target - Network. Feb 13 18:52:46.233862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 18:52:46.236225 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 18:52:46.238595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 18:52:46.241210 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 18:52:46.244330 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 18:52:46.247185 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 18:52:46.249625 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 18:52:46.252156 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 18:52:46.252446 systemd[1]: Reached target paths.target - Path Units. Feb 13 18:52:46.255082 systemd[1]: Reached target timers.target - Timer Units. Feb 13 18:52:46.259541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 18:52:46.264657 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 18:52:46.273950 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 18:52:46.277464 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 18:52:46.279975 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 18:52:46.282114 systemd[1]: Reached target basic.target - Basic System. Feb 13 18:52:46.284296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:52:46.284383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 18:52:46.291286 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 18:52:46.306865 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 18:52:46.317608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Feb 13 18:52:46.326400 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 18:52:46.345219 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 18:52:46.349676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 18:52:46.361219 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 18:52:46.370664 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 18:52:46.377461 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 18:52:46.383736 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 18:52:46.417429 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 18:52:46.441217 jq[1913]: false Feb 13 18:52:46.431597 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 18:52:46.437304 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 18:52:46.438566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 18:52:46.460397 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 18:52:46.469665 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 18:52:46.474119 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 18:52:46.478143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 18:52:46.506662 update_engine[1923]: I20250213 18:52:46.505499 1923 main.cc:92] Flatcar Update Engine starting Feb 13 18:52:46.520195 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 18:52:46.524935 dbus-daemon[1912]: [system] SELinux support is enabled Feb 13 18:52:46.537004 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 18:52:46.554728 dbus-daemon[1912]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1836 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 18:52:46.549970 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 18:52:46.550532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 18:52:46.561163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 18:52:46.568257 update_engine[1923]: I20250213 18:52:46.565960 1923 update_check_scheduler.cc:74] Next update check in 9m11s Feb 13 18:52:46.561258 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 18:52:46.563962 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 18:52:46.564020 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 18:52:46.578103 jq[1925]: true Feb 13 18:52:46.575720 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 18:52:46.583749 systemd[1]: Started update-engine.service - Update Engine. Feb 13 18:52:46.606418 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 18:52:46.615206 extend-filesystems[1914]: Found loop4 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found loop5 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found loop6 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found loop7 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1p1 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1p2 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1p3 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found usr Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1p4 Feb 13 18:52:46.615206 extend-filesystems[1914]: Found nvme0n1p6 Feb 13 18:52:46.685218 extend-filesystems[1914]: Found nvme0n1p7 Feb 13 18:52:46.685218 extend-filesystems[1914]: Found nvme0n1p9 Feb 13 18:52:46.685218 extend-filesystems[1914]: Checking size of /dev/nvme0n1p9 Feb 13 18:52:46.634796 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 18:52:46.702573 jq[1943]: true Feb 13 18:52:46.639726 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 18:52:46.641238 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 18:52:46.724364 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: ---------------------------------------------------- Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: corporation. 
Support and training for ntp-4 are Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: ---------------------------------------------------- Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: basedate set to 2025-02-01 Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 18:52:46.742402 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 18:52:46.724426 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listen normally on 3 eth0 172.31.27.136:123 Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: bind(21) AF_INET6 fe80::4a6:5eff:fee7:c4c5%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: unable to create socket on eth0 (5) for fe80::4a6:5eff:fee7:c4c5%2#123 Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: failed to init interface for address fe80::4a6:5eff:fee7:c4c5%2 Feb 13 18:52:46.746018 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 18:52:46.724446 ntpd[1917]: ---------------------------------------------------- Feb 13 18:52:46.724464 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 18:52:46.724483 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 18:52:46.724501 ntpd[1917]: corporation. 
Support and training for ntp-4 are Feb 13 18:52:46.724518 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 18:52:46.724537 ntpd[1917]: ---------------------------------------------------- Feb 13 18:52:46.730770 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 18:52:46.734252 ntpd[1917]: basedate set to 2025-02-01 Feb 13 18:52:46.753525 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:52:46.753525 ntpd[1917]: 13 Feb 18:52:46 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:52:46.750339 (ntainerd)[1949]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 18:52:46.734310 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 18:52:46.739317 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 18:52:46.739415 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 18:52:46.743507 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 18:52:46.743588 ntpd[1917]: Listen normally on 3 eth0 172.31.27.136:123 Feb 13 18:52:46.743662 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 18:52:46.743745 ntpd[1917]: bind(21) AF_INET6 fe80::4a6:5eff:fee7:c4c5%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:52:46.743784 ntpd[1917]: unable to create socket on eth0 (5) for fe80::4a6:5eff:fee7:c4c5%2#123 Feb 13 18:52:46.743811 ntpd[1917]: failed to init interface for address fe80::4a6:5eff:fee7:c4c5%2 Feb 13 18:52:46.743867 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 18:52:46.751562 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:52:46.751631 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 18:52:46.806210 systemd[1]: Finished setup-oem.service - Setup OEM. 
Feb 13 18:52:46.811132 extend-filesystems[1914]: Resized partition /dev/nvme0n1p9 Feb 13 18:52:46.819070 extend-filesystems[1964]: resize2fs 1.47.1 (20-May-2024) Feb 13 18:52:46.833373 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 18:52:46.850164 coreos-metadata[1911]: Feb 13 18:52:46.849 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 18:52:46.850164 coreos-metadata[1911]: Feb 13 18:52:46.850 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 18:52:46.854399 coreos-metadata[1911]: Feb 13 18:52:46.851 INFO Fetch successful Feb 13 18:52:46.854399 coreos-metadata[1911]: Feb 13 18:52:46.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 18:52:46.857762 coreos-metadata[1911]: Feb 13 18:52:46.857 INFO Fetch successful Feb 13 18:52:46.857762 coreos-metadata[1911]: Feb 13 18:52:46.857 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 18:52:46.857762 coreos-metadata[1911]: Feb 13 18:52:46.857 INFO Fetch successful Feb 13 18:52:46.857762 coreos-metadata[1911]: Feb 13 18:52:46.857 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 18:52:46.860705 coreos-metadata[1911]: Feb 13 18:52:46.859 INFO Fetch successful Feb 13 18:52:46.860705 coreos-metadata[1911]: Feb 13 18:52:46.860 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 18:52:46.866600 coreos-metadata[1911]: Feb 13 18:52:46.862 INFO Fetch failed with 404: resource not found Feb 13 18:52:46.866600 coreos-metadata[1911]: Feb 13 18:52:46.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 18:52:46.867297 coreos-metadata[1911]: Feb 13 18:52:46.866 INFO Fetch successful Feb 13 18:52:46.867297 coreos-metadata[1911]: Feb 13 18:52:46.866 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 18:52:46.868360 coreos-metadata[1911]: Feb 13 18:52:46.867 INFO Fetch successful Feb 13 18:52:46.868360 coreos-metadata[1911]: Feb 13 18:52:46.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 18:52:46.868360 coreos-metadata[1911]: Feb 13 18:52:46.868 INFO Fetch successful Feb 13 18:52:46.868360 coreos-metadata[1911]: Feb 13 18:52:46.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 18:52:46.874693 coreos-metadata[1911]: Feb 13 18:52:46.874 INFO Fetch successful Feb 13 18:52:46.874693 coreos-metadata[1911]: Feb 13 18:52:46.874 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 18:52:46.874693 coreos-metadata[1911]: Feb 13 18:52:46.874 INFO Fetch successful Feb 13 18:52:47.014077 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 18:52:47.015215 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 18:52:47.017750 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 18:52:47.038439 extend-filesystems[1964]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 18:52:47.038439 extend-filesystems[1964]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 18:52:47.038439 extend-filesystems[1964]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 18:52:47.046246 extend-filesystems[1914]: Resized filesystem in /dev/nvme0n1p9 Feb 13 18:52:47.039921 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 18:52:47.042319 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 18:52:47.069409 bash[1981]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:52:47.085142 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 18:52:47.096172 systemd-logind[1922]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 18:52:47.126951 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1697) Feb 13 18:52:47.096230 systemd-logind[1922]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 18:52:47.103915 systemd-logind[1922]: New seat seat0. Feb 13 18:52:47.127419 systemd[1]: Starting sshkeys.service... Feb 13 18:52:47.129235 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 18:52:47.181657 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 18:52:47.192340 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 18:52:47.269538 locksmithd[1946]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 18:52:47.399384 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 18:52:47.399782 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 18:52:47.407407 dbus-daemon[1912]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1944 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 18:52:47.439384 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 18:52:47.459120 coreos-metadata[2003]: Feb 13 18:52:47.457 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 18:52:47.464071 coreos-metadata[2003]: Feb 13 18:52:47.462 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 18:52:47.464071 coreos-metadata[2003]: Feb 13 18:52:47.462 INFO Fetch successful Feb 13 18:52:47.464071 coreos-metadata[2003]: Feb 13 18:52:47.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 18:52:47.464071 coreos-metadata[2003]: Feb 13 18:52:47.463 INFO Fetch successful Feb 13 18:52:47.467768 unknown[2003]: wrote ssh authorized keys file for user: core Feb 13 18:52:47.521542 polkitd[2049]: Started polkitd version 121 Feb 13 18:52:47.539887 update-ssh-keys[2064]: Updated "/home/core/.ssh/authorized_keys" Feb 13 18:52:47.539878 polkitd[2049]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 18:52:47.541151 polkitd[2049]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 18:52:47.544014 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 18:52:47.557123 systemd[1]: Finished sshkeys.service. Feb 13 18:52:47.567604 polkitd[2049]: Finished loading, compiling and executing 2 rules Feb 13 18:52:47.575265 dbus-daemon[1912]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 18:52:47.577469 systemd[1]: Started polkit.service - Authorization Manager. 
Feb 13 18:52:47.581220 polkitd[2049]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 18:52:47.588123 containerd[1949]: time="2025-02-13T18:52:47.580942389Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 18:52:47.656386 systemd-hostnamed[1944]: Hostname set to (transient) Feb 13 18:52:47.656850 systemd-resolved[1839]: System hostname changed to 'ip-172-31-27-136'. Feb 13 18:52:47.725289 ntpd[1917]: bind(24) AF_INET6 fe80::4a6:5eff:fee7:c4c5%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:52:47.727621 ntpd[1917]: 13 Feb 18:52:47 ntpd[1917]: bind(24) AF_INET6 fe80::4a6:5eff:fee7:c4c5%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 18:52:47.727621 ntpd[1917]: 13 Feb 18:52:47 ntpd[1917]: unable to create socket on eth0 (6) for fe80::4a6:5eff:fee7:c4c5%2#123 Feb 13 18:52:47.727621 ntpd[1917]: 13 Feb 18:52:47 ntpd[1917]: failed to init interface for address fe80::4a6:5eff:fee7:c4c5%2 Feb 13 18:52:47.725405 ntpd[1917]: unable to create socket on eth0 (6) for fe80::4a6:5eff:fee7:c4c5%2#123 Feb 13 18:52:47.725434 ntpd[1917]: failed to init interface for address fe80::4a6:5eff:fee7:c4c5%2 Feb 13 18:52:47.790609 containerd[1949]: time="2025-02-13T18:52:47.790118806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793368862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793435078Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793473562Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793797262Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793835662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793965958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:52:47.794072 containerd[1949]: time="2025-02-13T18:52:47.793994974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.794873 containerd[1949]: time="2025-02-13T18:52:47.794814047Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:52:47.795018 containerd[1949]: time="2025-02-13T18:52:47.794987975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796245 containerd[1949]: time="2025-02-13T18:52:47.795098483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796245 containerd[1949]: time="2025-02-13T18:52:47.795127499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796245 containerd[1949]: time="2025-02-13T18:52:47.795333779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796245 containerd[1949]: time="2025-02-13T18:52:47.795764567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796586 containerd[1949]: time="2025-02-13T18:52:47.796018103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 18:52:47.796716 containerd[1949]: time="2025-02-13T18:52:47.796682471Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 18:52:47.797081 containerd[1949]: time="2025-02-13T18:52:47.797015159Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 18:52:47.797342 containerd[1949]: time="2025-02-13T18:52:47.797303507Z" level=info msg="metadata content store policy set" policy=shared Feb 13 18:52:47.804661 containerd[1949]: time="2025-02-13T18:52:47.804603083Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 18:52:47.804922 containerd[1949]: time="2025-02-13T18:52:47.804883595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 18:52:47.805142 containerd[1949]: time="2025-02-13T18:52:47.805108823Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.805268399Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.805314707Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.805620299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806175707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806465279Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806504915Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806541647Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806636135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806669219Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806710343Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806743091Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806777627Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806812979Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807091 containerd[1949]: time="2025-02-13T18:52:47.806844035Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807724 containerd[1949]: time="2025-02-13T18:52:47.806872511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 18:52:47.807724 containerd[1949]: time="2025-02-13T18:52:47.806915027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.807724 containerd[1949]: time="2025-02-13T18:52:47.806947895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.807724 containerd[1949]: time="2025-02-13T18:52:47.806979059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.807724 containerd[1949]: time="2025-02-13T18:52:47.807009659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808109 containerd[1949]: time="2025-02-13T18:52:47.808057259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808264 containerd[1949]: time="2025-02-13T18:52:47.808229231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808385 containerd[1949]: time="2025-02-13T18:52:47.808357007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808495 containerd[1949]: time="2025-02-13T18:52:47.808467815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808606 containerd[1949]: time="2025-02-13T18:52:47.808578071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808733 containerd[1949]: time="2025-02-13T18:52:47.808702907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.808910 containerd[1949]: time="2025-02-13T18:52:47.808879931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 18:52:47.809069 containerd[1949]: time="2025-02-13T18:52:47.808997771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.809211 containerd[1949]: time="2025-02-13T18:52:47.809177627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.809341 containerd[1949]: time="2025-02-13T18:52:47.809310983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 18:52:47.809483 containerd[1949]: time="2025-02-13T18:52:47.809451599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.809613 containerd[1949]: time="2025-02-13T18:52:47.809582375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.809730 containerd[1949]: time="2025-02-13T18:52:47.809700347Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 18:52:47.810137 containerd[1949]: time="2025-02-13T18:52:47.810065783Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810297503Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810336575Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810366275Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810391019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810422963Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810447443Z" level=info msg="NRI interface is disabled by configuration." Feb 13 18:52:47.811081 containerd[1949]: time="2025-02-13T18:52:47.810493799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 18:52:47.811696 containerd[1949]: time="2025-02-13T18:52:47.811568591Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 18:52:47.812089 containerd[1949]: time="2025-02-13T18:52:47.812045231Z" level=info msg="Connect containerd service" Feb 13 18:52:47.813082 containerd[1949]: time="2025-02-13T18:52:47.812266127Z" level=info msg="using legacy CRI server" Feb 13 18:52:47.813082 containerd[1949]: time="2025-02-13T18:52:47.812299271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 18:52:47.813082 containerd[1949]: time="2025-02-13T18:52:47.812584955Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 18:52:47.814529 containerd[1949]: time="2025-02-13T18:52:47.814462943Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:52:47.814940 
containerd[1949]: time="2025-02-13T18:52:47.814858379Z" level=info msg="Start subscribing containerd event" Feb 13 18:52:47.815046 containerd[1949]: time="2025-02-13T18:52:47.814950095Z" level=info msg="Start recovering state" Feb 13 18:52:47.815248 containerd[1949]: time="2025-02-13T18:52:47.815187407Z" level=info msg="Start event monitor" Feb 13 18:52:47.815248 containerd[1949]: time="2025-02-13T18:52:47.815231639Z" level=info msg="Start snapshots syncer" Feb 13 18:52:47.815387 containerd[1949]: time="2025-02-13T18:52:47.815258663Z" level=info msg="Start cni network conf syncer for default" Feb 13 18:52:47.815387 containerd[1949]: time="2025-02-13T18:52:47.815278187Z" level=info msg="Start streaming server" Feb 13 18:52:47.815752 containerd[1949]: time="2025-02-13T18:52:47.815696855Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 18:52:47.816848 containerd[1949]: time="2025-02-13T18:52:47.816208763Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 18:52:47.816848 containerd[1949]: time="2025-02-13T18:52:47.819981035Z" level=info msg="containerd successfully booted in 0.248167s" Feb 13 18:52:47.816609 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 18:52:48.001220 systemd-networkd[1836]: eth0: Gained IPv6LL Feb 13 18:52:48.008226 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 18:52:48.012732 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 18:52:48.026631 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 18:52:48.040875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:52:48.053624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 18:52:48.149157 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 18:52:48.154725 amazon-ssm-agent[2114]: Initializing new seelog logger Feb 13 18:52:48.155827 amazon-ssm-agent[2114]: New Seelog Logger Creation Complete Feb 13 18:52:48.157267 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.157267 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.157267 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 processing appconfig overrides Feb 13 18:52:48.158018 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.158186 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.158421 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 processing appconfig overrides Feb 13 18:52:48.158852 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.158961 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.159347 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 processing appconfig overrides Feb 13 18:52:48.160529 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO Proxy environment variables: Feb 13 18:52:48.168118 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 18:52:48.168118 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
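containerd above reports serving on /run/containerd/containerd.sock (plus the ttrpc socket) before kubelet is started, and systemd then reaches network-online.target. A quick way to confirm the runtime socket is accepting connections, independent of any CRI client, is simply to connect to it; a small sketch (needs root, socket path taken from the log):

    import socket

    CONTAINERD_SOCK = "/run/containerd/containerd.sock"

    # If connect() succeeds the daemon is up and listening; real CRI calls
    # would go over gRPC on this same unix socket.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(CONTAINERD_SOCK)
        print("containerd is listening on", CONTAINERD_SOCK)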
Feb 13 18:52:48.168118 amazon-ssm-agent[2114]: 2025/02/13 18:52:48 processing appconfig overrides Feb 13 18:52:48.261149 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO https_proxy: Feb 13 18:52:48.362207 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO http_proxy: Feb 13 18:52:48.460656 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO no_proxy: Feb 13 18:52:48.559460 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO Checking if agent identity type OnPrem can be assumed Feb 13 18:52:48.657770 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO Checking if agent identity type EC2 can be assumed Feb 13 18:52:48.759076 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO Agent will take identity from EC2 Feb 13 18:52:48.857157 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:52:48.956596 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:52:49.055933 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 18:52:49.110655 sshd_keygen[1945]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 18:52:49.156076 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 18:52:49.201128 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 18:52:49.217647 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 18:52:49.229639 systemd[1]: Started sshd@0-172.31.27.136:22-139.178.68.195:60070.service - OpenSSH per-connection server daemon (139.178.68.195:60070). Feb 13 18:52:49.258003 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 18:52:49.278011 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 18:52:49.278776 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 18:52:49.294545 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 18:52:49.345139 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 18:52:49.356993 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 18:52:49.360355 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 18:52:49.364950 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 18:52:49.368927 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 18:52:49.462927 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 18:52:49.516222 sshd[2142]: Accepted publickey for core from 139.178.68.195 port 60070 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:49.525209 sshd-session[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:49.554153 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 18:52:49.560166 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [Registrar] Starting registrar module Feb 13 18:52:49.570777 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 18:52:49.585748 systemd-logind[1922]: New session 1 of user core. Feb 13 18:52:49.625271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 18:52:49.640807 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 18:52:49.660577 amazon-ssm-agent[2114]: 2025-02-13 18:52:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 18:52:49.671817 (systemd)[2153]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 18:52:49.961200 systemd[2153]: Queued start job for default target default.target. Feb 13 18:52:49.970372 systemd[2153]: Created slice app.slice - User Application Slice. Feb 13 18:52:49.970447 systemd[2153]: Reached target paths.target - Paths. Feb 13 18:52:49.970481 systemd[2153]: Reached target timers.target - Timers. Feb 13 18:52:49.974872 systemd[2153]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 18:52:50.009575 systemd[2153]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 18:52:50.009869 systemd[2153]: Reached target sockets.target - Sockets. Feb 13 18:52:50.009905 systemd[2153]: Reached target basic.target - Basic System. Feb 13 18:52:50.010001 systemd[2153]: Reached target default.target - Main User Target. Feb 13 18:52:50.010114 systemd[2153]: Startup finished in 315ms. Feb 13 18:52:50.010286 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 18:52:50.025469 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 18:52:50.201715 systemd[1]: Started sshd@1-172.31.27.136:22-139.178.68.195:60084.service - OpenSSH per-connection server daemon (139.178.68.195:60084). Feb 13 18:52:50.207594 amazon-ssm-agent[2114]: 2025-02-13 18:52:50 INFO [EC2Identity] EC2 registration was successful. Feb 13 18:52:50.244703 amazon-ssm-agent[2114]: 2025-02-13 18:52:50 INFO [CredentialRefresher] credentialRefresher has started Feb 13 18:52:50.244703 amazon-ssm-agent[2114]: 2025-02-13 18:52:50 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 18:52:50.244703 amazon-ssm-agent[2114]: 2025-02-13 18:52:50 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 18:52:50.307426 amazon-ssm-agent[2114]: 2025-02-13 18:52:50 INFO [CredentialRefresher] Next credential rotation will be in 30.1749893725 minutes Feb 13 18:52:50.413568 sshd[2164]: Accepted publickey for core from 139.178.68.195 port 60084 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:50.417538 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:50.429435 systemd-logind[1922]: New session 2 of user core. Feb 13 18:52:50.432508 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 18:52:50.438717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:50.446544 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 18:52:50.449194 systemd[1]: Startup finished in 1.131s (kernel) + 7.549s (initrd) + 9.158s (userspace) = 17.839s. Feb 13 18:52:50.460439 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 18:52:50.476519 agetty[2150]: failed to open credentials directory Feb 13 18:52:50.482896 agetty[2149]: failed to open credentials directory Feb 13 18:52:50.577973 sshd[2172]: Connection closed by 139.178.68.195 port 60084 Feb 13 18:52:50.577754 sshd-session[2164]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:50.586731 systemd-logind[1922]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 18:52:50.588357 systemd[1]: sshd@1-172.31.27.136:22-139.178.68.195:60084.service: Deactivated successfully. Feb 13 18:52:50.591730 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 18:52:50.594740 systemd-logind[1922]: Removed session 2. Feb 13 18:52:50.613590 systemd[1]: Started sshd@2-172.31.27.136:22-139.178.68.195:60090.service - OpenSSH per-connection server daemon (139.178.68.195:60090). Feb 13 18:52:50.725207 ntpd[1917]: Listen normally on 7 eth0 [fe80::4a6:5eff:fee7:c4c5%2]:123 Feb 13 18:52:50.726848 ntpd[1917]: 13 Feb 18:52:50 ntpd[1917]: Listen normally on 7 eth0 [fe80::4a6:5eff:fee7:c4c5%2]:123 Feb 13 18:52:50.802152 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 60090 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:50.804875 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:50.817111 systemd-logind[1922]: New session 3 of user core. Feb 13 18:52:50.823365 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 18:52:50.943833 sshd[2187]: Connection closed by 139.178.68.195 port 60090 Feb 13 18:52:50.944800 sshd-session[2181]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:50.953412 systemd[1]: sshd@2-172.31.27.136:22-139.178.68.195:60090.service: Deactivated successfully. Feb 13 18:52:50.958466 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 18:52:50.962350 systemd-logind[1922]: Session 3 logged out. Waiting for processes to exit. Feb 13 18:52:50.965244 systemd-logind[1922]: Removed session 3. Feb 13 18:52:50.984627 systemd[1]: Started sshd@3-172.31.27.136:22-139.178.68.195:60102.service - OpenSSH per-connection server daemon (139.178.68.195:60102). Feb 13 18:52:51.179571 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 60102 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:51.182771 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:51.193639 systemd-logind[1922]: New session 4 of user core. Feb 13 18:52:51.206320 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 18:52:51.273934 amazon-ssm-agent[2114]: 2025-02-13 18:52:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 18:52:51.344091 sshd[2194]: Connection closed by 139.178.68.195 port 60102 Feb 13 18:52:51.345409 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:51.360150 systemd[1]: sshd@3-172.31.27.136:22-139.178.68.195:60102.service: Deactivated successfully. Feb 13 18:52:51.365093 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 18:52:51.367816 systemd-logind[1922]: Session 4 logged out. Waiting for processes to exit. Feb 13 18:52:51.374275 amazon-ssm-agent[2114]: 2025-02-13 18:52:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2196) started Feb 13 18:52:51.394668 systemd[1]: Started sshd@4-172.31.27.136:22-139.178.68.195:60114.service - OpenSSH per-connection server daemon (139.178.68.195:60114). Feb 13 18:52:51.397565 systemd-logind[1922]: Removed session 4. 
Feb 13 18:52:51.475177 amazon-ssm-agent[2114]: 2025-02-13 18:52:51 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 18:52:51.624881 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 60114 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:51.630916 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:51.644496 systemd-logind[1922]: New session 5 of user core. Feb 13 18:52:51.654461 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 18:52:51.799770 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 18:52:51.800740 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:52:51.820575 sudo[2215]: pam_unix(sudo:session): session closed for user root Feb 13 18:52:51.844864 sshd[2214]: Connection closed by 139.178.68.195 port 60114 Feb 13 18:52:51.844671 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:51.852361 systemd[1]: sshd@4-172.31.27.136:22-139.178.68.195:60114.service: Deactivated successfully. Feb 13 18:52:51.856841 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 18:52:51.860864 systemd-logind[1922]: Session 5 logged out. Waiting for processes to exit. Feb 13 18:52:51.864423 systemd-logind[1922]: Removed session 5. Feb 13 18:52:51.885580 systemd[1]: Started sshd@5-172.31.27.136:22-139.178.68.195:60118.service - OpenSSH per-connection server daemon (139.178.68.195:60118). Feb 13 18:52:51.908473 kubelet[2170]: E0213 18:52:51.908384 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 18:52:51.914253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 18:52:51.914597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 18:52:51.918206 systemd[1]: kubelet.service: Consumed 1.438s CPU time. Feb 13 18:52:52.079766 sshd[2220]: Accepted publickey for core from 139.178.68.195 port 60118 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:52.081541 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:52.091483 systemd-logind[1922]: New session 6 of user core. Feb 13 18:52:52.103366 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 18:52:52.210921 sudo[2226]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 18:52:52.211906 sudo[2226]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:52:52.219631 sudo[2226]: pam_unix(sudo:session): session closed for user root Feb 13 18:52:52.232596 sudo[2225]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 18:52:52.234188 sudo[2225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:52:52.261726 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 18:52:52.309675 augenrules[2248]: No rules Feb 13 18:52:52.311804 systemd[1]: audit-rules.service: Deactivated successfully. 
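The kubelet failure above is the expected first-boot state on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and until it exists the unit exits and is restarted. Purely to make the file's expected shape concrete, here is a sketch that writes a minimal KubeletConfiguration stub; the field values are illustrative placeholders, not what kubeadm would generate for this node:

    import json
    import pathlib

    # JSON is a subset of YAML, so json.dumps output is a parseable config.yaml.
    stub = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # Matches the SystemdCgroup=true runc option in the containerd config above.
        "cgroupDriver": "systemd",
    }
    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(stub, indent=2) + "\n")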
Feb 13 18:52:52.312210 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 18:52:52.314572 sudo[2225]: pam_unix(sudo:session): session closed for user root Feb 13 18:52:52.338863 sshd[2224]: Connection closed by 139.178.68.195 port 60118 Feb 13 18:52:52.339347 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:52.345906 systemd[1]: sshd@5-172.31.27.136:22-139.178.68.195:60118.service: Deactivated successfully. Feb 13 18:52:52.349740 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 18:52:52.351020 systemd-logind[1922]: Session 6 logged out. Waiting for processes to exit. Feb 13 18:52:52.353822 systemd-logind[1922]: Removed session 6. Feb 13 18:52:52.380531 systemd[1]: Started sshd@6-172.31.27.136:22-139.178.68.195:60134.service - OpenSSH per-connection server daemon (139.178.68.195:60134). Feb 13 18:52:52.562583 sshd[2256]: Accepted publickey for core from 139.178.68.195 port 60134 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s Feb 13 18:52:52.565524 sshd-session[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 18:52:52.573828 systemd-logind[1922]: New session 7 of user core. Feb 13 18:52:52.577327 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 18:52:52.681956 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 18:52:52.683378 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 18:52:53.859643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:53.859991 systemd[1]: kubelet.service: Consumed 1.438s CPU time. Feb 13 18:52:53.869723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:52:53.921466 systemd[1]: Reloading requested from client PID 2295 ('systemctl') (unit session-7.scope)... Feb 13 18:52:53.921502 systemd[1]: Reloading... Feb 13 18:52:54.170267 zram_generator::config[2338]: No configuration found. Feb 13 18:52:54.423338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 18:52:54.598094 systemd[1]: Reloading finished in 675 ms. Feb 13 18:52:54.677674 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 18:52:54.677864 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 18:52:54.679276 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:54.687666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 18:52:55.016102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 18:52:55.031516 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 18:52:55.113775 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:52:55.113775 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 18:52:55.113775 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 18:52:55.115799 kubelet[2397]: I0213 18:52:55.115718 2397 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 18:52:56.531964 kubelet[2397]: I0213 18:52:56.531901 2397 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 18:52:56.531964 kubelet[2397]: I0213 18:52:56.531947 2397 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 18:52:56.532605 kubelet[2397]: I0213 18:52:56.532401 2397 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 18:52:56.563384 kubelet[2397]: I0213 18:52:56.563116 2397 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 18:52:56.578386 kubelet[2397]: I0213 18:52:56.578318 2397 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 18:52:56.650388 kubelet[2397]: I0213 18:52:56.649455 2397 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 18:52:56.650388 kubelet[2397]: I0213 18:52:56.649547 2397 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.27.136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 18:52:56.650388 kubelet[2397]: I0213 18:52:56.649906 2397 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 18:52:56.650388 kubelet[2397]: I0213 18:52:56.649926 2397 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 18:52:56.650388 kubelet[2397]: I0213 18:52:56.650237 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:52:56.654491 kubelet[2397]: I0213 18:52:56.654300 2397 kubelet.go:400] "Attempting to sync node with API 
server" Feb 13 18:52:56.654491 kubelet[2397]: I0213 18:52:56.654382 2397 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 18:52:56.654750 kubelet[2397]: I0213 18:52:56.654675 2397 kubelet.go:312] "Adding apiserver pod source" Feb 13 18:52:56.654946 kubelet[2397]: E0213 18:52:56.654894 2397 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:56.656043 kubelet[2397]: I0213 18:52:56.655978 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 18:52:56.658047 kubelet[2397]: E0213 18:52:56.657986 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:56.661172 kubelet[2397]: I0213 18:52:56.660428 2397 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 18:52:56.661172 kubelet[2397]: I0213 18:52:56.660808 2397 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 18:52:56.661172 kubelet[2397]: W0213 18:52:56.660878 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 18:52:56.662788 kubelet[2397]: I0213 18:52:56.662751 2397 server.go:1264] "Started kubelet" Feb 13 18:52:56.670852 kubelet[2397]: I0213 18:52:56.670786 2397 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 18:52:56.676075 kubelet[2397]: I0213 18:52:56.673957 2397 server.go:455] "Adding debug handlers to kubelet server" Feb 13 18:52:56.676075 kubelet[2397]: I0213 18:52:56.671092 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 18:52:56.676269 kubelet[2397]: I0213 18:52:56.676237 2397 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 18:52:56.678068 kubelet[2397]: I0213 18:52:56.678009 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 18:52:56.690925 kubelet[2397]: I0213 18:52:56.690869 2397 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 18:52:56.692734 kubelet[2397]: I0213 18:52:56.692700 2397 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 18:52:56.696108 kubelet[2397]: I0213 18:52:56.696073 2397 reconciler.go:26] "Reconciler: start to sync state" Feb 13 18:52:56.701500 kubelet[2397]: E0213 18:52:56.701446 2397 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 18:52:56.702898 kubelet[2397]: I0213 18:52:56.702861 2397 factory.go:221] Registration of the containerd container factory successfully Feb 13 18:52:56.703141 kubelet[2397]: I0213 18:52:56.703120 2397 factory.go:221] Registration of the systemd container factory successfully Feb 13 18:52:56.703465 kubelet[2397]: I0213 18:52:56.703434 2397 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 18:52:56.716059 kubelet[2397]: E0213 18:52:56.714121 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.27.136\" not found" node="172.31.27.136" Feb 13 18:52:56.739626 kubelet[2397]: I0213 18:52:56.739583 2397 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 18:52:56.739793 kubelet[2397]: I0213 18:52:56.739721 2397 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 18:52:56.739793 kubelet[2397]: I0213 18:52:56.739755 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 13 18:52:56.743308 kubelet[2397]: I0213 18:52:56.743109 2397 policy_none.go:49] "None policy: Start" Feb 13 18:52:56.746172 kubelet[2397]: I0213 18:52:56.745190 2397 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 18:52:56.746172 kubelet[2397]: I0213 18:52:56.745253 2397 state_mem.go:35] "Initializing new in-memory state store" Feb 13 18:52:56.764898 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 18:52:56.780923 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 18:52:56.791804 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 18:52:56.794743 kubelet[2397]: I0213 18:52:56.793787 2397 kubelet_node_status.go:73] "Attempting to register node" node="172.31.27.136" Feb 13 18:52:56.807000 kubelet[2397]: I0213 18:52:56.806949 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 18:52:56.809783 kubelet[2397]: I0213 18:52:56.809736 2397 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 18:52:56.809949 kubelet[2397]: I0213 18:52:56.809814 2397 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 18:52:56.809949 kubelet[2397]: I0213 18:52:56.809845 2397 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 18:52:56.809949 kubelet[2397]: E0213 18:52:56.809911 2397 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 18:52:56.810523 kubelet[2397]: I0213 18:52:56.810479 2397 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 18:52:56.814063 kubelet[2397]: I0213 18:52:56.812395 2397 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 18:52:56.814063 kubelet[2397]: I0213 18:52:56.812680 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 18:52:56.824531 kubelet[2397]: I0213 18:52:56.824478 2397 kubelet_node_status.go:76] "Successfully registered node" node="172.31.27.136" Feb 13 18:52:56.876621 kubelet[2397]: I0213 18:52:56.876558 2397 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 18:52:56.877441 containerd[1949]: time="2025-02-13T18:52:56.877237813Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 18:52:56.878464 kubelet[2397]: I0213 18:52:56.878421 2397 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 18:52:57.056108 sudo[2259]: pam_unix(sudo:session): session closed for user root Feb 13 18:52:57.080111 sshd[2258]: Connection closed by 139.178.68.195 port 60134 Feb 13 18:52:57.080983 sshd-session[2256]: pam_unix(sshd:session): session closed for user core Feb 13 18:52:57.088584 systemd[1]: sshd@6-172.31.27.136:22-139.178.68.195:60134.service: Deactivated successfully. Feb 13 18:52:57.092584 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 18:52:57.096981 systemd-logind[1922]: Session 7 logged out. Waiting for processes to exit. Feb 13 18:52:57.099563 systemd-logind[1922]: Removed session 7. 
Feb 13 18:52:57.536159 kubelet[2397]: I0213 18:52:57.535260 2397 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 18:52:57.536159 kubelet[2397]: W0213 18:52:57.535491 2397 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 18:52:57.536159 kubelet[2397]: W0213 18:52:57.535827 2397 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 18:52:57.536159 kubelet[2397]: W0213 18:52:57.535874 2397 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 18:52:57.658574 kubelet[2397]: I0213 18:52:57.658460 2397 apiserver.go:52] "Watching apiserver" Feb 13 18:52:57.658574 kubelet[2397]: E0213 18:52:57.658493 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:57.673095 kubelet[2397]: I0213 18:52:57.672318 2397 topology_manager.go:215] "Topology Admit Handler" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" podNamespace="kube-system" podName="cilium-5c74x" Feb 13 18:52:57.673095 kubelet[2397]: I0213 18:52:57.672617 2397 topology_manager.go:215] "Topology Admit Handler" podUID="177edd61-7f3e-4e5b-a69a-916aa2174ca8" podNamespace="kube-system" podName="kube-proxy-crqlc" Feb 13 18:52:57.690288 systemd[1]: Created slice kubepods-besteffort-pod177edd61_7f3e_4e5b_a69a_916aa2174ca8.slice - libcontainer container kubepods-besteffort-pod177edd61_7f3e_4e5b_a69a_916aa2174ca8.slice. 
Feb 13 18:52:57.693617 kubelet[2397]: I0213 18:52:57.693542 2397 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 18:52:57.704349 kubelet[2397]: I0213 18:52:57.704253 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-hostproc\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704349 kubelet[2397]: I0213 18:52:57.704315 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-cgroup\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704359 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-kernel\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704396 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/177edd61-7f3e-4e5b-a69a-916aa2174ca8-lib-modules\") pod \"kube-proxy-crqlc\" (UID: \"177edd61-7f3e-4e5b-a69a-916aa2174ca8\") " pod="kube-system/kube-proxy-crqlc" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704431 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-run\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704465 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-bpf-maps\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704498 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/177edd61-7f3e-4e5b-a69a-916aa2174ca8-kube-proxy\") pod \"kube-proxy-crqlc\" (UID: \"177edd61-7f3e-4e5b-a69a-916aa2174ca8\") " pod="kube-system/kube-proxy-crqlc" Feb 13 18:52:57.704562 kubelet[2397]: I0213 18:52:57.704532 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-etc-cni-netd\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 18:52:57.704602 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-lib-modules\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 
18:52:57.704645 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-xtables-lock\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 18:52:57.704682 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/208579d7-1b30-431c-b822-6d4bb139f1a5-clustermesh-secrets\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 18:52:57.704739 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-config-path\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 18:52:57.704780 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-hubble-tls\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.704838 kubelet[2397]: I0213 18:52:57.704819 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/177edd61-7f3e-4e5b-a69a-916aa2174ca8-xtables-lock\") pod \"kube-proxy-crqlc\" (UID: \"177edd61-7f3e-4e5b-a69a-916aa2174ca8\") " pod="kube-system/kube-proxy-crqlc" Feb 13 18:52:57.705217 kubelet[2397]: I0213 18:52:57.704855 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cni-path\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.705217 kubelet[2397]: I0213 18:52:57.704916 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-net\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.705217 kubelet[2397]: I0213 18:52:57.704952 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6q8v\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-kube-api-access-z6q8v\") pod \"cilium-5c74x\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " pod="kube-system/cilium-5c74x" Feb 13 18:52:57.705217 kubelet[2397]: I0213 18:52:57.704992 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll56\" (UniqueName: \"kubernetes.io/projected/177edd61-7f3e-4e5b-a69a-916aa2174ca8-kube-api-access-qll56\") pod \"kube-proxy-crqlc\" (UID: \"177edd61-7f3e-4e5b-a69a-916aa2174ca8\") " pod="kube-system/kube-proxy-crqlc" Feb 13 18:52:57.711369 systemd[1]: Created slice kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice - libcontainer container 
kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice. Feb 13 18:52:58.005440 containerd[1949]: time="2025-02-13T18:52:58.005072012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crqlc,Uid:177edd61-7f3e-4e5b-a69a-916aa2174ca8,Namespace:kube-system,Attempt:0,}" Feb 13 18:52:58.020144 containerd[1949]: time="2025-02-13T18:52:58.020082333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5c74x,Uid:208579d7-1b30-431c-b822-6d4bb139f1a5,Namespace:kube-system,Attempt:0,}" Feb 13 18:52:58.614135 containerd[1949]: time="2025-02-13T18:52:58.613747221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:58.620359 containerd[1949]: time="2025-02-13T18:52:58.620278618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 18:52:58.621346 containerd[1949]: time="2025-02-13T18:52:58.621218795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:58.623766 containerd[1949]: time="2025-02-13T18:52:58.623673098Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:58.627116 containerd[1949]: time="2025-02-13T18:52:58.625390172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 18:52:58.634299 containerd[1949]: time="2025-02-13T18:52:58.634193980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 18:52:58.636002 containerd[1949]: time="2025-02-13T18:52:58.635922436Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 630.734962ms" Feb 13 18:52:58.638495 containerd[1949]: time="2025-02-13T18:52:58.638401098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 617.889863ms" Feb 13 18:52:58.658715 kubelet[2397]: E0213 18:52:58.658618 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:52:58.823146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983664344.mount: Deactivated successfully. Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.849818332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.849970063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.849996693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.850174153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.850534057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.850642904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.850670830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:58.850869 containerd[1949]: time="2025-02-13T18:52:58.850794744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:52:58.978370 systemd[1]: Started cri-containerd-1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371.scope - libcontainer container 1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371. Feb 13 18:52:58.983658 systemd[1]: Started cri-containerd-53ad3d015cb7fb85580df4b998eb05e319a066665ec85e97f4526b9d28753297.scope - libcontainer container 53ad3d015cb7fb85580df4b998eb05e319a066665ec85e97f4526b9d28753297. Feb 13 18:52:59.042516 containerd[1949]: time="2025-02-13T18:52:59.042443365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5c74x,Uid:208579d7-1b30-431c-b822-6d4bb139f1a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\"" Feb 13 18:52:59.055383 containerd[1949]: time="2025-02-13T18:52:59.055295993Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 18:52:59.070709 containerd[1949]: time="2025-02-13T18:52:59.070634030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crqlc,Uid:177edd61-7f3e-4e5b-a69a-916aa2174ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"53ad3d015cb7fb85580df4b998eb05e319a066665ec85e97f4526b9d28753297\"" Feb 13 18:52:59.658942 kubelet[2397]: E0213 18:52:59.658865 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:00.660596 kubelet[2397]: E0213 18:53:00.660245 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:01.660799 kubelet[2397]: E0213 18:53:01.660725 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:02.660970 kubelet[2397]: E0213 18:53:02.660856 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:03.661914 kubelet[2397]: E0213 18:53:03.661850 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:04.662120 kubelet[2397]: E0213 18:53:04.662069 
2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:04.903457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730006760.mount: Deactivated successfully. Feb 13 18:53:05.663237 kubelet[2397]: E0213 18:53:05.663116 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:06.664230 kubelet[2397]: E0213 18:53:06.664094 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:07.217768 containerd[1949]: time="2025-02-13T18:53:07.216370167Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:07.219426 containerd[1949]: time="2025-02-13T18:53:07.219345314Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 18:53:07.222174 containerd[1949]: time="2025-02-13T18:53:07.222077195Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:07.226415 containerd[1949]: time="2025-02-13T18:53:07.226093442Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.170721956s" Feb 13 18:53:07.226415 containerd[1949]: time="2025-02-13T18:53:07.226153316Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 18:53:07.229813 containerd[1949]: time="2025-02-13T18:53:07.229337007Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 18:53:07.233072 containerd[1949]: time="2025-02-13T18:53:07.232922035Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 18:53:07.252408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146229945.mount: Deactivated successfully. Feb 13 18:53:07.257471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170599902.mount: Deactivated successfully. 
Feb 13 18:53:07.269463 containerd[1949]: time="2025-02-13T18:53:07.269387235Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\"" Feb 13 18:53:07.272889 containerd[1949]: time="2025-02-13T18:53:07.271190728Z" level=info msg="StartContainer for \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\"" Feb 13 18:53:07.325361 systemd[1]: Started cri-containerd-5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7.scope - libcontainer container 5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7. Feb 13 18:53:07.379947 containerd[1949]: time="2025-02-13T18:53:07.379879066Z" level=info msg="StartContainer for \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\" returns successfully" Feb 13 18:53:07.396621 systemd[1]: cri-containerd-5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7.scope: Deactivated successfully. Feb 13 18:53:07.666160 kubelet[2397]: E0213 18:53:07.665211 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:08.247327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7-rootfs.mount: Deactivated successfully. Feb 13 18:53:08.665578 kubelet[2397]: E0213 18:53:08.665399 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:08.763631 containerd[1949]: time="2025-02-13T18:53:08.763459640Z" level=info msg="shim disconnected" id=5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7 namespace=k8s.io Feb 13 18:53:08.763631 containerd[1949]: time="2025-02-13T18:53:08.763561331Z" level=warning msg="cleaning up after shim disconnected" id=5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7 namespace=k8s.io Feb 13 18:53:08.763631 containerd[1949]: time="2025-02-13T18:53:08.763582113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:53:08.916411 containerd[1949]: time="2025-02-13T18:53:08.916145202Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 18:53:08.951616 containerd[1949]: time="2025-02-13T18:53:08.951431448Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\"" Feb 13 18:53:08.952843 containerd[1949]: time="2025-02-13T18:53:08.952789567Z" level=info msg="StartContainer for \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\"" Feb 13 18:53:09.019353 systemd[1]: Started cri-containerd-a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730.scope - libcontainer container a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730. Feb 13 18:53:09.081632 containerd[1949]: time="2025-02-13T18:53:09.081537458Z" level=info msg="StartContainer for \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\" returns successfully" Feb 13 18:53:09.112306 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 18:53:09.113741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:53:09.113863 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:53:09.123919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 18:53:09.124577 systemd[1]: cri-containerd-a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730.scope: Deactivated successfully. Feb 13 18:53:09.181376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 18:53:09.253049 containerd[1949]: time="2025-02-13T18:53:09.252953700Z" level=info msg="shim disconnected" id=a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730 namespace=k8s.io Feb 13 18:53:09.253562 containerd[1949]: time="2025-02-13T18:53:09.253514633Z" level=warning msg="cleaning up after shim disconnected" id=a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730 namespace=k8s.io Feb 13 18:53:09.253562 containerd[1949]: time="2025-02-13T18:53:09.253554060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:53:09.254348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730-rootfs.mount: Deactivated successfully. Feb 13 18:53:09.666749 kubelet[2397]: E0213 18:53:09.666572 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:09.841586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519877304.mount: Deactivated successfully. Feb 13 18:53:09.922746 containerd[1949]: time="2025-02-13T18:53:09.922579194Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 18:53:09.960944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3523896933.mount: Deactivated successfully. Feb 13 18:53:09.969608 containerd[1949]: time="2025-02-13T18:53:09.969549620Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\"" Feb 13 18:53:09.971160 containerd[1949]: time="2025-02-13T18:53:09.970985706Z" level=info msg="StartContainer for \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\"" Feb 13 18:53:10.054131 systemd[1]: Started cri-containerd-3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde.scope - libcontainer container 3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde. Feb 13 18:53:10.133457 containerd[1949]: time="2025-02-13T18:53:10.133219387Z" level=info msg="StartContainer for \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\" returns successfully" Feb 13 18:53:10.141944 systemd[1]: cri-containerd-3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde.scope: Deactivated successfully. Feb 13 18:53:10.249201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde-rootfs.mount: Deactivated successfully. 
Feb 13 18:53:10.339576 containerd[1949]: time="2025-02-13T18:53:10.339225684Z" level=info msg="shim disconnected" id=3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde namespace=k8s.io Feb 13 18:53:10.339576 containerd[1949]: time="2025-02-13T18:53:10.339315201Z" level=warning msg="cleaning up after shim disconnected" id=3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde namespace=k8s.io Feb 13 18:53:10.339576 containerd[1949]: time="2025-02-13T18:53:10.339336800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:53:10.588251 containerd[1949]: time="2025-02-13T18:53:10.586843145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:10.588423 containerd[1949]: time="2025-02-13T18:53:10.588392243Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 18:53:10.589323 containerd[1949]: time="2025-02-13T18:53:10.589240467Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:10.593570 containerd[1949]: time="2025-02-13T18:53:10.593491287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:10.595218 containerd[1949]: time="2025-02-13T18:53:10.595012820Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 3.365608255s" Feb 13 18:53:10.595218 containerd[1949]: time="2025-02-13T18:53:10.595091747Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 18:53:10.599468 containerd[1949]: time="2025-02-13T18:53:10.599402718Z" level=info msg="CreateContainer within sandbox \"53ad3d015cb7fb85580df4b998eb05e319a066665ec85e97f4526b9d28753297\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 18:53:10.623813 containerd[1949]: time="2025-02-13T18:53:10.623749601Z" level=info msg="CreateContainer within sandbox \"53ad3d015cb7fb85580df4b998eb05e319a066665ec85e97f4526b9d28753297\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d59a8a9d2b1ef79189809e7ce9bc1f89d257c55ecfbf63f2ef9ce294380759f0\"" Feb 13 18:53:10.625152 containerd[1949]: time="2025-02-13T18:53:10.625079050Z" level=info msg="StartContainer for \"d59a8a9d2b1ef79189809e7ce9bc1f89d257c55ecfbf63f2ef9ce294380759f0\"" Feb 13 18:53:10.667302 kubelet[2397]: E0213 18:53:10.667020 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:10.685568 systemd[1]: Started cri-containerd-d59a8a9d2b1ef79189809e7ce9bc1f89d257c55ecfbf63f2ef9ce294380759f0.scope - libcontainer container d59a8a9d2b1ef79189809e7ce9bc1f89d257c55ecfbf63f2ef9ce294380759f0. 
Feb 13 18:53:10.747987 containerd[1949]: time="2025-02-13T18:53:10.747756935Z" level=info msg="StartContainer for \"d59a8a9d2b1ef79189809e7ce9bc1f89d257c55ecfbf63f2ef9ce294380759f0\" returns successfully" Feb 13 18:53:10.927194 containerd[1949]: time="2025-02-13T18:53:10.927012831Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 18:53:10.949734 containerd[1949]: time="2025-02-13T18:53:10.949629446Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\"" Feb 13 18:53:10.950773 containerd[1949]: time="2025-02-13T18:53:10.950708725Z" level=info msg="StartContainer for \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\"" Feb 13 18:53:10.966527 kubelet[2397]: I0213 18:53:10.966155 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crqlc" podStartSLOduration=3.442850264 podStartE2EDuration="14.966101426s" podCreationTimestamp="2025-02-13 18:52:56 +0000 UTC" firstStartedPulling="2025-02-13 18:52:59.07369167 +0000 UTC m=+4.035035435" lastFinishedPulling="2025-02-13 18:53:10.596942832 +0000 UTC m=+15.558286597" observedRunningTime="2025-02-13 18:53:10.965429342 +0000 UTC m=+15.926773095" watchObservedRunningTime="2025-02-13 18:53:10.966101426 +0000 UTC m=+15.927445275" Feb 13 18:53:11.002240 systemd[1]: Started cri-containerd-4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f.scope - libcontainer container 4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f. Feb 13 18:53:11.055661 systemd[1]: cri-containerd-4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f.scope: Deactivated successfully. 
Feb 13 18:53:11.065451 containerd[1949]: time="2025-02-13T18:53:11.063505264Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice/cri-containerd-4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f.scope/cgroup.events\": no such file or directory" Feb 13 18:53:11.065451 containerd[1949]: time="2025-02-13T18:53:11.064868233Z" level=info msg="StartContainer for \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\" returns successfully" Feb 13 18:53:11.197347 containerd[1949]: time="2025-02-13T18:53:11.196572589Z" level=info msg="shim disconnected" id=4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f namespace=k8s.io Feb 13 18:53:11.197347 containerd[1949]: time="2025-02-13T18:53:11.196806286Z" level=warning msg="cleaning up after shim disconnected" id=4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f namespace=k8s.io Feb 13 18:53:11.197347 containerd[1949]: time="2025-02-13T18:53:11.196832639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:53:11.668151 kubelet[2397]: E0213 18:53:11.667957 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:11.949285 containerd[1949]: time="2025-02-13T18:53:11.948548631Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 18:53:11.973843 containerd[1949]: time="2025-02-13T18:53:11.973765481Z" level=info msg="CreateContainer within sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\"" Feb 13 18:53:11.975122 containerd[1949]: time="2025-02-13T18:53:11.974692945Z" level=info msg="StartContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\"" Feb 13 18:53:12.035348 systemd[1]: Started cri-containerd-adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27.scope - libcontainer container adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27. Feb 13 18:53:12.089857 containerd[1949]: time="2025-02-13T18:53:12.089721867Z" level=info msg="StartContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" returns successfully" Feb 13 18:53:12.309472 kubelet[2397]: I0213 18:53:12.309313 2397 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 18:53:12.668883 kubelet[2397]: E0213 18:53:12.668583 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:12.998307 kernel: Initializing XFRM netlink socket Feb 13 18:53:13.669281 kubelet[2397]: E0213 18:53:13.669206 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:14.669904 kubelet[2397]: E0213 18:53:14.669835 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:14.837678 (udev-worker)[3085]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:53:14.840160 (udev-worker)[2885]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 18:53:14.843486 systemd-networkd[1836]: cilium_host: Link UP Feb 13 18:53:14.843791 systemd-networkd[1836]: cilium_net: Link UP Feb 13 18:53:14.844200 systemd-networkd[1836]: cilium_net: Gained carrier Feb 13 18:53:14.844528 systemd-networkd[1836]: cilium_host: Gained carrier Feb 13 18:53:15.029431 systemd-networkd[1836]: cilium_vxlan: Link UP Feb 13 18:53:15.029447 systemd-networkd[1836]: cilium_vxlan: Gained carrier Feb 13 18:53:15.152070 kubelet[2397]: I0213 18:53:15.151681 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5c74x" podStartSLOduration=10.97452656 podStartE2EDuration="19.151656839s" podCreationTimestamp="2025-02-13 18:52:56 +0000 UTC" firstStartedPulling="2025-02-13 18:52:59.051708876 +0000 UTC m=+4.013052653" lastFinishedPulling="2025-02-13 18:53:07.228839167 +0000 UTC m=+12.190182932" observedRunningTime="2025-02-13 18:53:12.980331684 +0000 UTC m=+17.941675449" watchObservedRunningTime="2025-02-13 18:53:15.151656839 +0000 UTC m=+20.113000604" Feb 13 18:53:15.153765 kubelet[2397]: I0213 18:53:15.153689 2397 topology_manager.go:215] "Topology Admit Handler" podUID="191683f1-2e27-49cd-a5d5-252b86fc9e64" podNamespace="default" podName="nginx-deployment-85f456d6dd-h4z9v" Feb 13 18:53:15.170557 systemd[1]: Created slice kubepods-besteffort-pod191683f1_2e27_49cd_a5d5_252b86fc9e64.slice - libcontainer container kubepods-besteffort-pod191683f1_2e27_49cd_a5d5_252b86fc9e64.slice. Feb 13 18:53:15.233058 kubelet[2397]: I0213 18:53:15.232889 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7h99\" (UniqueName: \"kubernetes.io/projected/191683f1-2e27-49cd-a5d5-252b86fc9e64-kube-api-access-v7h99\") pod \"nginx-deployment-85f456d6dd-h4z9v\" (UID: \"191683f1-2e27-49cd-a5d5-252b86fc9e64\") " pod="default/nginx-deployment-85f456d6dd-h4z9v" Feb 13 18:53:15.457783 systemd-networkd[1836]: cilium_net: Gained IPv6LL Feb 13 18:53:15.478132 containerd[1949]: time="2025-02-13T18:53:15.477614134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-h4z9v,Uid:191683f1-2e27-49cd-a5d5-252b86fc9e64,Namespace:default,Attempt:0,}" Feb 13 18:53:15.521305 systemd-networkd[1836]: cilium_host: Gained IPv6LL Feb 13 18:53:15.554083 kernel: NET: Registered PF_ALG protocol family Feb 13 18:53:15.670919 kubelet[2397]: E0213 18:53:15.670820 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:16.353321 systemd-networkd[1836]: cilium_vxlan: Gained IPv6LL Feb 13 18:53:16.655735 kubelet[2397]: E0213 18:53:16.655564 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:16.671179 kubelet[2397]: E0213 18:53:16.671111 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:16.873832 (udev-worker)[2886]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 18:53:16.876335 systemd-networkd[1836]: lxc_health: Link UP Feb 13 18:53:16.888912 systemd-networkd[1836]: lxc_health: Gained carrier Feb 13 18:53:17.548485 systemd-networkd[1836]: lxcc51c6037a800: Link UP Feb 13 18:53:17.557101 kernel: eth0: renamed from tmp99ae8 Feb 13 18:53:17.565961 systemd-networkd[1836]: lxcc51c6037a800: Gained carrier Feb 13 18:53:17.671884 kubelet[2397]: E0213 18:53:17.671805 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:17.694117 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 18:53:18.337302 systemd-networkd[1836]: lxc_health: Gained IPv6LL Feb 13 18:53:18.672894 kubelet[2397]: E0213 18:53:18.672600 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:19.553237 systemd-networkd[1836]: lxcc51c6037a800: Gained IPv6LL Feb 13 18:53:19.673694 kubelet[2397]: E0213 18:53:19.673555 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:20.674211 kubelet[2397]: E0213 18:53:20.674139 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:21.675314 kubelet[2397]: E0213 18:53:21.675243 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:21.725299 ntpd[1917]: Listen normally on 8 cilium_host 192.168.1.75:123 Feb 13 18:53:21.725460 ntpd[1917]: Listen normally on 9 cilium_net [fe80::e40f:aff:fee8:c16a%3]:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 8 cilium_host 192.168.1.75:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 9 cilium_net [fe80::e40f:aff:fee8:c16a%3]:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 10 cilium_host [fe80::6419:c8ff:fe80:d0fe%4]:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 11 cilium_vxlan [fe80::d83a:3ff:fe7e:d724%5]:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 12 lxc_health [fe80::b8d8:17ff:fe81:1c76%7]:123 Feb 13 18:53:21.725952 ntpd[1917]: 13 Feb 18:53:21 ntpd[1917]: Listen normally on 13 lxcc51c6037a800 [fe80::4fb:afff:fe4d:c34%9]:123 Feb 13 18:53:21.725551 ntpd[1917]: Listen normally on 10 cilium_host [fe80::6419:c8ff:fe80:d0fe%4]:123 Feb 13 18:53:21.725627 ntpd[1917]: Listen normally on 11 cilium_vxlan [fe80::d83a:3ff:fe7e:d724%5]:123 Feb 13 18:53:21.725693 ntpd[1917]: Listen normally on 12 lxc_health [fe80::b8d8:17ff:fe81:1c76%7]:123 Feb 13 18:53:21.725784 ntpd[1917]: Listen normally on 13 lxcc51c6037a800 [fe80::4fb:afff:fe4d:c34%9]:123 Feb 13 18:53:22.676442 kubelet[2397]: E0213 18:53:22.676374 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:23.677596 kubelet[2397]: E0213 18:53:23.677507 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:23.723088 kubelet[2397]: I0213 18:53:23.721910 2397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 18:53:24.678353 kubelet[2397]: E0213 18:53:24.678295 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
18:53:25.679154 kubelet[2397]: E0213 18:53:25.679066 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:25.985635 containerd[1949]: time="2025-02-13T18:53:25.984850492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:53:25.985635 containerd[1949]: time="2025-02-13T18:53:25.984955725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:53:25.985635 containerd[1949]: time="2025-02-13T18:53:25.984990890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:25.986379 containerd[1949]: time="2025-02-13T18:53:25.985335270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:26.028367 systemd[1]: Started cri-containerd-99ae822a5f8c892671cd36559bd36eb6d916ed4bb82852e2ff69909596c80e86.scope - libcontainer container 99ae822a5f8c892671cd36559bd36eb6d916ed4bb82852e2ff69909596c80e86. Feb 13 18:53:26.089536 containerd[1949]: time="2025-02-13T18:53:26.089115315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-h4z9v,Uid:191683f1-2e27-49cd-a5d5-252b86fc9e64,Namespace:default,Attempt:0,} returns sandbox id \"99ae822a5f8c892671cd36559bd36eb6d916ed4bb82852e2ff69909596c80e86\"" Feb 13 18:53:26.093832 containerd[1949]: time="2025-02-13T18:53:26.093368885Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 18:53:26.680139 kubelet[2397]: E0213 18:53:26.680083 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:27.681573 kubelet[2397]: E0213 18:53:27.681527 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:28.683919 kubelet[2397]: E0213 18:53:28.683715 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:29.249525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523215435.mount: Deactivated successfully. 
Feb 13 18:53:29.684665 kubelet[2397]: E0213 18:53:29.684509 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:30.631290 containerd[1949]: time="2025-02-13T18:53:30.630701853Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 18:53:30.631290 containerd[1949]: time="2025-02-13T18:53:30.630880178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:30.637077 containerd[1949]: time="2025-02-13T18:53:30.636876611Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:30.638814 containerd[1949]: time="2025-02-13T18:53:30.638736497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:30.641111 containerd[1949]: time="2025-02-13T18:53:30.640896400Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 4.54746979s" Feb 13 18:53:30.641111 containerd[1949]: time="2025-02-13T18:53:30.640952096Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 18:53:30.645664 containerd[1949]: time="2025-02-13T18:53:30.645606594Z" level=info msg="CreateContainer within sandbox \"99ae822a5f8c892671cd36559bd36eb6d916ed4bb82852e2ff69909596c80e86\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 18:53:30.666720 containerd[1949]: time="2025-02-13T18:53:30.666588928Z" level=info msg="CreateContainer within sandbox \"99ae822a5f8c892671cd36559bd36eb6d916ed4bb82852e2ff69909596c80e86\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8e9b48161e947ceb335199b7f22bb76df60b38cff38502c572e60edd1fe4306b\"" Feb 13 18:53:30.667895 containerd[1949]: time="2025-02-13T18:53:30.667393666Z" level=info msg="StartContainer for \"8e9b48161e947ceb335199b7f22bb76df60b38cff38502c572e60edd1fe4306b\"" Feb 13 18:53:30.685159 kubelet[2397]: E0213 18:53:30.684954 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:30.722428 systemd[1]: Started cri-containerd-8e9b48161e947ceb335199b7f22bb76df60b38cff38502c572e60edd1fe4306b.scope - libcontainer container 8e9b48161e947ceb335199b7f22bb76df60b38cff38502c572e60edd1fe4306b. 
Feb 13 18:53:30.770407 containerd[1949]: time="2025-02-13T18:53:30.770018146Z" level=info msg="StartContainer for \"8e9b48161e947ceb335199b7f22bb76df60b38cff38502c572e60edd1fe4306b\" returns successfully" Feb 13 18:53:31.036832 kubelet[2397]: I0213 18:53:31.035892 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-h4z9v" podStartSLOduration=11.485608906 podStartE2EDuration="16.035864016s" podCreationTimestamp="2025-02-13 18:53:15 +0000 UTC" firstStartedPulling="2025-02-13 18:53:26.092653603 +0000 UTC m=+31.053997357" lastFinishedPulling="2025-02-13 18:53:30.642908702 +0000 UTC m=+35.604252467" observedRunningTime="2025-02-13 18:53:31.035452918 +0000 UTC m=+35.996796696" watchObservedRunningTime="2025-02-13 18:53:31.035864016 +0000 UTC m=+35.997207781" Feb 13 18:53:31.611639 update_engine[1923]: I20250213 18:53:31.611547 1923 update_attempter.cc:509] Updating boot flags... Feb 13 18:53:31.686358 kubelet[2397]: E0213 18:53:31.685389 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:31.690172 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3596) Feb 13 18:53:31.965169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3596) Feb 13 18:53:32.212095 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3596) Feb 13 18:53:32.686070 kubelet[2397]: E0213 18:53:32.685986 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:33.686709 kubelet[2397]: E0213 18:53:33.686640 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:34.687198 kubelet[2397]: E0213 18:53:34.687140 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:35.687481 kubelet[2397]: E0213 18:53:35.687413 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:36.469801 kubelet[2397]: I0213 18:53:36.469711 2397 topology_manager.go:215] "Topology Admit Handler" podUID="9f9ac552-41c5-4405-b101-f328c00bc159" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 18:53:36.481898 systemd[1]: Created slice kubepods-besteffort-pod9f9ac552_41c5_4405_b101_f328c00bc159.slice - libcontainer container kubepods-besteffort-pod9f9ac552_41c5_4405_b101_f328c00bc159.slice. 
Feb 13 18:53:36.586001 kubelet[2397]: I0213 18:53:36.585879 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9f9ac552-41c5-4405-b101-f328c00bc159-data\") pod \"nfs-server-provisioner-0\" (UID: \"9f9ac552-41c5-4405-b101-f328c00bc159\") " pod="default/nfs-server-provisioner-0" Feb 13 18:53:36.586001 kubelet[2397]: I0213 18:53:36.585947 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99b5\" (UniqueName: \"kubernetes.io/projected/9f9ac552-41c5-4405-b101-f328c00bc159-kube-api-access-h99b5\") pod \"nfs-server-provisioner-0\" (UID: \"9f9ac552-41c5-4405-b101-f328c00bc159\") " pod="default/nfs-server-provisioner-0" Feb 13 18:53:36.655213 kubelet[2397]: E0213 18:53:36.655141 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:36.687655 kubelet[2397]: E0213 18:53:36.687591 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:36.787658 containerd[1949]: time="2025-02-13T18:53:36.787501638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9f9ac552-41c5-4405-b101-f328c00bc159,Namespace:default,Attempt:0,}" Feb 13 18:53:36.841104 (udev-worker)[3852]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:53:36.843274 systemd-networkd[1836]: lxc1a1433e7afeb: Link UP Feb 13 18:53:36.853173 kernel: eth0: renamed from tmp9da2f Feb 13 18:53:36.861981 (udev-worker)[3853]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:53:36.864251 systemd-networkd[1836]: lxc1a1433e7afeb: Gained carrier Feb 13 18:53:37.254576 containerd[1949]: time="2025-02-13T18:53:37.254380584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:53:37.255153 containerd[1949]: time="2025-02-13T18:53:37.254661644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:53:37.256101 containerd[1949]: time="2025-02-13T18:53:37.255678000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:37.256101 containerd[1949]: time="2025-02-13T18:53:37.255867179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:37.304389 systemd[1]: Started cri-containerd-9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653.scope - libcontainer container 9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653. 
Feb 13 18:53:37.363158 containerd[1949]: time="2025-02-13T18:53:37.363018784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9f9ac552-41c5-4405-b101-f328c00bc159,Namespace:default,Attempt:0,} returns sandbox id \"9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653\"" Feb 13 18:53:37.367111 containerd[1949]: time="2025-02-13T18:53:37.367001439Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 18:53:37.688151 kubelet[2397]: E0213 18:53:37.688017 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:37.710706 systemd[1]: run-containerd-runc-k8s.io-9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653-runc.TMeAHP.mount: Deactivated successfully. Feb 13 18:53:38.562677 systemd-networkd[1836]: lxc1a1433e7afeb: Gained IPv6LL Feb 13 18:53:38.688245 kubelet[2397]: E0213 18:53:38.688193 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:39.688529 kubelet[2397]: E0213 18:53:39.688476 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:40.085316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455169220.mount: Deactivated successfully. Feb 13 18:53:40.689691 kubelet[2397]: E0213 18:53:40.689639 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:40.725434 ntpd[1917]: Listen normally on 14 lxc1a1433e7afeb [fe80::dc77:eaff:fee8:b341%11]:123 Feb 13 18:53:40.726160 ntpd[1917]: 13 Feb 18:53:40 ntpd[1917]: Listen normally on 14 lxc1a1433e7afeb [fe80::dc77:eaff:fee8:b341%11]:123 Feb 13 18:53:41.690260 kubelet[2397]: E0213 18:53:41.690215 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:42.692147 kubelet[2397]: E0213 18:53:42.691598 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:43.450083 containerd[1949]: time="2025-02-13T18:53:43.449993533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:43.452680 containerd[1949]: time="2025-02-13T18:53:43.452585821Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Feb 13 18:53:43.454312 containerd[1949]: time="2025-02-13T18:53:43.454195849Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:43.458692 containerd[1949]: time="2025-02-13T18:53:43.458610047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:43.460870 containerd[1949]: time="2025-02-13T18:53:43.460790385Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.093309535s" Feb 13 18:53:43.463068 containerd[1949]: time="2025-02-13T18:53:43.461426691Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 18:53:43.469005 containerd[1949]: time="2025-02-13T18:53:43.468932614Z" level=info msg="CreateContainer within sandbox \"9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 18:53:43.488716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666510611.mount: Deactivated successfully. Feb 13 18:53:43.498385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665024146.mount: Deactivated successfully. Feb 13 18:53:43.505785 containerd[1949]: time="2025-02-13T18:53:43.505724461Z" level=info msg="CreateContainer within sandbox \"9da2f0b7e692586e6c9f211b868d21d61ee9e0515371f6dc2d99fe27c0d8d653\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5d73d8c04ad8baa79a7cb6f5610f981a8a06b9937a8b3f87316a136f52742248\"" Feb 13 18:53:43.507067 containerd[1949]: time="2025-02-13T18:53:43.506995140Z" level=info msg="StartContainer for \"5d73d8c04ad8baa79a7cb6f5610f981a8a06b9937a8b3f87316a136f52742248\"" Feb 13 18:53:43.561369 systemd[1]: Started cri-containerd-5d73d8c04ad8baa79a7cb6f5610f981a8a06b9937a8b3f87316a136f52742248.scope - libcontainer container 5d73d8c04ad8baa79a7cb6f5610f981a8a06b9937a8b3f87316a136f52742248. Feb 13 18:53:43.607821 containerd[1949]: time="2025-02-13T18:53:43.607751006Z" level=info msg="StartContainer for \"5d73d8c04ad8baa79a7cb6f5610f981a8a06b9937a8b3f87316a136f52742248\" returns successfully" Feb 13 18:53:43.692136 kubelet[2397]: E0213 18:53:43.692049 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:44.081131 kubelet[2397]: I0213 18:53:44.080543 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.981281566 podStartE2EDuration="8.0805209s" podCreationTimestamp="2025-02-13 18:53:36 +0000 UTC" firstStartedPulling="2025-02-13 18:53:37.366326617 +0000 UTC m=+42.327670383" lastFinishedPulling="2025-02-13 18:53:43.465565952 +0000 UTC m=+48.426909717" observedRunningTime="2025-02-13 18:53:44.080324674 +0000 UTC m=+49.041668463" watchObservedRunningTime="2025-02-13 18:53:44.0805209 +0000 UTC m=+49.041864677" Feb 13 18:53:44.692702 kubelet[2397]: E0213 18:53:44.692640 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:45.693244 kubelet[2397]: E0213 18:53:45.693180 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:46.694373 kubelet[2397]: E0213 18:53:46.694311 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:47.695346 kubelet[2397]: E0213 18:53:47.695258 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:48.695623 kubelet[2397]: E0213 18:53:48.695560 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:49.696380 kubelet[2397]: E0213 18:53:49.696316 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:50.697563 kubelet[2397]: E0213 18:53:50.697469 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:51.698246 kubelet[2397]: E0213 18:53:51.698177 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:52.698581 kubelet[2397]: E0213 18:53:52.698499 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:53.219347 kubelet[2397]: I0213 18:53:53.219230 2397 topology_manager.go:215] "Topology Admit Handler" podUID="f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85" podNamespace="default" podName="test-pod-1" Feb 13 18:53:53.233605 systemd[1]: Created slice kubepods-besteffort-podf1a8e6cf_7cbe_40a1_900c_666a4d0dbb85.slice - libcontainer container kubepods-besteffort-podf1a8e6cf_7cbe_40a1_900c_666a4d0dbb85.slice. Feb 13 18:53:53.297615 kubelet[2397]: I0213 18:53:53.297547 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8l6\" (UniqueName: \"kubernetes.io/projected/f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85-kube-api-access-vx8l6\") pod \"test-pod-1\" (UID: \"f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85\") " pod="default/test-pod-1" Feb 13 18:53:53.297994 kubelet[2397]: I0213 18:53:53.297956 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed9b29e6-821d-400a-a2a3-fb99506afdc2\" (UniqueName: \"kubernetes.io/nfs/f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85-pvc-ed9b29e6-821d-400a-a2a3-fb99506afdc2\") pod \"test-pod-1\" (UID: \"f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85\") " pod="default/test-pod-1" Feb 13 18:53:53.435068 kernel: FS-Cache: Loaded Feb 13 18:53:53.477814 kernel: RPC: Registered named UNIX socket transport module. Feb 13 18:53:53.477944 kernel: RPC: Registered udp transport module. Feb 13 18:53:53.477987 kernel: RPC: Registered tcp transport module. Feb 13 18:53:53.478801 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 18:53:53.479778 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 18:53:53.699135 kubelet[2397]: E0213 18:53:53.699067 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:53.786376 kernel: NFS: Registering the id_resolver key type Feb 13 18:53:53.786644 kernel: Key type id_resolver registered Feb 13 18:53:53.786687 kernel: Key type id_legacy registered Feb 13 18:53:53.828683 nfsidmap[4036]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 18:53:53.837841 nfsidmap[4037]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Feb 13 18:53:54.140902 containerd[1949]: time="2025-02-13T18:53:54.140785988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85,Namespace:default,Attempt:0,}" Feb 13 18:53:54.189835 systemd-networkd[1836]: lxcf12c946f59f2: Link UP Feb 13 18:53:54.198176 kernel: eth0: renamed from tmp56dec Feb 13 18:53:54.197313 (udev-worker)[4023]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:53:54.207885 systemd-networkd[1836]: lxcf12c946f59f2: Gained carrier Feb 13 18:53:54.568305 containerd[1949]: time="2025-02-13T18:53:54.567290725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:53:54.568544 containerd[1949]: time="2025-02-13T18:53:54.568016800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:53:54.568544 containerd[1949]: time="2025-02-13T18:53:54.568088812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:54.568544 containerd[1949]: time="2025-02-13T18:53:54.568349114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:53:54.613349 systemd[1]: Started cri-containerd-56decf33e047f7373348c50e0daeed1eab8d303cd98c405f10344ac11cb7d956.scope - libcontainer container 56decf33e047f7373348c50e0daeed1eab8d303cd98c405f10344ac11cb7d956. 
Feb 13 18:53:54.672616 containerd[1949]: time="2025-02-13T18:53:54.672560054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f1a8e6cf-7cbe-40a1-900c-666a4d0dbb85,Namespace:default,Attempt:0,} returns sandbox id \"56decf33e047f7373348c50e0daeed1eab8d303cd98c405f10344ac11cb7d956\"" Feb 13 18:53:54.675942 containerd[1949]: time="2025-02-13T18:53:54.675891251Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 18:53:54.700419 kubelet[2397]: E0213 18:53:54.700265 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:55.032525 containerd[1949]: time="2025-02-13T18:53:55.032293518Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:53:55.033512 containerd[1949]: time="2025-02-13T18:53:55.033438578Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 18:53:55.039768 containerd[1949]: time="2025-02-13T18:53:55.039706058Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 363.732794ms" Feb 13 18:53:55.039768 containerd[1949]: time="2025-02-13T18:53:55.039764695Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 18:53:55.043531 containerd[1949]: time="2025-02-13T18:53:55.043356254Z" level=info msg="CreateContainer within sandbox \"56decf33e047f7373348c50e0daeed1eab8d303cd98c405f10344ac11cb7d956\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 18:53:55.062477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365279469.mount: Deactivated successfully. Feb 13 18:53:55.068098 containerd[1949]: time="2025-02-13T18:53:55.068005893Z" level=info msg="CreateContainer within sandbox \"56decf33e047f7373348c50e0daeed1eab8d303cd98c405f10344ac11cb7d956\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d2354061a81814f482c72760d875fadf9be98673135b57700369424165e9a2f3\"" Feb 13 18:53:55.069549 containerd[1949]: time="2025-02-13T18:53:55.068702385Z" level=info msg="StartContainer for \"d2354061a81814f482c72760d875fadf9be98673135b57700369424165e9a2f3\"" Feb 13 18:53:55.117474 systemd[1]: Started cri-containerd-d2354061a81814f482c72760d875fadf9be98673135b57700369424165e9a2f3.scope - libcontainer container d2354061a81814f482c72760d875fadf9be98673135b57700369424165e9a2f3. 
Feb 13 18:53:55.174816 containerd[1949]: time="2025-02-13T18:53:55.174747266Z" level=info msg="StartContainer for \"d2354061a81814f482c72760d875fadf9be98673135b57700369424165e9a2f3\" returns successfully" Feb 13 18:53:55.521392 systemd-networkd[1836]: lxcf12c946f59f2: Gained IPv6LL Feb 13 18:53:55.700885 kubelet[2397]: E0213 18:53:55.700812 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:56.138351 kubelet[2397]: I0213 18:53:56.138266 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.772076746 podStartE2EDuration="19.13824354s" podCreationTimestamp="2025-02-13 18:53:37 +0000 UTC" firstStartedPulling="2025-02-13 18:53:54.674741725 +0000 UTC m=+59.636085490" lastFinishedPulling="2025-02-13 18:53:55.040908519 +0000 UTC m=+60.002252284" observedRunningTime="2025-02-13 18:53:56.137709609 +0000 UTC m=+61.099053398" watchObservedRunningTime="2025-02-13 18:53:56.13824354 +0000 UTC m=+61.099587293" Feb 13 18:53:56.654807 kubelet[2397]: E0213 18:53:56.654739 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:56.702012 kubelet[2397]: E0213 18:53:56.701925 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:57.702819 kubelet[2397]: E0213 18:53:57.702714 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:57.725597 ntpd[1917]: Listen normally on 15 lxcf12c946f59f2 [fe80::d0e4:9dff:fed3:2dda%13]:123 Feb 13 18:53:57.726150 ntpd[1917]: 13 Feb 18:53:57 ntpd[1917]: Listen normally on 15 lxcf12c946f59f2 [fe80::d0e4:9dff:fed3:2dda%13]:123 Feb 13 18:53:58.703269 kubelet[2397]: E0213 18:53:58.703207 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:53:59.704188 kubelet[2397]: E0213 18:53:59.704088 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:00.704810 kubelet[2397]: E0213 18:54:00.704742 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:01.705467 kubelet[2397]: E0213 18:54:01.705400 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:02.706423 kubelet[2397]: E0213 18:54:02.706361 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:03.254734 containerd[1949]: time="2025-02-13T18:54:03.254662225Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 18:54:03.267477 containerd[1949]: time="2025-02-13T18:54:03.267347980Z" level=info msg="StopContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" with timeout 2 (s)" Feb 13 18:54:03.268258 containerd[1949]: time="2025-02-13T18:54:03.268205101Z" level=info msg="Stop container \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" with signal terminated" Feb 13 18:54:03.281387 systemd-networkd[1836]: lxc_health: Link DOWN 
Feb 13 18:54:03.281407 systemd-networkd[1836]: lxc_health: Lost carrier Feb 13 18:54:03.304973 systemd[1]: cri-containerd-adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27.scope: Deactivated successfully. Feb 13 18:54:03.305841 systemd[1]: cri-containerd-adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27.scope: Consumed 14.910s CPU time. Feb 13 18:54:03.346382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27-rootfs.mount: Deactivated successfully. Feb 13 18:54:03.608106 containerd[1949]: time="2025-02-13T18:54:03.607621278Z" level=info msg="shim disconnected" id=adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27 namespace=k8s.io Feb 13 18:54:03.608106 containerd[1949]: time="2025-02-13T18:54:03.607771678Z" level=warning msg="cleaning up after shim disconnected" id=adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27 namespace=k8s.io Feb 13 18:54:03.608106 containerd[1949]: time="2025-02-13T18:54:03.607794669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:03.633404 containerd[1949]: time="2025-02-13T18:54:03.633228948Z" level=info msg="StopContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" returns successfully" Feb 13 18:54:03.634633 containerd[1949]: time="2025-02-13T18:54:03.634491091Z" level=info msg="StopPodSandbox for \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\"" Feb 13 18:54:03.634633 containerd[1949]: time="2025-02-13T18:54:03.634585002Z" level=info msg="Container to stop \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 18:54:03.634633 containerd[1949]: time="2025-02-13T18:54:03.634614176Z" level=info msg="Container to stop \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 18:54:03.634633 containerd[1949]: time="2025-02-13T18:54:03.634635943Z" level=info msg="Container to stop \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 18:54:03.634633 containerd[1949]: time="2025-02-13T18:54:03.634657398Z" level=info msg="Container to stop \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 18:54:03.635227 containerd[1949]: time="2025-02-13T18:54:03.634677688Z" level=info msg="Container to stop \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 18:54:03.639685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371-shm.mount: Deactivated successfully. Feb 13 18:54:03.649436 systemd[1]: cri-containerd-1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371.scope: Deactivated successfully. Feb 13 18:54:03.688502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371-rootfs.mount: Deactivated successfully. 
Feb 13 18:54:03.692115 containerd[1949]: time="2025-02-13T18:54:03.691797650Z" level=info msg="shim disconnected" id=1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371 namespace=k8s.io Feb 13 18:54:03.692115 containerd[1949]: time="2025-02-13T18:54:03.691882701Z" level=warning msg="cleaning up after shim disconnected" id=1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371 namespace=k8s.io Feb 13 18:54:03.692115 containerd[1949]: time="2025-02-13T18:54:03.691903855Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:03.706758 kubelet[2397]: E0213 18:54:03.706629 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:03.717116 containerd[1949]: time="2025-02-13T18:54:03.716809090Z" level=info msg="TearDown network for sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" successfully" Feb 13 18:54:03.717116 containerd[1949]: time="2025-02-13T18:54:03.716877260Z" level=info msg="StopPodSandbox for \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" returns successfully" Feb 13 18:54:03.765928 kubelet[2397]: I0213 18:54:03.765823 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-cgroup\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.765928 kubelet[2397]: I0213 18:54:03.765852 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.765928 kubelet[2397]: I0213 18:54:03.765905 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/208579d7-1b30-431c-b822-6d4bb139f1a5-clustermesh-secrets\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.765952 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6q8v\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-kube-api-access-z6q8v\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.765991 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-etc-cni-netd\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.766049 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-lib-modules\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.766090 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cni-path\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.766134 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-xtables-lock\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766310 kubelet[2397]: I0213 18:54:03.766176 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-config-path\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766223 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-hubble-tls\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766259 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-net\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766302 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-hostproc\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " 
Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766335 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-kernel\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766373 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-run\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766660 kubelet[2397]: I0213 18:54:03.766408 2397 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-bpf-maps\") pod \"208579d7-1b30-431c-b822-6d4bb139f1a5\" (UID: \"208579d7-1b30-431c-b822-6d4bb139f1a5\") " Feb 13 18:54:03.766979 kubelet[2397]: I0213 18:54:03.766477 2397 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-cgroup\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.766979 kubelet[2397]: I0213 18:54:03.766551 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772135 kubelet[2397]: I0213 18:54:03.770276 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772135 kubelet[2397]: I0213 18:54:03.770386 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772135 kubelet[2397]: I0213 18:54:03.770449 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cni-path" (OuterVolumeSpecName: "cni-path") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772135 kubelet[2397]: I0213 18:54:03.770521 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772135 kubelet[2397]: I0213 18:54:03.771172 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772518 kubelet[2397]: I0213 18:54:03.771266 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-hostproc" (OuterVolumeSpecName: "hostproc") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772518 kubelet[2397]: I0213 18:54:03.771347 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.772518 kubelet[2397]: I0213 18:54:03.771431 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 18:54:03.780080 kubelet[2397]: I0213 18:54:03.777613 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 18:54:03.781870 systemd[1]: var-lib-kubelet-pods-208579d7\x2d1b30\x2d431c\x2db822\x2d6d4bb139f1a5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 18:54:03.783806 kubelet[2397]: I0213 18:54:03.783696 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/208579d7-1b30-431c-b822-6d4bb139f1a5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 18:54:03.786105 kubelet[2397]: I0213 18:54:03.785964 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 18:54:03.786105 kubelet[2397]: I0213 18:54:03.785995 2397 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-kube-api-access-z6q8v" (OuterVolumeSpecName: "kube-api-access-z6q8v") pod "208579d7-1b30-431c-b822-6d4bb139f1a5" (UID: "208579d7-1b30-431c-b822-6d4bb139f1a5"). InnerVolumeSpecName "kube-api-access-z6q8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867091 2397 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-run\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867135 2397 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-bpf-maps\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867157 2397 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-hubble-tls\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867177 2397 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-net\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867203 2397 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-hostproc\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867223 2397 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-host-proc-sys-kernel\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867242 2397 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z6q8v\" (UniqueName: \"kubernetes.io/projected/208579d7-1b30-431c-b822-6d4bb139f1a5-kube-api-access-z6q8v\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.867578 kubelet[2397]: I0213 18:54:03.867263 2397 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/208579d7-1b30-431c-b822-6d4bb139f1a5-clustermesh-secrets\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.868153 kubelet[2397]: I0213 18:54:03.867283 2397 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-lib-modules\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.868153 kubelet[2397]: I0213 18:54:03.867301 2397 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-cni-path\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.868153 kubelet[2397]: I0213 18:54:03.867320 2397 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-etc-cni-netd\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.868153 
kubelet[2397]: I0213 18:54:03.867340 2397 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/208579d7-1b30-431c-b822-6d4bb139f1a5-cilium-config-path\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:03.868153 kubelet[2397]: I0213 18:54:03.867358 2397 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/208579d7-1b30-431c-b822-6d4bb139f1a5-xtables-lock\") on node \"172.31.27.136\" DevicePath \"\"" Feb 13 18:54:04.124218 kubelet[2397]: I0213 18:54:04.123265 2397 scope.go:117] "RemoveContainer" containerID="adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27" Feb 13 18:54:04.128661 containerd[1949]: time="2025-02-13T18:54:04.127933984Z" level=info msg="RemoveContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\"" Feb 13 18:54:04.134240 containerd[1949]: time="2025-02-13T18:54:04.134190383Z" level=info msg="RemoveContainer for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" returns successfully" Feb 13 18:54:04.135794 kubelet[2397]: I0213 18:54:04.135722 2397 scope.go:117] "RemoveContainer" containerID="4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f" Feb 13 18:54:04.136790 systemd[1]: Removed slice kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice - libcontainer container kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice. Feb 13 18:54:04.137588 systemd[1]: kubepods-burstable-pod208579d7_1b30_431c_b822_6d4bb139f1a5.slice: Consumed 15.068s CPU time. Feb 13 18:54:04.141139 containerd[1949]: time="2025-02-13T18:54:04.141082968Z" level=info msg="RemoveContainer for \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\"" Feb 13 18:54:04.146100 containerd[1949]: time="2025-02-13T18:54:04.145972988Z" level=info msg="RemoveContainer for \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\" returns successfully" Feb 13 18:54:04.147869 kubelet[2397]: I0213 18:54:04.146635 2397 scope.go:117] "RemoveContainer" containerID="3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde" Feb 13 18:54:04.149442 containerd[1949]: time="2025-02-13T18:54:04.149394878Z" level=info msg="RemoveContainer for \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\"" Feb 13 18:54:04.153558 containerd[1949]: time="2025-02-13T18:54:04.153488563Z" level=info msg="RemoveContainer for \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\" returns successfully" Feb 13 18:54:04.154218 kubelet[2397]: I0213 18:54:04.154177 2397 scope.go:117] "RemoveContainer" containerID="a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730" Feb 13 18:54:04.156548 containerd[1949]: time="2025-02-13T18:54:04.156494866Z" level=info msg="RemoveContainer for \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\"" Feb 13 18:54:04.161114 containerd[1949]: time="2025-02-13T18:54:04.160910481Z" level=info msg="RemoveContainer for \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\" returns successfully" Feb 13 18:54:04.161386 kubelet[2397]: I0213 18:54:04.161343 2397 scope.go:117] "RemoveContainer" containerID="5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7" Feb 13 18:54:04.164487 containerd[1949]: time="2025-02-13T18:54:04.164433821Z" level=info msg="RemoveContainer for \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\"" Feb 13 18:54:04.168573 containerd[1949]: 
time="2025-02-13T18:54:04.168399992Z" level=info msg="RemoveContainer for \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\" returns successfully" Feb 13 18:54:04.168917 kubelet[2397]: I0213 18:54:04.168872 2397 scope.go:117] "RemoveContainer" containerID="adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27" Feb 13 18:54:04.169415 containerd[1949]: time="2025-02-13T18:54:04.169271111Z" level=error msg="ContainerStatus for \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\": not found" Feb 13 18:54:04.169551 kubelet[2397]: E0213 18:54:04.169502 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\": not found" containerID="adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27" Feb 13 18:54:04.169741 kubelet[2397]: I0213 18:54:04.169558 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27"} err="failed to get container status \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\": rpc error: code = NotFound desc = an error occurred when try to find container \"adf1c8887a8a6a826c15718868738e37295b0d17483f772f1a4c1d468e554b27\": not found" Feb 13 18:54:04.169741 kubelet[2397]: I0213 18:54:04.169710 2397 scope.go:117] "RemoveContainer" containerID="4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f" Feb 13 18:54:04.170703 containerd[1949]: time="2025-02-13T18:54:04.170573354Z" level=error msg="ContainerStatus for \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\": not found" Feb 13 18:54:04.171252 kubelet[2397]: E0213 18:54:04.170978 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\": not found" containerID="4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f" Feb 13 18:54:04.171252 kubelet[2397]: I0213 18:54:04.171068 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f"} err="failed to get container status \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b0401ec8871739e2e2aee3043a3598af3c53e22c32ebd047ffd66a79943ad4f\": not found" Feb 13 18:54:04.171252 kubelet[2397]: I0213 18:54:04.171110 2397 scope.go:117] "RemoveContainer" containerID="3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde" Feb 13 18:54:04.171716 containerd[1949]: time="2025-02-13T18:54:04.171528419Z" level=error msg="ContainerStatus for \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\": not found" Feb 13 18:54:04.172055 
kubelet[2397]: E0213 18:54:04.171998 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\": not found" containerID="3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde" Feb 13 18:54:04.172148 kubelet[2397]: I0213 18:54:04.172101 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde"} err="failed to get container status \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b66573f22feed74b144a7de012c304e0d23bd68c0c7e74ad8e1b384d2c3ddde\": not found" Feb 13 18:54:04.172209 kubelet[2397]: I0213 18:54:04.172168 2397 scope.go:117] "RemoveContainer" containerID="a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730" Feb 13 18:54:04.172572 containerd[1949]: time="2025-02-13T18:54:04.172525121Z" level=error msg="ContainerStatus for \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\": not found" Feb 13 18:54:04.172954 kubelet[2397]: E0213 18:54:04.172915 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\": not found" containerID="a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730" Feb 13 18:54:04.173089 kubelet[2397]: I0213 18:54:04.172963 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730"} err="failed to get container status \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\": rpc error: code = NotFound desc = an error occurred when try to find container \"a368d2b922b35d1ff7126b05341cc5612e482c71f05665a27036ee8caeb79730\": not found" Feb 13 18:54:04.173089 kubelet[2397]: I0213 18:54:04.172999 2397 scope.go:117] "RemoveContainer" containerID="5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7" Feb 13 18:54:04.173350 containerd[1949]: time="2025-02-13T18:54:04.173287994Z" level=error msg="ContainerStatus for \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\": not found" Feb 13 18:54:04.173620 kubelet[2397]: E0213 18:54:04.173542 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\": not found" containerID="5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7" Feb 13 18:54:04.173620 kubelet[2397]: I0213 18:54:04.173583 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7"} err="failed to get container status \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"5854e9bbebb8aabc47be27b70184c518d42f8f3d1130eaa8a68d381f9d6d88c7\": not found" Feb 13 18:54:04.234652 systemd[1]: var-lib-kubelet-pods-208579d7\x2d1b30\x2d431c\x2db822\x2d6d4bb139f1a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6q8v.mount: Deactivated successfully. Feb 13 18:54:04.234837 systemd[1]: var-lib-kubelet-pods-208579d7\x2d1b30\x2d431c\x2db822\x2d6d4bb139f1a5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 18:54:04.706928 kubelet[2397]: E0213 18:54:04.706854 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:04.815121 kubelet[2397]: I0213 18:54:04.815059 2397 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" path="/var/lib/kubelet/pods/208579d7-1b30-431c-b822-6d4bb139f1a5/volumes" Feb 13 18:54:05.707310 kubelet[2397]: E0213 18:54:05.707235 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:05.725456 ntpd[1917]: Deleting interface #12 lxc_health, fe80::b8d8:17ff:fe81:1c76%7#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs Feb 13 18:54:05.725966 ntpd[1917]: 13 Feb 18:54:05 ntpd[1917]: Deleting interface #12 lxc_health, fe80::b8d8:17ff:fe81:1c76%7#123, interface stats: received=0, sent=0, dropped=0, active_time=44 secs Feb 13 18:54:06.707692 kubelet[2397]: E0213 18:54:06.707624 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:06.838637 kubelet[2397]: E0213 18:54:06.838580 2397 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 18:54:07.708729 kubelet[2397]: E0213 18:54:07.708663 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:08.212299 kubelet[2397]: I0213 18:54:08.212183 2397 topology_manager.go:215] "Topology Admit Handler" podUID="39c8ea67-0667-45b1-8116-09648134a9ec" podNamespace="kube-system" podName="cilium-operator-599987898-79vvf" Feb 13 18:54:08.212299 kubelet[2397]: E0213 18:54:08.212255 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="mount-bpf-fs" Feb 13 18:54:08.212299 kubelet[2397]: E0213 18:54:08.212275 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="clean-cilium-state" Feb 13 18:54:08.212299 kubelet[2397]: E0213 18:54:08.212290 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="cilium-agent" Feb 13 18:54:08.212299 kubelet[2397]: E0213 18:54:08.212305 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="mount-cgroup" Feb 13 18:54:08.213151 kubelet[2397]: E0213 18:54:08.212319 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="apply-sysctl-overwrites" Feb 13 18:54:08.213151 kubelet[2397]: I0213 18:54:08.212355 2397 memory_manager.go:354] "RemoveStaleState removing state" podUID="208579d7-1b30-431c-b822-6d4bb139f1a5" containerName="cilium-agent" Feb 13 
18:54:08.215098 kubelet[2397]: I0213 18:54:08.214539 2397 topology_manager.go:215] "Topology Admit Handler" podUID="5f6339d5-4a5b-4d5d-925e-ac6337c1cad7" podNamespace="kube-system" podName="cilium-ntsg6" Feb 13 18:54:08.225739 systemd[1]: Created slice kubepods-besteffort-pod39c8ea67_0667_45b1_8116_09648134a9ec.slice - libcontainer container kubepods-besteffort-pod39c8ea67_0667_45b1_8116_09648134a9ec.slice. Feb 13 18:54:08.237501 systemd[1]: Created slice kubepods-burstable-pod5f6339d5_4a5b_4d5d_925e_ac6337c1cad7.slice - libcontainer container kubepods-burstable-pod5f6339d5_4a5b_4d5d_925e_ac6337c1cad7.slice. Feb 13 18:54:08.274163 kubelet[2397]: W0213 18:54:08.274108 2397 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274548 kubelet[2397]: E0213 18:54:08.274174 2397 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274548 kubelet[2397]: W0213 18:54:08.274328 2397 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274548 kubelet[2397]: E0213 18:54:08.274356 2397 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274548 kubelet[2397]: W0213 18:54:08.274335 2397 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.27.136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274548 kubelet[2397]: E0213 18:54:08.274386 2397 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.27.136" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274842 kubelet[2397]: W0213 18:54:08.274396 2397 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.274842 kubelet[2397]: E0213 18:54:08.274424 2397 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User 
"system:node:172.31.27.136" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.27.136' and this object Feb 13 18:54:08.294463 kubelet[2397]: I0213 18:54:08.294346 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-xtables-lock\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.294463 kubelet[2397]: I0213 18:54:08.294451 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-hubble-tls\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.294934 kubelet[2397]: I0213 18:54:08.294495 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqtf\" (UniqueName: \"kubernetes.io/projected/39c8ea67-0667-45b1-8116-09648134a9ec-kube-api-access-vfqtf\") pod \"cilium-operator-599987898-79vvf\" (UID: \"39c8ea67-0667-45b1-8116-09648134a9ec\") " pod="kube-system/cilium-operator-599987898-79vvf" Feb 13 18:54:08.294934 kubelet[2397]: I0213 18:54:08.294539 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-cilium-run\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.294934 kubelet[2397]: I0213 18:54:08.294572 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-bpf-maps\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.294934 kubelet[2397]: I0213 18:54:08.294605 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-hostproc\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.294934 kubelet[2397]: I0213 18:54:08.294643 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-cilium-cgroup\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295239 kubelet[2397]: I0213 18:54:08.294677 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmtz\" (UniqueName: \"kubernetes.io/projected/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-kube-api-access-nzmtz\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295239 kubelet[2397]: I0213 18:54:08.294711 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/39c8ea67-0667-45b1-8116-09648134a9ec-cilium-config-path\") pod \"cilium-operator-599987898-79vvf\" (UID: 
\"39c8ea67-0667-45b1-8116-09648134a9ec\") " pod="kube-system/cilium-operator-599987898-79vvf" Feb 13 18:54:08.295239 kubelet[2397]: I0213 18:54:08.294747 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-cni-path\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295505 kubelet[2397]: I0213 18:54:08.295416 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-etc-cni-netd\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295594 kubelet[2397]: I0213 18:54:08.295504 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-clustermesh-secrets\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295594 kubelet[2397]: I0213 18:54:08.295549 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-cilium-config-path\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295707 kubelet[2397]: I0213 18:54:08.295612 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-host-proc-sys-kernel\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295707 kubelet[2397]: I0213 18:54:08.295648 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-lib-modules\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295815 kubelet[2397]: I0213 18:54:08.295711 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-cilium-ipsec-secrets\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.295815 kubelet[2397]: I0213 18:54:08.295776 2397 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-host-proc-sys-net\") pod \"cilium-ntsg6\" (UID: \"5f6339d5-4a5b-4d5d-925e-ac6337c1cad7\") " pod="kube-system/cilium-ntsg6" Feb 13 18:54:08.514477 kubelet[2397]: I0213 18:54:08.514308 2397 setters.go:580] "Node became not ready" node="172.31.27.136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T18:54:08Z","lastTransitionTime":"2025-02-13T18:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized"} Feb 13 18:54:08.709169 kubelet[2397]: E0213 18:54:08.709098 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:09.134807 containerd[1949]: time="2025-02-13T18:54:09.134354583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-79vvf,Uid:39c8ea67-0667-45b1-8116-09648134a9ec,Namespace:kube-system,Attempt:0,}" Feb 13 18:54:09.174595 containerd[1949]: time="2025-02-13T18:54:09.174216952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:54:09.174595 containerd[1949]: time="2025-02-13T18:54:09.174331838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:54:09.174595 containerd[1949]: time="2025-02-13T18:54:09.174371421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:54:09.176268 containerd[1949]: time="2025-02-13T18:54:09.176171097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:54:09.226876 systemd[1]: Started cri-containerd-d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7.scope - libcontainer container d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7. Feb 13 18:54:09.298966 containerd[1949]: time="2025-02-13T18:54:09.298637376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-79vvf,Uid:39c8ea67-0667-45b1-8116-09648134a9ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7\"" Feb 13 18:54:09.303004 containerd[1949]: time="2025-02-13T18:54:09.302869731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 18:54:09.398310 kubelet[2397]: E0213 18:54:09.397633 2397 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 13 18:54:09.398310 kubelet[2397]: E0213 18:54:09.397754 2397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-clustermesh-secrets podName:5f6339d5-4a5b-4d5d-925e-ac6337c1cad7 nodeName:}" failed. No retries permitted until 2025-02-13 18:54:09.897724576 +0000 UTC m=+74.859068341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-clustermesh-secrets") pod "cilium-ntsg6" (UID: "5f6339d5-4a5b-4d5d-925e-ac6337c1cad7") : failed to sync secret cache: timed out waiting for the condition Feb 13 18:54:09.398310 kubelet[2397]: E0213 18:54:09.398132 2397 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 18:54:09.398310 kubelet[2397]: E0213 18:54:09.398159 2397 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ntsg6: failed to sync secret cache: timed out waiting for the condition Feb 13 18:54:09.398310 kubelet[2397]: E0213 18:54:09.398242 2397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-hubble-tls podName:5f6339d5-4a5b-4d5d-925e-ac6337c1cad7 nodeName:}" failed. No retries permitted until 2025-02-13 18:54:09.898217482 +0000 UTC m=+74.859561235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5f6339d5-4a5b-4d5d-925e-ac6337c1cad7-hubble-tls") pod "cilium-ntsg6" (UID: "5f6339d5-4a5b-4d5d-925e-ac6337c1cad7") : failed to sync secret cache: timed out waiting for the condition Feb 13 18:54:09.474148 systemd[1]: run-containerd-runc-k8s.io-d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7-runc.3o3UVV.mount: Deactivated successfully. Feb 13 18:54:09.710233 kubelet[2397]: E0213 18:54:09.710065 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:10.050697 containerd[1949]: time="2025-02-13T18:54:10.050487715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntsg6,Uid:5f6339d5-4a5b-4d5d-925e-ac6337c1cad7,Namespace:kube-system,Attempt:0,}" Feb 13 18:54:10.091878 containerd[1949]: time="2025-02-13T18:54:10.091581768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 18:54:10.091878 containerd[1949]: time="2025-02-13T18:54:10.091821732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 18:54:10.091878 containerd[1949]: time="2025-02-13T18:54:10.091864761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:54:10.092388 containerd[1949]: time="2025-02-13T18:54:10.092073317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 18:54:10.142578 systemd[1]: Started cri-containerd-bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9.scope - libcontainer container bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9. 
Feb 13 18:54:10.185141 containerd[1949]: time="2025-02-13T18:54:10.185089299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ntsg6,Uid:5f6339d5-4a5b-4d5d-925e-ac6337c1cad7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\"" Feb 13 18:54:10.191215 containerd[1949]: time="2025-02-13T18:54:10.191127777Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 18:54:10.206820 containerd[1949]: time="2025-02-13T18:54:10.206600029Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849\"" Feb 13 18:54:10.207900 containerd[1949]: time="2025-02-13T18:54:10.207716251Z" level=info msg="StartContainer for \"fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849\"" Feb 13 18:54:10.258376 systemd[1]: Started cri-containerd-fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849.scope - libcontainer container fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849. Feb 13 18:54:10.306121 containerd[1949]: time="2025-02-13T18:54:10.305120041Z" level=info msg="StartContainer for \"fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849\" returns successfully" Feb 13 18:54:10.321696 systemd[1]: cri-containerd-fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849.scope: Deactivated successfully. Feb 13 18:54:10.372427 containerd[1949]: time="2025-02-13T18:54:10.372164499Z" level=info msg="shim disconnected" id=fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849 namespace=k8s.io Feb 13 18:54:10.372427 containerd[1949]: time="2025-02-13T18:54:10.372368337Z" level=warning msg="cleaning up after shim disconnected" id=fe24a653be43db0a550c22f0131cc91acca94fc2de7cf19cc7427541b2c89849 namespace=k8s.io Feb 13 18:54:10.373268 containerd[1949]: time="2025-02-13T18:54:10.372397223Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:10.556501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364051317.mount: Deactivated successfully. 
Feb 13 18:54:10.710360 kubelet[2397]: E0213 18:54:10.710247 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:11.156452 containerd[1949]: time="2025-02-13T18:54:11.156289895Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 18:54:11.159716 containerd[1949]: time="2025-02-13T18:54:11.159254093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:54:11.162738 containerd[1949]: time="2025-02-13T18:54:11.162662740Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 18:54:11.163807 containerd[1949]: time="2025-02-13T18:54:11.163731118Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 18:54:11.168989 containerd[1949]: time="2025-02-13T18:54:11.168908994Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.865888347s" Feb 13 18:54:11.169264 containerd[1949]: time="2025-02-13T18:54:11.169082444Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 18:54:11.183845 containerd[1949]: time="2025-02-13T18:54:11.183477062Z" level=info msg="CreateContainer within sandbox \"d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 18:54:11.191311 containerd[1949]: time="2025-02-13T18:54:11.191246229Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe\"" Feb 13 18:54:11.192464 containerd[1949]: time="2025-02-13T18:54:11.191947811Z" level=info msg="StartContainer for \"a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe\"" Feb 13 18:54:11.205194 containerd[1949]: time="2025-02-13T18:54:11.204750650Z" level=info msg="CreateContainer within sandbox \"d8c8ef8a7b4eabd6833bde77ac58ec686fe8fa9c150496af072c011392ceb5b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d27775a8ef3ea0ad10f8f7e0eaee79f46da0e1b183cb239d08f0a825c7f1eea8\"" Feb 13 18:54:11.206266 containerd[1949]: time="2025-02-13T18:54:11.206213269Z" level=info msg="StartContainer for \"d27775a8ef3ea0ad10f8f7e0eaee79f46da0e1b183cb239d08f0a825c7f1eea8\"" Feb 13 18:54:11.255160 systemd[1]: Started cri-containerd-a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe.scope - libcontainer container 
a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe. Feb 13 18:54:11.268365 systemd[1]: Started cri-containerd-d27775a8ef3ea0ad10f8f7e0eaee79f46da0e1b183cb239d08f0a825c7f1eea8.scope - libcontainer container d27775a8ef3ea0ad10f8f7e0eaee79f46da0e1b183cb239d08f0a825c7f1eea8. Feb 13 18:54:11.335565 containerd[1949]: time="2025-02-13T18:54:11.335486362Z" level=info msg="StartContainer for \"a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe\" returns successfully" Feb 13 18:54:11.348474 containerd[1949]: time="2025-02-13T18:54:11.348405466Z" level=info msg="StartContainer for \"d27775a8ef3ea0ad10f8f7e0eaee79f46da0e1b183cb239d08f0a825c7f1eea8\" returns successfully" Feb 13 18:54:11.353512 systemd[1]: cri-containerd-a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe.scope: Deactivated successfully. Feb 13 18:54:11.481318 containerd[1949]: time="2025-02-13T18:54:11.480225284Z" level=info msg="shim disconnected" id=a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe namespace=k8s.io Feb 13 18:54:11.481318 containerd[1949]: time="2025-02-13T18:54:11.480329424Z" level=warning msg="cleaning up after shim disconnected" id=a7c534e8611206b9410f678b1555576a950f925604299215b571714c029584fe namespace=k8s.io Feb 13 18:54:11.481318 containerd[1949]: time="2025-02-13T18:54:11.480350867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:11.711327 kubelet[2397]: E0213 18:54:11.711228 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:11.840741 kubelet[2397]: E0213 18:54:11.840588 2397 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 18:54:12.163410 containerd[1949]: time="2025-02-13T18:54:12.163330419Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 18:54:12.195918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288754480.mount: Deactivated successfully. 
Feb 13 18:54:12.203654 containerd[1949]: time="2025-02-13T18:54:12.203574771Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba\"" Feb 13 18:54:12.208437 containerd[1949]: time="2025-02-13T18:54:12.205550515Z" level=info msg="StartContainer for \"d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba\"" Feb 13 18:54:12.231960 kubelet[2397]: I0213 18:54:12.231872 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-79vvf" podStartSLOduration=2.3602636 podStartE2EDuration="4.231853344s" podCreationTimestamp="2025-02-13 18:54:08 +0000 UTC" firstStartedPulling="2025-02-13 18:54:09.301670368 +0000 UTC m=+74.263014133" lastFinishedPulling="2025-02-13 18:54:11.173260112 +0000 UTC m=+76.134603877" observedRunningTime="2025-02-13 18:54:12.176196217 +0000 UTC m=+77.137539982" watchObservedRunningTime="2025-02-13 18:54:12.231853344 +0000 UTC m=+77.193197121" Feb 13 18:54:12.273361 systemd[1]: Started cri-containerd-d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba.scope - libcontainer container d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba. Feb 13 18:54:12.336184 containerd[1949]: time="2025-02-13T18:54:12.334696992Z" level=info msg="StartContainer for \"d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba\" returns successfully" Feb 13 18:54:12.341500 systemd[1]: cri-containerd-d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba.scope: Deactivated successfully. Feb 13 18:54:12.382304 containerd[1949]: time="2025-02-13T18:54:12.382209765Z" level=info msg="shim disconnected" id=d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba namespace=k8s.io Feb 13 18:54:12.382304 containerd[1949]: time="2025-02-13T18:54:12.382297097Z" level=warning msg="cleaning up after shim disconnected" id=d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba namespace=k8s.io Feb 13 18:54:12.382676 containerd[1949]: time="2025-02-13T18:54:12.382318396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:12.472907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46d8ee49c5b1f7bb6b390f609c8043a75c4b27ed28c1037f39af3709b53a7ba-rootfs.mount: Deactivated successfully. Feb 13 18:54:12.711891 kubelet[2397]: E0213 18:54:12.711822 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:13.170981 containerd[1949]: time="2025-02-13T18:54:13.170846452Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 18:54:13.194790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287349090.mount: Deactivated successfully. 
Feb 13 18:54:13.196778 containerd[1949]: time="2025-02-13T18:54:13.196452285Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655\"" Feb 13 18:54:13.197886 containerd[1949]: time="2025-02-13T18:54:13.197839602Z" level=info msg="StartContainer for \"091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655\"" Feb 13 18:54:13.261422 systemd[1]: Started cri-containerd-091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655.scope - libcontainer container 091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655. Feb 13 18:54:13.304536 systemd[1]: cri-containerd-091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655.scope: Deactivated successfully. Feb 13 18:54:13.309264 containerd[1949]: time="2025-02-13T18:54:13.308937820Z" level=info msg="StartContainer for \"091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655\" returns successfully" Feb 13 18:54:13.346018 containerd[1949]: time="2025-02-13T18:54:13.345839606Z" level=info msg="shim disconnected" id=091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655 namespace=k8s.io Feb 13 18:54:13.346018 containerd[1949]: time="2025-02-13T18:54:13.345923456Z" level=warning msg="cleaning up after shim disconnected" id=091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655 namespace=k8s.io Feb 13 18:54:13.346018 containerd[1949]: time="2025-02-13T18:54:13.345945079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 18:54:13.474814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-091548fe155e3e45ddd0365c472f6d10560d8b4964620c22f77e0dd5275c8655-rootfs.mount: Deactivated successfully. Feb 13 18:54:13.712622 kubelet[2397]: E0213 18:54:13.712549 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:14.177117 containerd[1949]: time="2025-02-13T18:54:14.176858230Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 18:54:14.203021 containerd[1949]: time="2025-02-13T18:54:14.202944038Z" level=info msg="CreateContainer within sandbox \"bd04faaec68228c57236fed8e230d124b31da0c7997bb5e335a836ee187837b9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e\"" Feb 13 18:54:14.204163 containerd[1949]: time="2025-02-13T18:54:14.204100132Z" level=info msg="StartContainer for \"28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e\"" Feb 13 18:54:14.261358 systemd[1]: Started cri-containerd-28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e.scope - libcontainer container 28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e. 
Feb 13 18:54:14.312252 containerd[1949]: time="2025-02-13T18:54:14.312176440Z" level=info msg="StartContainer for \"28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e\" returns successfully" Feb 13 18:54:14.712828 kubelet[2397]: E0213 18:54:14.712756 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:15.096201 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 18:54:15.215690 kubelet[2397]: I0213 18:54:15.215578 2397 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ntsg6" podStartSLOduration=7.215555514 podStartE2EDuration="7.215555514s" podCreationTimestamp="2025-02-13 18:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 18:54:15.214719452 +0000 UTC m=+80.176063217" watchObservedRunningTime="2025-02-13 18:54:15.215555514 +0000 UTC m=+80.176899291" Feb 13 18:54:15.713754 kubelet[2397]: E0213 18:54:15.713700 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:16.660058 kubelet[2397]: E0213 18:54:16.659482 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:16.714248 kubelet[2397]: E0213 18:54:16.714180 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:17.714577 kubelet[2397]: E0213 18:54:17.714518 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:17.757279 systemd[1]: run-containerd-runc-k8s.io-28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e-runc.Iuby90.mount: Deactivated successfully. Feb 13 18:54:18.715529 kubelet[2397]: E0213 18:54:18.715418 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:19.260128 systemd-networkd[1836]: lxc_health: Link UP Feb 13 18:54:19.267673 systemd-networkd[1836]: lxc_health: Gained carrier Feb 13 18:54:19.269196 (udev-worker)[5171]: Network interface NamePolicy= disabled on kernel command line. Feb 13 18:54:19.716718 kubelet[2397]: E0213 18:54:19.716647 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:20.137021 systemd[1]: run-containerd-runc-k8s.io-28ac1b45fbcde6cffe83954332340ea87c0d9bccd1967624e5e59ccc6030246e-runc.qAWd7k.mount: Deactivated successfully. 
Feb 13 18:54:20.717122 kubelet[2397]: E0213 18:54:20.716978 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:21.249301 systemd-networkd[1836]: lxc_health: Gained IPv6LL Feb 13 18:54:21.718190 kubelet[2397]: E0213 18:54:21.718130 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:22.718510 kubelet[2397]: E0213 18:54:22.718444 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:23.719501 kubelet[2397]: E0213 18:54:23.719427 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:23.725355 ntpd[1917]: Listen normally on 16 lxc_health [fe80::d46f:8ff:fe7b:32e8%15]:123 Feb 13 18:54:23.725901 ntpd[1917]: 13 Feb 18:54:23 ntpd[1917]: Listen normally on 16 lxc_health [fe80::d46f:8ff:fe7b:32e8%15]:123 Feb 13 18:54:24.720596 kubelet[2397]: E0213 18:54:24.720459 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:25.721826 kubelet[2397]: E0213 18:54:25.721729 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:26.722588 kubelet[2397]: E0213 18:54:26.722451 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:27.723704 kubelet[2397]: E0213 18:54:27.723630 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:28.723875 kubelet[2397]: E0213 18:54:28.723810 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:29.724533 kubelet[2397]: E0213 18:54:29.724449 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:30.725622 kubelet[2397]: E0213 18:54:30.725559 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:31.726488 kubelet[2397]: E0213 18:54:31.726418 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:32.727104 kubelet[2397]: E0213 18:54:32.726995 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:33.727656 kubelet[2397]: E0213 18:54:33.727577 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:34.728802 kubelet[2397]: E0213 18:54:34.728739 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:35.729256 kubelet[2397]: E0213 18:54:35.729163 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:36.655832 kubelet[2397]: E0213 18:54:36.655754 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:36.729601 kubelet[2397]: E0213 18:54:36.729545 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 18:54:37.729754 kubelet[2397]: E0213 18:54:37.729692 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:38.730639 kubelet[2397]: E0213 18:54:38.730583 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:38.771367 kubelet[2397]: E0213 18:54:38.771122 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T18:54:28Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T18:54:28Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T18:54:28Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-02-13T18:54:28Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":69692964},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\\\",\\\"registry.k8s.io/kube-proxy:v1.30.10\\\"],\\\"sizeBytes\\\":25662389},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"172.31.27.136\": Patch \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:54:39.731714 kubelet[2397]: E0213 18:54:39.731655 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:40.732771 kubelet[2397]: E0213 18:54:40.732706 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:41.733691 kubelet[2397]: E0213 18:54:41.733627 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:42.734628 kubelet[2397]: E0213 18:54:42.734557 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:43.735530 kubelet[2397]: E0213 18:54:43.735461 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:44.736477 kubelet[2397]: E0213 18:54:44.736415 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 18:54:45.737127 kubelet[2397]: E0213 18:54:45.737063 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:46.737814 kubelet[2397]: E0213 18:54:46.737740 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:47.738204 kubelet[2397]: E0213 18:54:47.738142 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:48.169450 kubelet[2397]: E0213 18:54:48.169323 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 13 18:54:48.739290 kubelet[2397]: E0213 18:54:48.739224 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:48.771716 kubelet[2397]: E0213 18:54:48.771653 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:54:49.740065 kubelet[2397]: E0213 18:54:49.739983 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:50.740640 kubelet[2397]: E0213 18:54:50.740578 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:51.741168 kubelet[2397]: E0213 18:54:51.741101 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:52.742208 kubelet[2397]: E0213 18:54:52.742145 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:53.743471 kubelet[2397]: E0213 18:54:53.743369 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:54.744555 kubelet[2397]: E0213 18:54:54.744491 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:55.745445 kubelet[2397]: E0213 18:54:55.745385 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:56.655243 kubelet[2397]: E0213 18:54:56.655185 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:56.701161 containerd[1949]: time="2025-02-13T18:54:56.701103126Z" level=info msg="StopPodSandbox for \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\"" Feb 13 18:54:56.702310 containerd[1949]: time="2025-02-13T18:54:56.701246718Z" level=info msg="TearDown network for sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" successfully" Feb 13 18:54:56.702310 containerd[1949]: time="2025-02-13T18:54:56.701268629Z" level=info msg="StopPodSandbox for \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" returns successfully" Feb 13 18:54:56.702310 containerd[1949]: time="2025-02-13T18:54:56.701911658Z" level=info msg="RemovePodSandbox for 
\"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\"" Feb 13 18:54:56.702310 containerd[1949]: time="2025-02-13T18:54:56.701954940Z" level=info msg="Forcibly stopping sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\"" Feb 13 18:54:56.702310 containerd[1949]: time="2025-02-13T18:54:56.702066944Z" level=info msg="TearDown network for sandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" successfully" Feb 13 18:54:56.706416 containerd[1949]: time="2025-02-13T18:54:56.706269512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 18:54:56.706416 containerd[1949]: time="2025-02-13T18:54:56.706359257Z" level=info msg="RemovePodSandbox \"1ed874a1b47d7ca9cedb9e25ae00477938659704e6bcfca7a297f78266d65371\" returns successfully" Feb 13 18:54:56.746383 kubelet[2397]: E0213 18:54:56.746322 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:57.747214 kubelet[2397]: E0213 18:54:57.747148 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:58.170060 kubelet[2397]: E0213 18:54:58.169959 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:54:58.748304 kubelet[2397]: E0213 18:54:58.748241 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:54:58.772871 kubelet[2397]: E0213 18:54:58.772659 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:54:59.748685 kubelet[2397]: E0213 18:54:59.748622 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:00.749679 kubelet[2397]: E0213 18:55:00.749568 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:01.750010 kubelet[2397]: E0213 18:55:01.749942 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:02.751136 kubelet[2397]: E0213 18:55:02.751055 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:03.751640 kubelet[2397]: E0213 18:55:03.751575 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:04.752065 kubelet[2397]: E0213 18:55:04.751960 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:05.753061 kubelet[2397]: E0213 18:55:05.752973 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:06.754087 kubelet[2397]: E0213 18:55:06.754001 2397 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:07.754239 kubelet[2397]: E0213 18:55:07.754172 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:08.171241 kubelet[2397]: E0213 18:55:08.171166 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:55:08.755255 kubelet[2397]: E0213 18:55:08.755159 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:08.773445 kubelet[2397]: E0213 18:55:08.773387 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 18:55:09.009022 kubelet[2397]: E0213 18:55:09.004970 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": unexpected EOF" Feb 13 18:55:09.013767 kubelet[2397]: E0213 18:55:09.013705 2397 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": read tcp 172.31.27.136:54126->172.31.18.101:6443: read: connection reset by peer" Feb 13 18:55:09.014010 kubelet[2397]: I0213 18:55:09.013981 2397 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 13 18:55:09.015478 kubelet[2397]: E0213 18:55:09.014893 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.18.101:6443: connect: connection refused" interval="200ms" Feb 13 18:55:09.216007 kubelet[2397]: E0213 18:55:09.215940 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.18.101:6443: connect: connection refused" interval="400ms" Feb 13 18:55:09.617871 kubelet[2397]: E0213 18:55:09.617789 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": dial tcp 172.31.18.101:6443: connect: connection refused" interval="800ms" Feb 13 18:55:09.755381 kubelet[2397]: E0213 18:55:09.755311 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:10.002577 kubelet[2397]: E0213 18:55:10.002413 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136?timeout=10s\": dial tcp 172.31.18.101:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Feb 13 18:55:10.002577 kubelet[2397]: E0213 18:55:10.002456 2397 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds 
retry count" Feb 13 18:55:10.756440 kubelet[2397]: E0213 18:55:10.756359 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:11.756588 kubelet[2397]: E0213 18:55:11.756508 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:12.757248 kubelet[2397]: E0213 18:55:12.757188 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:13.757815 kubelet[2397]: E0213 18:55:13.757753 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:14.758228 kubelet[2397]: E0213 18:55:14.758153 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:15.758935 kubelet[2397]: E0213 18:55:15.758776 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:16.655244 kubelet[2397]: E0213 18:55:16.655174 2397 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:16.759965 kubelet[2397]: E0213 18:55:16.759900 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:17.760529 kubelet[2397]: E0213 18:55:17.760454 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:18.761562 kubelet[2397]: E0213 18:55:18.761476 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:19.762020 kubelet[2397]: E0213 18:55:19.761919 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:20.419516 kubelet[2397]: E0213 18:55:20.419439 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.27.136?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 13 18:55:20.763128 kubelet[2397]: E0213 18:55:20.762962 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:21.763252 kubelet[2397]: E0213 18:55:21.763145 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:22.764386 kubelet[2397]: E0213 18:55:22.764285 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:23.765567 kubelet[2397]: E0213 18:55:23.765451 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:24.766259 kubelet[2397]: E0213 18:55:24.766194 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:25.766819 kubelet[2397]: E0213 18:55:25.766753 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:26.766977 kubelet[2397]: E0213 18:55:26.766921 2397 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:27.768169 kubelet[2397]: E0213 18:55:27.768091 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:28.769096 kubelet[2397]: E0213 18:55:28.769013 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:29.770089 kubelet[2397]: E0213 18:55:29.770018 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 18:55:30.273808 kubelet[2397]: E0213 18:55:30.273740 2397 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.27.136\": Get \"https://172.31.18.101:6443/api/v1/nodes/172.31.27.136?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 13 18:55:30.771092 kubelet[2397]: E0213 18:55:30.770998 2397 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"