Jan 29 10:48:08.160475 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 29 10:48:08.160519 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025 Jan 29 10:48:08.160543 kernel: KASLR disabled due to lack of seed Jan 29 10:48:08.160559 kernel: efi: EFI v2.7 by EDK II Jan 29 10:48:08.160574 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Jan 29 10:48:08.160589 kernel: secureboot: Secure boot disabled Jan 29 10:48:08.160606 kernel: ACPI: Early table checksum verification disabled Jan 29 10:48:08.160621 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 29 10:48:08.160637 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 29 10:48:08.160652 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 10:48:08.160672 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 29 10:48:08.160689 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 10:48:08.160704 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 29 10:48:08.160719 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 29 10:48:08.160737 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 29 10:48:08.160758 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 10:48:08.160775 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 29 10:48:08.160791 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 29 10:48:08.160807 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 29 10:48:08.160823 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 29 10:48:08.160839 kernel: printk: bootconsole [uart0] enabled Jan 29 10:48:08.160854 kernel: NUMA: Failed to initialise from firmware Jan 29 10:48:08.160870 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 29 10:48:08.160887 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 29 10:48:08.160903 kernel: Zone ranges: Jan 29 10:48:08.160919 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 10:48:08.160938 kernel: DMA32 empty Jan 29 10:48:08.160955 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 29 10:48:08.160971 kernel: Movable zone start for each node Jan 29 10:48:08.160987 kernel: Early memory node ranges Jan 29 10:48:08.161003 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 29 10:48:08.161020 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 29 10:48:08.161036 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 29 10:48:08.161052 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 29 10:48:08.161067 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 29 10:48:08.161083 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 29 10:48:08.161099 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 29 10:48:08.161115 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 29 10:48:08.161135 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 29 10:48:08.161152 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 29 10:48:08.161175 kernel: psci: probing for conduit method from ACPI. Jan 29 10:48:08.161298 kernel: psci: PSCIv1.0 detected in firmware. Jan 29 10:48:08.161323 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 10:48:08.161348 kernel: psci: Trusted OS migration not required Jan 29 10:48:08.161366 kernel: psci: SMC Calling Convention v1.1 Jan 29 10:48:08.161383 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 10:48:08.161401 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 10:48:08.161418 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 10:48:08.161435 kernel: Detected PIPT I-cache on CPU0 Jan 29 10:48:08.161452 kernel: CPU features: detected: GIC system register CPU interface Jan 29 10:48:08.161470 kernel: CPU features: detected: Spectre-v2 Jan 29 10:48:08.161486 kernel: CPU features: detected: Spectre-v3a Jan 29 10:48:08.161504 kernel: CPU features: detected: Spectre-BHB Jan 29 10:48:08.161521 kernel: CPU features: detected: ARM erratum 1742098 Jan 29 10:48:08.161540 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 29 10:48:08.161563 kernel: alternatives: applying boot alternatives Jan 29 10:48:08.161583 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 10:48:08.161601 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 10:48:08.161618 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 10:48:08.161636 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 10:48:08.161653 kernel: Fallback order for Node 0: 0 Jan 29 10:48:08.161671 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 29 10:48:08.161688 kernel: Policy zone: Normal Jan 29 10:48:08.161705 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 10:48:08.161722 kernel: software IO TLB: area num 2. Jan 29 10:48:08.161743 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 29 10:48:08.161761 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Jan 29 10:48:08.161779 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 10:48:08.161796 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 10:48:08.161814 kernel: rcu: RCU event tracing is enabled. Jan 29 10:48:08.161831 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 10:48:08.161850 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 10:48:08.161867 kernel: Tracing variant of Tasks RCU enabled. Jan 29 10:48:08.161884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 10:48:08.161901 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 10:48:08.161918 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 10:48:08.161940 kernel: GICv3: 96 SPIs implemented Jan 29 10:48:08.161957 kernel: GICv3: 0 Extended SPIs implemented Jan 29 10:48:08.161974 kernel: Root IRQ handler: gic_handle_irq Jan 29 10:48:08.161990 kernel: GICv3: GICv3 features: 16 PPIs Jan 29 10:48:08.162007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 29 10:48:08.162024 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 29 10:48:08.162041 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 10:48:08.162058 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 29 10:48:08.162076 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 29 10:48:08.162093 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 29 10:48:08.162110 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 29 10:48:08.162127 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 10:48:08.162149 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 29 10:48:08.162166 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 29 10:48:08.162183 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 29 10:48:08.162223 kernel: Console: colour dummy device 80x25 Jan 29 10:48:08.162244 kernel: printk: console [tty1] enabled Jan 29 10:48:08.162262 kernel: ACPI: Core revision 20230628 Jan 29 10:48:08.162281 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 29 10:48:08.162299 kernel: pid_max: default: 32768 minimum: 301 Jan 29 10:48:08.162316 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 10:48:08.162334 kernel: landlock: Up and running. Jan 29 10:48:08.162359 kernel: SELinux: Initializing. Jan 29 10:48:08.162376 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:48:08.162394 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:48:08.162412 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:48:08.162429 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:48:08.162447 kernel: rcu: Hierarchical SRCU implementation. Jan 29 10:48:08.162464 kernel: rcu: Max phase no-delay instances is 400. Jan 29 10:48:08.162481 kernel: Platform MSI: ITS@0x10080000 domain created Jan 29 10:48:08.162503 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 29 10:48:08.162521 kernel: Remapping and enabling EFI services. Jan 29 10:48:08.162538 kernel: smp: Bringing up secondary CPUs ... Jan 29 10:48:08.162555 kernel: Detected PIPT I-cache on CPU1 Jan 29 10:48:08.162572 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 29 10:48:08.162590 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 29 10:48:08.162607 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 29 10:48:08.162625 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 10:48:08.162642 kernel: SMP: Total of 2 processors activated. 
Jan 29 10:48:08.162659 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 10:48:08.162680 kernel: CPU features: detected: 32-bit EL1 Support Jan 29 10:48:08.162698 kernel: CPU features: detected: CRC32 instructions Jan 29 10:48:08.162727 kernel: CPU: All CPU(s) started at EL1 Jan 29 10:48:08.162749 kernel: alternatives: applying system-wide alternatives Jan 29 10:48:08.162767 kernel: devtmpfs: initialized Jan 29 10:48:08.162785 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 10:48:08.162803 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 10:48:08.162821 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 10:48:08.162839 kernel: SMBIOS 3.0.0 present. Jan 29 10:48:08.162861 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 29 10:48:08.162879 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 10:48:08.162897 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 10:48:08.162916 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 10:48:08.162934 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 10:48:08.162952 kernel: audit: initializing netlink subsys (disabled) Jan 29 10:48:08.162970 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Jan 29 10:48:08.162992 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 10:48:08.163010 kernel: cpuidle: using governor menu Jan 29 10:48:08.163028 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 10:48:08.163046 kernel: ASID allocator initialised with 65536 entries Jan 29 10:48:08.163064 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 10:48:08.163082 kernel: Serial: AMBA PL011 UART driver Jan 29 10:48:08.163100 kernel: Modules: 17360 pages in range for non-PLT usage Jan 29 10:48:08.163118 kernel: Modules: 508880 pages in range for PLT usage Jan 29 10:48:08.163136 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 10:48:08.163158 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 10:48:08.163177 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 10:48:08.163214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 10:48:08.163237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 10:48:08.163256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 10:48:08.163274 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 10:48:08.163294 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 10:48:08.163313 kernel: ACPI: Added _OSI(Module Device) Jan 29 10:48:08.163331 kernel: ACPI: Added _OSI(Processor Device) Jan 29 10:48:08.163372 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 10:48:08.163394 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 10:48:08.163412 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 10:48:08.163430 kernel: ACPI: Interpreter enabled Jan 29 10:48:08.163448 kernel: ACPI: Using GIC for interrupt routing Jan 29 10:48:08.163466 kernel: ACPI: MCFG table detected, 1 entries Jan 29 10:48:08.163485 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 29 10:48:08.163791 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 10:48:08.164013 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 10:48:08.166309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 10:48:08.166576 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 29 10:48:08.166781 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 29 10:48:08.166807 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 29 10:48:08.166826 kernel: acpiphp: Slot [1] registered Jan 29 10:48:08.166845 kernel: acpiphp: Slot [2] registered Jan 29 10:48:08.166863 kernel: acpiphp: Slot [3] registered Jan 29 10:48:08.166891 kernel: acpiphp: Slot [4] registered Jan 29 10:48:08.166910 kernel: acpiphp: Slot [5] registered Jan 29 10:48:08.166928 kernel: acpiphp: Slot [6] registered Jan 29 10:48:08.166946 kernel: acpiphp: Slot [7] registered Jan 29 10:48:08.166964 kernel: acpiphp: Slot [8] registered Jan 29 10:48:08.166982 kernel: acpiphp: Slot [9] registered Jan 29 10:48:08.167000 kernel: acpiphp: Slot [10] registered Jan 29 10:48:08.167018 kernel: acpiphp: Slot [11] registered Jan 29 10:48:08.167037 kernel: acpiphp: Slot [12] registered Jan 29 10:48:08.167055 kernel: acpiphp: Slot [13] registered Jan 29 10:48:08.167078 kernel: acpiphp: Slot [14] registered Jan 29 10:48:08.167096 kernel: acpiphp: Slot [15] registered Jan 29 10:48:08.167113 kernel: acpiphp: Slot [16] registered Jan 29 10:48:08.167131 kernel: acpiphp: Slot [17] registered Jan 29 10:48:08.167149 kernel: acpiphp: Slot [18] registered Jan 29 10:48:08.167167 kernel: acpiphp: Slot [19] registered Jan 29 10:48:08.167184 kernel: acpiphp: Slot [20] registered Jan 29 10:48:08.167222 kernel: acpiphp: Slot [21] registered Jan 29 10:48:08.167245 kernel: acpiphp: Slot [22] registered Jan 29 10:48:08.167272 kernel: acpiphp: Slot [23] registered Jan 29 10:48:08.167292 kernel: acpiphp: Slot [24] registered Jan 29 10:48:08.167310 kernel: acpiphp: Slot [25] registered Jan 29 10:48:08.167329 kernel: acpiphp: Slot [26] registered Jan 29 10:48:08.167365 kernel: acpiphp: Slot [27] registered Jan 29 10:48:08.167389 kernel: acpiphp: Slot [28] registered Jan 29 10:48:08.167408 kernel: acpiphp: Slot [29] registered Jan 29 10:48:08.167427 kernel: acpiphp: Slot [30] registered Jan 29 10:48:08.167446 kernel: acpiphp: Slot [31] registered Jan 29 10:48:08.167466 kernel: PCI host bridge to bus 0000:00 Jan 29 10:48:08.167747 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 29 10:48:08.167950 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 10:48:08.168156 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 29 10:48:08.169560 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 29 10:48:08.169819 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 29 10:48:08.170049 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 29 10:48:08.170310 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 29 10:48:08.170538 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 10:48:08.170742 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 29 10:48:08.170945 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:48:08.171172 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 10:48:08.172518 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 29 10:48:08.172745 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 29 10:48:08.172963 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 29 10:48:08.173175 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:48:08.175559 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 29 10:48:08.175779 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 29 10:48:08.175987 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 29 10:48:08.176190 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 29 10:48:08.176425 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 29 10:48:08.176623 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 29 10:48:08.176803 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 10:48:08.176990 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 29 10:48:08.177018 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 10:48:08.177038 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 10:48:08.177057 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 10:48:08.177076 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 10:48:08.177094 kernel: iommu: Default domain type: Translated Jan 29 10:48:08.177120 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 10:48:08.177138 kernel: efivars: Registered efivars operations Jan 29 10:48:08.177156 kernel: vgaarb: loaded Jan 29 10:48:08.177175 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 10:48:08.177212 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 10:48:08.177237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 10:48:08.177256 kernel: pnp: PnP ACPI init Jan 29 10:48:08.177505 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 29 10:48:08.177542 kernel: pnp: PnP ACPI: found 1 devices Jan 29 10:48:08.177561 kernel: NET: Registered PF_INET protocol family Jan 29 10:48:08.177580 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 10:48:08.177599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 10:48:08.177618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 10:48:08.177637 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 10:48:08.177655 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 10:48:08.177673 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 10:48:08.177692 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:48:08.177716 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:48:08.177735 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 10:48:08.177753 kernel: PCI: CLS 0 bytes, default 64 Jan 29 10:48:08.177771 kernel: kvm [1]: HYP mode not available Jan 29 10:48:08.177789 kernel: Initialise system trusted keyrings Jan 29 10:48:08.177807 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 10:48:08.177825 kernel: Key type asymmetric registered Jan 29 10:48:08.177843 kernel: Asymmetric key parser 'x509' registered Jan 29 10:48:08.177862 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 10:48:08.177885 kernel: io scheduler mq-deadline registered Jan 29 
10:48:08.177904 kernel: io scheduler kyber registered Jan 29 10:48:08.177922 kernel: io scheduler bfq registered Jan 29 10:48:08.178152 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 29 10:48:08.178181 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 10:48:08.178267 kernel: ACPI: button: Power Button [PWRB] Jan 29 10:48:08.178290 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 29 10:48:08.178313 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 10:48:08.178338 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 10:48:08.178358 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 10:48:08.178589 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 29 10:48:08.178621 kernel: printk: console [ttyS0] disabled Jan 29 10:48:08.178640 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 29 10:48:08.178659 kernel: printk: console [ttyS0] enabled Jan 29 10:48:08.178677 kernel: printk: bootconsole [uart0] disabled Jan 29 10:48:08.178695 kernel: thunder_xcv, ver 1.0 Jan 29 10:48:08.178713 kernel: thunder_bgx, ver 1.0 Jan 29 10:48:08.178731 kernel: nicpf, ver 1.0 Jan 29 10:48:08.178756 kernel: nicvf, ver 1.0 Jan 29 10:48:08.178970 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 10:48:08.179162 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:48:07 UTC (1738147687) Jan 29 10:48:08.179188 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 10:48:08.179236 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 29 10:48:08.179256 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 10:48:08.179275 kernel: watchdog: Hard watchdog permanently disabled Jan 29 10:48:08.179301 kernel: NET: Registered PF_INET6 protocol family Jan 29 10:48:08.179320 kernel: Segment Routing with IPv6 Jan 29 10:48:08.179338 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 10:48:08.179376 kernel: NET: Registered PF_PACKET protocol family Jan 29 10:48:08.179396 kernel: Key type dns_resolver registered Jan 29 10:48:08.179416 kernel: registered taskstats version 1 Jan 29 10:48:08.179435 kernel: Loading compiled-in X.509 certificates Jan 29 10:48:08.179453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 29 10:48:08.179472 kernel: Key type .fscrypt registered Jan 29 10:48:08.179490 kernel: Key type fscrypt-provisioning registered Jan 29 10:48:08.179517 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 10:48:08.179535 kernel: ima: Allocated hash algorithm: sha1 Jan 29 10:48:08.179554 kernel: ima: No architecture policies found Jan 29 10:48:08.179574 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 10:48:08.179600 kernel: clk: Disabling unused clocks Jan 29 10:48:08.179643 kernel: Freeing unused kernel memory: 39936K Jan 29 10:48:08.179702 kernel: Run /init as init process Jan 29 10:48:08.179737 kernel: with arguments: Jan 29 10:48:08.179758 kernel: /init Jan 29 10:48:08.179790 kernel: with environment: Jan 29 10:48:08.179808 kernel: HOME=/ Jan 29 10:48:08.179827 kernel: TERM=linux Jan 29 10:48:08.179844 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 10:48:08.179868 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:48:08.179890 systemd[1]: Detected virtualization amazon. Jan 29 10:48:08.179911 systemd[1]: Detected architecture arm64. Jan 29 10:48:08.179935 systemd[1]: Running in initrd. Jan 29 10:48:08.179955 systemd[1]: No hostname configured, using default hostname. Jan 29 10:48:08.179974 systemd[1]: Hostname set to . Jan 29 10:48:08.179995 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:48:08.180015 systemd[1]: Queued start job for default target initrd.target. Jan 29 10:48:08.180034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:48:08.180054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:48:08.180075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 10:48:08.180101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:48:08.180122 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 10:48:08.180142 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 10:48:08.180165 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 10:48:08.180185 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 10:48:08.180240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:48:08.180261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:48:08.180288 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:48:08.180308 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:48:08.180328 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:48:08.180347 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:48:08.180367 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:48:08.180388 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:48:08.180408 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 10:48:08.180428 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 10:48:08.180448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 10:48:08.180472 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:48:08.180493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:48:08.180512 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:48:08.180532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 10:48:08.180552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:48:08.180571 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 10:48:08.180591 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 10:48:08.180611 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:48:08.180635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:48:08.180655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:48:08.180675 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 10:48:08.180695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:48:08.180764 systemd-journald[251]: Collecting audit messages is disabled. Jan 29 10:48:08.180813 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 10:48:08.180835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:48:08.180855 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 10:48:08.180874 systemd-journald[251]: Journal started Jan 29 10:48:08.180924 systemd-journald[251]: Runtime Journal (/run/log/journal/ec261290d286297a7f5457ac39f2febd) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:48:08.144280 systemd-modules-load[252]: Inserted module 'overlay' Jan 29 10:48:08.183958 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:48:08.197232 kernel: Bridge firewalling registered Jan 29 10:48:08.198259 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 29 10:48:08.204913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:48:08.216621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:08.221067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:48:08.239670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:48:08.245183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:48:08.247996 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:48:08.250037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:48:08.293809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:48:08.305572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:48:08.312535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:08.328467 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 10:48:08.330706 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:48:08.338795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 10:48:08.373223 dracut-cmdline[288]: dracut-dracut-053 Jan 29 10:48:08.382411 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 10:48:08.426332 systemd-resolved[292]: Positive Trust Anchors: Jan 29 10:48:08.428264 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:48:08.428330 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:48:08.507234 kernel: SCSI subsystem initialized Jan 29 10:48:08.514229 kernel: Loading iSCSI transport class v2.0-870. Jan 29 10:48:08.526228 kernel: iscsi: registered transport (tcp) Jan 29 10:48:08.548472 kernel: iscsi: registered transport (qla4xxx) Jan 29 10:48:08.548565 kernel: QLogic iSCSI HBA Driver Jan 29 10:48:08.646225 kernel: random: crng init done Jan 29 10:48:08.644490 systemd-resolved[292]: Defaulting to hostname 'linux'. Jan 29 10:48:08.646343 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:48:08.650988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:48:08.672282 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 10:48:08.681551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 10:48:08.714269 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 10:48:08.714344 kernel: device-mapper: uevent: version 1.0.3 Jan 29 10:48:08.717238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 10:48:08.782240 kernel: raid6: neonx8 gen() 6520 MB/s Jan 29 10:48:08.799226 kernel: raid6: neonx4 gen() 6502 MB/s Jan 29 10:48:08.816226 kernel: raid6: neonx2 gen() 5428 MB/s Jan 29 10:48:08.833226 kernel: raid6: neonx1 gen() 3922 MB/s Jan 29 10:48:08.850226 kernel: raid6: int64x8 gen() 3577 MB/s Jan 29 10:48:08.867226 kernel: raid6: int64x4 gen() 3678 MB/s Jan 29 10:48:08.884226 kernel: raid6: int64x2 gen() 3574 MB/s Jan 29 10:48:08.901999 kernel: raid6: int64x1 gen() 2748 MB/s Jan 29 10:48:08.902031 kernel: raid6: using algorithm neonx8 gen() 6520 MB/s Jan 29 10:48:08.919941 kernel: raid6: .... 
xor() 4731 MB/s, rmw enabled Jan 29 10:48:08.919978 kernel: raid6: using neon recovery algorithm Jan 29 10:48:08.927975 kernel: xor: measuring software checksum speed Jan 29 10:48:08.928026 kernel: 8regs : 12553 MB/sec Jan 29 10:48:08.929229 kernel: 32regs : 11784 MB/sec Jan 29 10:48:08.931107 kernel: arm64_neon : 8979 MB/sec Jan 29 10:48:08.931149 kernel: xor: using function: 8regs (12553 MB/sec) Jan 29 10:48:09.014246 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 10:48:09.032332 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:48:09.044524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:48:09.080824 systemd-udevd[473]: Using default interface naming scheme 'v255'. Jan 29 10:48:09.090596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:48:09.109056 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 10:48:09.141066 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Jan 29 10:48:09.197138 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:48:09.206526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:48:09.329376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:48:09.342741 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 10:48:09.385622 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 10:48:09.390795 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:48:09.393563 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:48:09.395765 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:48:09.415461 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 10:48:09.459816 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:48:09.532102 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 10:48:09.532164 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 29 10:48:09.543813 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 10:48:09.544068 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 10:48:09.544336 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:7e:ba:c4:87:79 Jan 29 10:48:09.550808 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:09.563831 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 10:48:09.567234 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 10:48:09.569677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:48:09.569974 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:09.576923 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:48:09.579041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:48:09.579157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:09.581345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 10:48:09.596241 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 10:48:09.597658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:48:09.605831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 10:48:09.605871 kernel: GPT:9289727 != 16777215 Jan 29 10:48:09.605895 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 10:48:09.605919 kernel: GPT:9289727 != 16777215 Jan 29 10:48:09.606922 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 10:48:09.607771 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:09.631329 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:09.648095 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:48:09.691280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:09.797871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 10:48:09.811245 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (537) Jan 29 10:48:09.819260 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (542) Jan 29 10:48:09.853236 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 10:48:09.903510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:48:09.919381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 10:48:09.924246 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 10:48:09.938440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 10:48:09.950922 disk-uuid[662]: Primary Header is updated. Jan 29 10:48:09.950922 disk-uuid[662]: Secondary Entries is updated. Jan 29 10:48:09.950922 disk-uuid[662]: Secondary Header is updated. Jan 29 10:48:09.964253 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:10.980065 disk-uuid[663]: The operation has completed successfully. Jan 29 10:48:10.985311 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:48:11.152162 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 10:48:11.152406 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 10:48:11.222449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 10:48:11.229547 sh[923]: Success Jan 29 10:48:11.267283 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 10:48:11.375592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 10:48:11.394425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 10:48:11.399442 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 10:48:11.425173 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae Jan 29 10:48:11.425256 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:11.426966 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 10:48:11.427001 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 10:48:11.429228 kernel: BTRFS info (device dm-0): using free space tree Jan 29 10:48:11.566257 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 10:48:11.598896 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 10:48:11.602586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 10:48:11.616638 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 10:48:11.622281 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 10:48:11.641494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:11.641568 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:11.641605 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:11.648234 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:11.664079 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 10:48:11.666522 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:11.687854 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 10:48:11.697536 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 10:48:11.796691 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:48:11.806526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:48:11.862569 systemd-networkd[1115]: lo: Link UP Jan 29 10:48:11.862592 systemd-networkd[1115]: lo: Gained carrier Jan 29 10:48:11.866901 systemd-networkd[1115]: Enumeration completed Jan 29 10:48:11.869141 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:48:11.869672 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:11.869678 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:48:11.872244 systemd-networkd[1115]: eth0: Link UP Jan 29 10:48:11.872252 systemd-networkd[1115]: eth0: Gained carrier Jan 29 10:48:11.872269 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:11.885303 systemd[1]: Reached target network.target - Network. 
Jan 29 10:48:11.907284 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.28.141/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:48:12.160657 ignition[1028]: Ignition 2.20.0 Jan 29 10:48:12.161711 ignition[1028]: Stage: fetch-offline Jan 29 10:48:12.162159 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:12.162184 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:12.165478 ignition[1028]: Ignition finished successfully Jan 29 10:48:12.171100 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:48:12.188631 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 10:48:12.211162 ignition[1124]: Ignition 2.20.0 Jan 29 10:48:12.211183 ignition[1124]: Stage: fetch Jan 29 10:48:12.212740 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:12.212767 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:12.212970 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:12.223597 ignition[1124]: PUT result: OK Jan 29 10:48:12.226654 ignition[1124]: parsed url from cmdline: "" Jan 29 10:48:12.226676 ignition[1124]: no config URL provided Jan 29 10:48:12.226691 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:48:12.226722 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:48:12.226767 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:12.228332 ignition[1124]: PUT result: OK Jan 29 10:48:12.228411 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 10:48:12.237332 ignition[1124]: GET result: OK Jan 29 10:48:12.238084 ignition[1124]: parsing config with SHA512: a1626487cdba076cdcb2eff217e338f132399ac3967be39c4957073e1b2e9db5f3e9eed005f7b1aa5888c3c420f3a9f4d91d276950f734582ce61441b83718ee Jan 29 10:48:12.244610 unknown[1124]: fetched base config from "system" Jan 29 10:48:12.244640 unknown[1124]: fetched base config from "system" Jan 29 10:48:12.246306 ignition[1124]: fetch: fetch complete Jan 29 10:48:12.244654 unknown[1124]: fetched user config from "aws" Jan 29 10:48:12.246318 ignition[1124]: fetch: fetch passed Jan 29 10:48:12.254254 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 10:48:12.246405 ignition[1124]: Ignition finished successfully Jan 29 10:48:12.269491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 10:48:12.294954 ignition[1131]: Ignition 2.20.0 Jan 29 10:48:12.294995 ignition[1131]: Stage: kargs Jan 29 10:48:12.295887 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:12.295923 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:12.296084 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:12.301557 ignition[1131]: PUT result: OK Jan 29 10:48:12.307718 ignition[1131]: kargs: kargs passed Jan 29 10:48:12.307817 ignition[1131]: Ignition finished successfully Jan 29 10:48:12.312558 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 10:48:12.323513 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 10:48:12.348622 ignition[1137]: Ignition 2.20.0 Jan 29 10:48:12.348654 ignition[1137]: Stage: disks Jan 29 10:48:12.349633 ignition[1137]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:12.349663 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:12.349818 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:12.351533 ignition[1137]: PUT result: OK Jan 29 10:48:12.360046 ignition[1137]: disks: disks passed Jan 29 10:48:12.360241 ignition[1137]: Ignition finished successfully Jan 29 10:48:12.364797 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 10:48:12.367263 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 10:48:12.370879 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 10:48:12.374950 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:48:12.378774 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:48:12.382373 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:48:12.399557 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 10:48:12.445805 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 10:48:12.453946 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 10:48:12.466656 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 10:48:12.553242 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none. Jan 29 10:48:12.554303 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 10:48:12.557970 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 10:48:12.572362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:48:12.578815 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 10:48:12.581347 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 10:48:12.581434 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 10:48:12.608590 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1165) Jan 29 10:48:12.608629 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:12.608656 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:12.608682 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:12.581553 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:48:12.614236 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:12.617432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 10:48:12.626475 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 10:48:12.637520 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 10:48:13.036854 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 10:48:13.061506 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory Jan 29 10:48:13.070177 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 10:48:13.078121 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 10:48:13.148347 systemd-networkd[1115]: eth0: Gained IPv6LL Jan 29 10:48:13.501839 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 10:48:13.510413 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 10:48:13.517475 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 10:48:13.536771 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 10:48:13.540029 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:13.572001 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 10:48:13.587569 ignition[1279]: INFO : Ignition 2.20.0 Jan 29 10:48:13.587569 ignition[1279]: INFO : Stage: mount Jan 29 10:48:13.590719 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:13.590719 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:13.594778 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:13.597761 ignition[1279]: INFO : PUT result: OK Jan 29 10:48:13.602513 ignition[1279]: INFO : mount: mount passed Jan 29 10:48:13.602513 ignition[1279]: INFO : Ignition finished successfully Jan 29 10:48:13.607750 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 10:48:13.619403 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 10:48:13.637550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:48:13.662226 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1289) Jan 29 10:48:13.665589 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 29 10:48:13.665626 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:48:13.666744 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:48:13.671231 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:48:13.674453 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:48:13.709377 ignition[1306]: INFO : Ignition 2.20.0 Jan 29 10:48:13.709377 ignition[1306]: INFO : Stage: files Jan 29 10:48:13.712594 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:13.712594 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:13.712594 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:13.719175 ignition[1306]: INFO : PUT result: OK Jan 29 10:48:13.723879 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping Jan 29 10:48:13.726484 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 10:48:13.726484 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 10:48:13.735000 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 10:48:13.737806 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 10:48:13.740580 unknown[1306]: wrote ssh authorized keys file for user: core Jan 29 10:48:13.744753 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 10:48:13.759928 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 10:48:13.763167 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 10:48:13.763167 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:48:13.763167 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:48:13.763167 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:48:13.777135 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:48:13.777135 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:48:13.777135 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 10:48:14.273582 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 10:48:14.644670 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:48:14.648597 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:48:14.648597 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:48:14.648597 ignition[1306]: INFO : files: files passed Jan 29 10:48:14.648597 ignition[1306]: INFO : Ignition finished successfully Jan 29 10:48:14.658990 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 10:48:14.674597 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 10:48:14.682994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 10:48:14.699419 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 10:48:14.701304 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 10:48:14.714465 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:14.714465 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:14.722595 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:48:14.728590 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:48:14.732027 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 10:48:14.748540 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 10:48:14.805032 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 10:48:14.805847 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 10:48:14.809676 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 10:48:14.811851 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 10:48:14.815673 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 10:48:14.829551 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 10:48:14.854267 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:48:14.877594 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 10:48:14.901391 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:48:14.904311 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:48:14.910789 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 10:48:14.914235 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 10:48:14.914468 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:48:14.932050 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 10:48:14.938982 systemd[1]: Stopped target basic.target - Basic System. Jan 29 10:48:14.943712 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 10:48:14.945347 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:48:14.950413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 10:48:14.958107 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 10:48:14.960111 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:48:14.962441 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 10:48:14.965503 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 10:48:14.973434 systemd[1]: Stopped target swap.target - Swaps. Jan 29 10:48:14.975271 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 10:48:14.975515 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 10:48:14.977959 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:48:14.980993 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:48:14.984425 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 10:48:14.986939 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:48:14.999929 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 10:48:15.000152 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 10:48:15.008470 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 10:48:15.008715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:48:15.011775 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 10:48:15.011970 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 10:48:15.027651 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 10:48:15.037403 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 10:48:15.041394 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 10:48:15.041698 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:48:15.046656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 10:48:15.046892 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:48:15.063772 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:48:15.064487 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:48:15.083400 ignition[1358]: INFO : Ignition 2.20.0 Jan 29 10:48:15.083400 ignition[1358]: INFO : Stage: umount Jan 29 10:48:15.083400 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:48:15.083400 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:48:15.083400 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:48:15.095899 ignition[1358]: INFO : PUT result: OK Jan 29 10:48:15.095899 ignition[1358]: INFO : umount: umount passed Jan 29 10:48:15.095899 ignition[1358]: INFO : Ignition finished successfully Jan 29 10:48:15.101986 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 10:48:15.103399 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 10:48:15.108597 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 10:48:15.108711 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 10:48:15.112186 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 10:48:15.112826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 10:48:15.117866 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 10:48:15.117960 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 10:48:15.119883 systemd[1]: Stopped target network.target - Network. Jan 29 10:48:15.122287 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 10:48:15.122390 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:48:15.124695 systemd[1]: Stopped target paths.target - Path Units. Jan 29 10:48:15.126510 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 10:48:15.137065 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:48:15.140372 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 10:48:15.142051 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 10:48:15.144188 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 10:48:15.144291 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:48:15.151355 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 10:48:15.151441 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:48:15.160509 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 10:48:15.160609 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 10:48:15.163305 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 10:48:15.163388 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 10:48:15.165628 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 10:48:15.167909 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 10:48:15.175319 systemd-networkd[1115]: eth0: DHCPv6 lease lost Jan 29 10:48:15.182833 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:48:15.184818 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 10:48:15.186830 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 10:48:15.194053 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 10:48:15.198041 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 10:48:15.203064 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:48:15.203268 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 10:48:15.228381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 10:48:15.228470 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:48:15.230964 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 10:48:15.231051 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 10:48:15.244845 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 10:48:15.250485 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 10:48:15.250604 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:48:15.253057 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:48:15.253143 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:48:15.255473 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 10:48:15.255549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 10:48:15.257976 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 10:48:15.258051 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:48:15.260723 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:48:15.297685 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 10:48:15.298023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:48:15.302082 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 29 10:48:15.302231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 10:48:15.309985 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 10:48:15.310061 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:48:15.310170 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 10:48:15.310317 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:48:15.310803 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 10:48:15.310875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 10:48:15.313731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:48:15.313813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:48:15.338646 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 10:48:15.341006 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 10:48:15.341115 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:48:15.343503 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 10:48:15.343586 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:48:15.350442 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:48:15.350547 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:48:15.356477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:48:15.356577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:15.379035 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 10:48:15.379669 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 10:48:15.385936 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 10:48:15.386272 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 10:48:15.394704 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 10:48:15.406460 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 10:48:15.425523 systemd[1]: Switching root. Jan 29 10:48:15.460771 systemd-journald[251]: Journal stopped Jan 29 10:48:18.320425 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 29 10:48:18.320726 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 10:48:18.320776 kernel: SELinux: policy capability open_perms=1 Jan 29 10:48:18.320808 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 10:48:18.320838 kernel: SELinux: policy capability always_check_network=0 Jan 29 10:48:18.320867 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 10:48:18.320895 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 10:48:18.320924 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 10:48:18.320963 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 10:48:18.320996 kernel: audit: type=1403 audit(1738147696.524:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 10:48:18.321036 systemd[1]: Successfully loaded SELinux policy in 71.368ms. Jan 29 10:48:18.321083 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.939ms. 
Jan 29 10:48:18.321115 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:48:18.321145 systemd[1]: Detected virtualization amazon. Jan 29 10:48:18.321174 systemd[1]: Detected architecture arm64. Jan 29 10:48:18.323254 systemd[1]: Detected first boot. Jan 29 10:48:18.323321 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:48:18.323362 zram_generator::config[1401]: No configuration found. Jan 29 10:48:18.323396 systemd[1]: Populated /etc with preset unit settings. Jan 29 10:48:18.323428 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 10:48:18.323460 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 10:48:18.323491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 10:48:18.323523 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 10:48:18.323559 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 10:48:18.323592 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 10:48:18.323623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 10:48:18.323653 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 10:48:18.323684 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 10:48:18.323715 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 10:48:18.323749 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 10:48:18.323781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:48:18.323811 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:48:18.323846 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 10:48:18.323877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 10:48:18.323906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 10:48:18.323939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:48:18.323980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 10:48:18.324010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:48:18.324041 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 10:48:18.324071 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 10:48:18.324106 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 10:48:18.324137 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 10:48:18.324167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:48:18.324421 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:48:18.324460 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:48:18.324493 systemd[1]: Reached target swap.target - Swaps. 
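The `systemd 255` banner above encodes compile-time options as a `+`/`-` feature string. A throwaway parser (not part of systemd) that splits the string copied from that line into enabled and disabled sets:

```python
# Split systemd's startup feature string into enabled/disabled features.
# The sample string is copied from the journal line above.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT "
            "default-hierarchy=unified")

tokens   = FEATURES.split()
enabled  = {t[1:] for t in tokens if t.startswith("+")}
disabled = {t[1:] for t in tokens if t.startswith("-")}
options  = {t for t in tokens if t[0] not in "+-"}

print(len(enabled), "enabled,", len(disabled), "disabled, extra options:", options)
```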
Jan 29 10:48:18.324523 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 10:48:18.324551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 10:48:18.324586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:48:18.324617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:48:18.324649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:48:18.324680 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 10:48:18.324710 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 10:48:18.324739 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 10:48:18.324769 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 10:48:18.324801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 10:48:18.324831 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 10:48:18.324865 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 10:48:18.324895 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 10:48:18.324926 systemd[1]: Reached target machines.target - Containers. Jan 29 10:48:18.324955 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 10:48:18.324984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:48:18.325015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:48:18.325046 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 10:48:18.325076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:48:18.325107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:48:18.325143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:48:18.325172 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 10:48:18.325229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:48:18.325267 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 10:48:18.325298 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 10:48:18.325329 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 10:48:18.325358 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 10:48:18.325390 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 10:48:18.325427 kernel: fuse: init (API version 7.39) Jan 29 10:48:18.325456 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:48:18.325484 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:48:18.325513 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 10:48:18.325540 kernel: ACPI: bus type drm_connector registered Jan 29 10:48:18.325571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 29 10:48:18.325601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:48:18.325631 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 10:48:18.325661 systemd[1]: Stopped verity-setup.service. Jan 29 10:48:18.325696 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 10:48:18.325725 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 10:48:18.325754 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 10:48:18.325782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 10:48:18.325810 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 10:48:18.325841 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 10:48:18.325869 kernel: loop: module loaded Jan 29 10:48:18.325903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:48:18.325932 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 10:48:18.325961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 10:48:18.325989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:48:18.326018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:48:18.326048 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:48:18.326081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:48:18.326114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:48:18.326143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:48:18.326172 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 10:48:18.328278 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 10:48:18.328335 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:48:18.328369 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:48:18.328400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:48:18.328429 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 10:48:18.328506 systemd-journald[1479]: Collecting audit messages is disabled. Jan 29 10:48:18.328558 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 10:48:18.328590 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 10:48:18.328621 systemd-journald[1479]: Journal started Jan 29 10:48:18.328680 systemd-journald[1479]: Runtime Journal (/run/log/journal/ec261290d286297a7f5457ac39f2febd) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:48:17.762967 systemd[1]: Queued start job for default target multi-user.target. Jan 29 10:48:17.801534 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 10:48:17.802315 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 10:48:18.337271 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 10:48:18.356221 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 10:48:18.356311 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 10:48:18.358824 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 29 10:48:18.368278 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 10:48:18.386368 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 10:48:18.402174 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 10:48:18.404521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:48:18.412262 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 10:48:18.416240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:48:18.426715 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 10:48:18.426801 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:48:18.438246 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:48:18.450242 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 10:48:18.479393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:48:18.484253 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:48:18.486766 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 10:48:18.497944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 10:48:18.501777 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 10:48:18.505014 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 10:48:18.531288 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 10:48:18.559055 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 10:48:18.575503 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 10:48:18.580624 kernel: loop0: detected capacity change from 0 to 53784 Jan 29 10:48:18.590543 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 10:48:18.606366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:48:18.620542 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 10:48:18.648912 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:48:18.656865 systemd-journald[1479]: Time spent on flushing to /var/log/journal/ec261290d286297a7f5457ac39f2febd is 59.750ms for 898 entries. Jan 29 10:48:18.656865 systemd-journald[1479]: System Journal (/var/log/journal/ec261290d286297a7f5457ac39f2febd) is 8.0M, max 195.6M, 187.6M free. Jan 29 10:48:18.731510 systemd-journald[1479]: Received client request to flush runtime journal. Jan 29 10:48:18.731611 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 10:48:18.731650 kernel: loop1: detected capacity change from 0 to 113552 Jan 29 10:48:18.662797 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 29 10:48:18.662821 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 29 10:48:18.677531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
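Quick arithmetic on the journald figures above (59.750 ms spent flushing 898 entries, runtime journal capped at 75.3M, persistent journal at 195.6M); a rough back-of-the-envelope sketch, not anything journald itself reports:

```python
# Average per-entry flush cost and journal size-cap ratio, from the lines above.
entries = 898
total_ms = 59.750
print(f"average flush cost per entry: {total_ms * 1000 / entries:.1f} us")  # ~66.5 us

runtime_max_mib, system_max_mib = 75.3, 195.6
print(f"persistent journal cap is ~{system_max_mib / runtime_max_mib:.1f}x the runtime cap")
```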
Jan 29 10:48:18.688597 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 10:48:18.696985 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 10:48:18.703124 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 10:48:18.710687 udevadm[1541]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 10:48:18.736344 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 10:48:18.816268 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 10:48:18.826603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:48:18.861831 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Jan 29 10:48:18.861869 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Jan 29 10:48:18.873325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:48:18.883294 kernel: loop2: detected capacity change from 0 to 116784 Jan 29 10:48:18.972998 kernel: loop3: detected capacity change from 0 to 189592 Jan 29 10:48:19.017247 kernel: loop4: detected capacity change from 0 to 53784 Jan 29 10:48:19.055483 kernel: loop5: detected capacity change from 0 to 113552 Jan 29 10:48:19.069302 kernel: loop6: detected capacity change from 0 to 116784 Jan 29 10:48:19.098247 kernel: loop7: detected capacity change from 0 to 189592 Jan 29 10:48:19.125515 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 10:48:19.126500 (sd-merge)[1558]: Merged extensions into '/usr'. Jan 29 10:48:19.133969 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 10:48:19.134004 systemd[1]: Reloading... Jan 29 10:48:19.264233 zram_generator::config[1581]: No configuration found. Jan 29 10:48:19.605318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:48:19.721978 systemd[1]: Reloading finished in 583 ms. Jan 29 10:48:19.771714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 10:48:19.796565 systemd[1]: Starting ensure-sysext.service... Jan 29 10:48:19.807582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:48:19.833718 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)... Jan 29 10:48:19.833749 systemd[1]: Reloading... Jan 29 10:48:19.876985 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 10:48:19.877530 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 10:48:19.880171 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 10:48:19.880762 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 29 10:48:19.880926 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Jan 29 10:48:19.889987 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. 
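The `(sd-merge)` entries above show systemd-sysext picking up the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-ami' images and merging them into /usr. A rough sketch of the enumeration side of that step: /etc/extensions is the directory the Ignition stage populated earlier in this log, while the other two search paths are assumed standard sysext locations rather than something taken from these lines:

```python
# List sysext images that systemd-sysext would consider on this host.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for image in sorted(p.iterdir()):
        # Raw disk images end in .raw; a plain directory is also a valid extension.
        if image.suffix == ".raw" or image.is_dir():
            print(f"{d}: {image.name}")
```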
Jan 29 10:48:19.890010 systemd-tmpfiles[1637]: Skipping /boot Jan 29 10:48:19.923970 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:48:19.924000 systemd-tmpfiles[1637]: Skipping /boot Jan 29 10:48:20.004240 zram_generator::config[1665]: No configuration found. Jan 29 10:48:20.228135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:48:20.329802 ldconfig[1505]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 10:48:20.342084 systemd[1]: Reloading finished in 507 ms. Jan 29 10:48:20.367279 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 10:48:20.370087 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 10:48:20.377166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:48:20.401653 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:48:20.410541 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 10:48:20.422564 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 10:48:20.429519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:48:20.441565 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:48:20.452660 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 10:48:20.466462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:48:20.481563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:48:20.489411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:48:20.498664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:48:20.500810 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:48:20.504749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:48:20.505067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:48:20.517134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:48:20.525441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:48:20.527622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:48:20.549451 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 10:48:20.553178 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:48:20.555595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:48:20.562386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:48:20.563522 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:48:20.586235 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 10:48:20.599353 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:48:20.618400 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:48:20.623841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:48:20.626044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:48:20.626703 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 10:48:20.632425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:48:20.632720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:48:20.642424 systemd[1]: Finished ensure-sysext.service. Jan 29 10:48:20.646318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 10:48:20.650354 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 10:48:20.666532 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 10:48:20.686959 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 10:48:20.689309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 10:48:20.690061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:48:20.693294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:48:20.696235 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:48:20.696528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:48:20.704880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:48:20.725058 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:48:20.725674 systemd-udevd[1725]: Using default interface naming scheme 'v255'. Jan 29 10:48:20.727566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:48:20.730268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:48:20.748341 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 10:48:20.752867 augenrules[1766]: No rules Jan 29 10:48:20.755902 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:48:20.758352 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:48:20.771569 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 10:48:20.810958 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:48:20.828479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:48:20.934053 systemd-resolved[1724]: Positive Trust Anchors: Jan 29 10:48:20.934094 systemd-resolved[1724]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:48:20.934157 systemd-resolved[1724]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:48:20.945971 systemd-resolved[1724]: Defaulting to hostname 'linux'. Jan 29 10:48:20.949048 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:48:20.951524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:48:20.973996 systemd-networkd[1780]: lo: Link UP Jan 29 10:48:20.974010 systemd-networkd[1780]: lo: Gained carrier Jan 29 10:48:20.975175 systemd-networkd[1780]: Enumeration completed Jan 29 10:48:20.977242 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:48:20.979481 systemd[1]: Reached target network.target - Network. Jan 29 10:48:20.990594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 10:48:21.042417 (udev-worker)[1789]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:21.047126 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 10:48:21.102158 systemd-networkd[1780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:21.102568 systemd-networkd[1780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:48:21.106140 systemd-networkd[1780]: eth0: Link UP Jan 29 10:48:21.106589 systemd-networkd[1780]: eth0: Gained carrier Jan 29 10:48:21.106623 systemd-networkd[1780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:48:21.117394 systemd-networkd[1780]: eth0: DHCPv4 address 172.31.28.141/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:48:21.196305 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1799) Jan 29 10:48:21.301821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:48:21.430673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:48:21.431656 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 10:48:21.444010 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 10:48:21.454551 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 10:48:21.481285 lvm[1897]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:48:21.490296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:48:21.496876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 10:48:21.530795 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
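A quick sanity check on the DHCPv4 lease reported above (172.31.28.141/20 with gateway 172.31.16.1): the /20 prefix places both addresses in the same 172.31.16.0/20 network. A small standard-library sketch of that check:

```python
# Verify that the leased address and the gateway share the /20 network.
import ipaddress

iface = ipaddress.ip_interface("172.31.28.141/20")
gateway = ipaddress.ip_address("172.31.16.1")

print("network:", iface.network)                      # 172.31.16.0/20
print("usable hosts:", iface.network.num_addresses - 2)
print("gateway inside network:", gateway in iface.network)
```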
Jan 29 10:48:21.533673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:48:21.535773 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:48:21.537899 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 10:48:21.540236 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 10:48:21.542805 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 10:48:21.545026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 10:48:21.547392 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 10:48:21.549662 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 10:48:21.549711 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:48:21.551647 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:48:21.554887 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 10:48:21.559403 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 10:48:21.570416 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 10:48:21.574706 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 10:48:21.578068 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 10:48:21.580508 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:48:21.582428 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:48:21.584458 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:48:21.584506 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:48:21.602372 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 10:48:21.609559 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 10:48:21.615747 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 10:48:21.623122 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:48:21.629525 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 10:48:21.636597 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 10:48:21.638706 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 10:48:21.646608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 10:48:21.654536 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 10:48:21.663475 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 10:48:21.667513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 10:48:21.687256 jq[1911]: false Jan 29 10:48:21.675808 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 10:48:21.689627 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 10:48:21.692611 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 29 10:48:21.693529 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 10:48:21.701535 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 10:48:21.710457 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 10:48:21.718893 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 10:48:21.719294 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 10:48:21.742735 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:48:21.766346 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 10:48:21.771958 update_engine[1919]: I20250129 10:48:21.770709 1919 main.cc:92] Flatcar Update Engine starting Jan 29 10:48:21.814772 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 10:48:21.816374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 10:48:21.846266 jq[1920]: true Jan 29 10:48:21.850366 extend-filesystems[1912]: Found loop4 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found loop5 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found loop6 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found loop7 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p1 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p2 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p3 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found usr Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p4 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p6 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p7 Jan 29 10:48:21.850366 extend-filesystems[1912]: Found nvme0n1p9 Jan 29 10:48:21.850366 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Jan 29 10:48:21.899434 dbus-daemon[1910]: [system] SELinux support is enabled Jan 29 10:48:21.899754 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 10:48:21.908366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 10:48:21.908428 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 10:48:21.912309 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 29 10:48:21.912372 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: corporation. 
Support and training for ntp-4 are Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:48:21.936414 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:48:21.913465 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 10:48:21.912392 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:21.913507 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:48:21.912411 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:48:21.912429 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:48:21.912447 ntpd[1914]: corporation. Support and training for ntp-4 are Jan 29 10:48:21.912465 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:48:21.912483 ntpd[1914]: ---------------------------------------------------- Jan 29 10:48:21.926533 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1780 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 10:48:21.930094 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:48:21.932365 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:48:21.932401 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:48:21.943181 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:48:21.943181 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:48:21.943312 update_engine[1919]: I20250129 10:48:21.940482 1919 update_check_scheduler.cc:74] Next update check in 2m15s Jan 29 10:48:21.942824 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:48:21.942899 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:48:21.945436 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listen normally on 3 eth0 172.31.28.141:123 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: bind(21) AF_INET6 fe80::47e:baff:fec4:8779%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: unable to create socket on eth0 (5) for fe80::47e:baff:fec4:8779%2#123 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: failed to init interface for address fe80::47e:baff:fec4:8779%2 Jan 29 10:48:21.948372 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:48:21.945518 ntpd[1914]: Listen normally on 3 eth0 172.31.28.141:123 Jan 29 10:48:21.945584 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:48:21.945660 ntpd[1914]: bind(21) AF_INET6 fe80::47e:baff:fec4:8779%2#123 flags 0x11 failed: Cannot assign 
requested address Jan 29 10:48:21.945699 ntpd[1914]: unable to create socket on eth0 (5) for fe80::47e:baff:fec4:8779%2#123 Jan 29 10:48:21.945726 ntpd[1914]: failed to init interface for address fe80::47e:baff:fec4:8779%2 Jan 29 10:48:21.945782 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:48:21.955124 (ntainerd)[1939]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:48:21.958504 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 10:48:21.961324 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 10:48:21.961691 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 10:48:21.965932 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 10:48:21.971861 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Jan 29 10:48:21.975082 systemd[1]: Started update-engine.service - Update Engine. Jan 29 10:48:21.980814 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:21.982587 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:21.982587 ntpd[1914]: 29 Jan 10:48:21 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:21.980872 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:48:21.990148 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 10:48:22.000311 extend-filesystems[1960]: resize2fs 1.47.1 (20-May-2024) Jan 29 10:48:22.012718 jq[1945]: true Jan 29 10:48:22.025254 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 10:48:22.045987 systemd-logind[1918]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:48:22.046038 systemd-logind[1918]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 10:48:22.047629 systemd-logind[1918]: New seat seat0. Jan 29 10:48:22.050508 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:48:22.129230 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 10:48:22.156582 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:48:22.166596 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1783) Jan 29 10:48:22.169270 extend-filesystems[1960]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 10:48:22.169270 extend-filesystems[1960]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 10:48:22.169270 extend-filesystems[1960]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 10:48:22.179497 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Jan 29 10:48:22.192374 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:48:22.192885 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:48:22.240261 bash[2002]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:48:22.247040 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:48:22.264693 systemd[1]: Starting sshkeys.service... 
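Worked numbers for the EXT4 resize recorded above: /dev/nvme0n1p9 grows from 553472 to 1489915 blocks at the reported 4k block size. A small sketch of the arithmetic:

```python
# Size of /dev/nvme0n1p9 before and after the on-line resize reported above.
BLOCK = 4096
old_blocks, new_blocks = 553472, 1489915

old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB (+{new_gib - old_gib:.2f} GiB)")
```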
Jan 29 10:48:22.275705 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:48:22.275705 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 10:48:22.275705 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Fetch successful Jan 29 10:48:22.276255 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 10:48:22.276255 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Fetch successful Jan 29 10:48:22.276255 coreos-metadata[1909]: Jan 29 10:48:22.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 10:48:22.277492 coreos-metadata[1909]: Jan 29 10:48:22.276 INFO Fetch successful Jan 29 10:48:22.277492 coreos-metadata[1909]: Jan 29 10:48:22.276 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 10:48:22.277966 coreos-metadata[1909]: Jan 29 10:48:22.277 INFO Fetch successful Jan 29 10:48:22.277966 coreos-metadata[1909]: Jan 29 10:48:22.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 10:48:22.283305 coreos-metadata[1909]: Jan 29 10:48:22.283 INFO Fetch failed with 404: resource not found Jan 29 10:48:22.283305 coreos-metadata[1909]: Jan 29 10:48:22.283 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 10:48:22.285228 coreos-metadata[1909]: Jan 29 10:48:22.284 INFO Fetch successful Jan 29 10:48:22.285228 coreos-metadata[1909]: Jan 29 10:48:22.284 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 10:48:22.286969 coreos-metadata[1909]: Jan 29 10:48:22.286 INFO Fetch successful Jan 29 10:48:22.287068 coreos-metadata[1909]: Jan 29 10:48:22.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 10:48:22.287126 coreos-metadata[1909]: Jan 29 10:48:22.287 INFO Fetch successful Jan 29 10:48:22.287126 coreos-metadata[1909]: Jan 29 10:48:22.287 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 10:48:22.290462 coreos-metadata[1909]: Jan 29 10:48:22.289 INFO Fetch successful Jan 29 10:48:22.290462 coreos-metadata[1909]: Jan 29 10:48:22.289 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 10:48:22.290462 coreos-metadata[1909]: Jan 29 10:48:22.289 INFO Fetch successful Jan 29 10:48:22.378370 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 10:48:22.386886 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 10:48:22.424445 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 10:48:22.425316 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 10:48:22.429070 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1955 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 10:48:22.442812 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 10:48:22.448302 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
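The coreos-metadata entries above walk a fixed set of `2021-01-03/meta-data` paths and treat the 404 on `ipv6` as "not present" rather than a failure. A self-contained sketch of that fetch pattern, assuming the same endpoints as the log; the function names and structure are mine, not the agent's source, and it only runs from inside an EC2 instance:

```python
# Fetch a handful of EC2 metadata items with an IMDSv2 token, tolerating 404s.
import urllib.request, urllib.error

IMDS = "http://169.254.169.254"

def imds_get(path, token):
    req = urllib.request.Request(f"{IMDS}{path}",
                                 headers={"X-aws-ec2-metadata-token": token})
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None          # e.g. no IPv6 address assigned, as seen above
        raise

token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"})
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

for item in ("instance-id", "instance-type", "local-ipv4", "public-ipv4", "ipv6",
             "placement/availability-zone", "hostname", "public-hostname"):
    print(item, "=", imds_get(f"/2021-01-03/meta-data/{item}", token))
```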
Jan 29 10:48:22.453102 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 10:48:22.585189 polkitd[2039]: Started polkitd version 121 Jan 29 10:48:22.660809 polkitd[2039]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 10:48:22.660931 polkitd[2039]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 10:48:22.662944 polkitd[2039]: Finished loading, compiling and executing 2 rules Jan 29 10:48:22.665446 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 10:48:22.665716 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 10:48:22.669247 polkitd[2039]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 10:48:22.692698 coreos-metadata[2022]: Jan 29 10:48:22.692 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:48:22.701240 coreos-metadata[2022]: Jan 29 10:48:22.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 10:48:22.706109 coreos-metadata[2022]: Jan 29 10:48:22.705 INFO Fetch successful Jan 29 10:48:22.706109 coreos-metadata[2022]: Jan 29 10:48:22.705 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 10:48:22.706109 coreos-metadata[2022]: Jan 29 10:48:22.705 INFO Fetch successful Jan 29 10:48:22.713353 unknown[2022]: wrote ssh authorized keys file for user: core Jan 29 10:48:22.764178 systemd-resolved[1724]: System hostname changed to 'ip-172-31-28-141'. Jan 29 10:48:22.764183 systemd-hostnamed[1955]: Hostname set to (transient) Jan 29 10:48:22.799216 update-ssh-keys[2099]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:48:22.805125 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 10:48:22.820512 systemd[1]: Finished sshkeys.service. Jan 29 10:48:22.845071 containerd[1939]: time="2025-01-29T10:48:22.844926576Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:48:22.894993 containerd[1939]: time="2025-01-29T10:48:22.894916584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.897599424Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.897667428Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.897703608Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.897991272Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898023804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898141128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898168068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898489644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898519644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898548888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899252 containerd[1939]: time="2025-01-29T10:48:22.898575396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.898730184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.899102616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.899337948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.899369052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.899541300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 10:48:22.899843 containerd[1939]: time="2025-01-29T10:48:22.899640792Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:48:22.913074 ntpd[1914]: bind(24) AF_INET6 fe80::47e:baff:fec4:8779%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:22.913611 ntpd[1914]: 29 Jan 10:48:22 ntpd[1914]: bind(24) AF_INET6 fe80::47e:baff:fec4:8779%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:48:22.913611 ntpd[1914]: 29 Jan 10:48:22 ntpd[1914]: unable to create socket on eth0 (6) for fe80::47e:baff:fec4:8779%2#123 Jan 29 10:48:22.913611 ntpd[1914]: 29 Jan 10:48:22 ntpd[1914]: failed to init interface for address fe80::47e:baff:fec4:8779%2 Jan 29 10:48:22.913146 ntpd[1914]: unable to create socket on eth0 (6) for fe80::47e:baff:fec4:8779%2#123 Jan 29 10:48:22.913187 ntpd[1914]: failed to init interface for address fe80::47e:baff:fec4:8779%2 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.916835508Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.916938132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.916974576Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.917012424Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.917050764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:48:22.917607 containerd[1939]: time="2025-01-29T10:48:22.917325564Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 10:48:22.917907 containerd[1939]: time="2025-01-29T10:48:22.917778444Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:48:22.918027 containerd[1939]: time="2025-01-29T10:48:22.917989776Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:48:22.918093 containerd[1939]: time="2025-01-29T10:48:22.918033156Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:48:22.918093 containerd[1939]: time="2025-01-29T10:48:22.918069300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 10:48:22.918186 containerd[1939]: time="2025-01-29T10:48:22.918101004Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918186 containerd[1939]: time="2025-01-29T10:48:22.918131016Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918186 containerd[1939]: time="2025-01-29T10:48:22.918161076Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918350 containerd[1939]: time="2025-01-29T10:48:22.918218184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918350 containerd[1939]: time="2025-01-29T10:48:22.918264336Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918350 containerd[1939]: time="2025-01-29T10:48:22.918298200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918350 containerd[1939]: time="2025-01-29T10:48:22.918331164Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918499 containerd[1939]: time="2025-01-29T10:48:22.918359988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:48:22.918499 containerd[1939]: time="2025-01-29T10:48:22.918399792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918499 containerd[1939]: time="2025-01-29T10:48:22.918430272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918499 containerd[1939]: time="2025-01-29T10:48:22.918460356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 29 10:48:22.918499 containerd[1939]: time="2025-01-29T10:48:22.918491040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918519396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918548844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918604980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918634272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918667764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918704 containerd[1939]: time="2025-01-29T10:48:22.918697164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918725280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918753168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918784296Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918825636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918855564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.918981 containerd[1939]: time="2025-01-29T10:48:22.918881112Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919020684Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919059864Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919084788Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919112868Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919135116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 29 10:48:22.919274 containerd[1939]: time="2025-01-29T10:48:22.919164768Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 10:48:22.921241 containerd[1939]: time="2025-01-29T10:48:22.919188876Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:48:22.921241 containerd[1939]: time="2025-01-29T10:48:22.919852236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 10:48:22.921399 containerd[1939]: time="2025-01-29T10:48:22.920414076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:48:22.921399 containerd[1939]: time="2025-01-29T10:48:22.920498880Z" level=info msg="Connect containerd service" Jan 29 10:48:22.921399 containerd[1939]: time="2025-01-29T10:48:22.920558136Z" level=info msg="using legacy CRI server" Jan 29 10:48:22.921399 containerd[1939]: time="2025-01-29T10:48:22.920576004Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:48:22.921399 containerd[1939]: 
time="2025-01-29T10:48:22.920822136Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:48:22.922784 containerd[1939]: time="2025-01-29T10:48:22.922717452Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:48:22.923093 containerd[1939]: time="2025-01-29T10:48:22.923030844Z" level=info msg="Start subscribing containerd event" Jan 29 10:48:22.923299 containerd[1939]: time="2025-01-29T10:48:22.923273640Z" level=info msg="Start recovering state" Jan 29 10:48:22.923547 containerd[1939]: time="2025-01-29T10:48:22.923510724Z" level=info msg="Start event monitor" Jan 29 10:48:22.923650 containerd[1939]: time="2025-01-29T10:48:22.923625612Z" level=info msg="Start snapshots syncer" Jan 29 10:48:22.923765 containerd[1939]: time="2025-01-29T10:48:22.923741400Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:48:22.923881 containerd[1939]: time="2025-01-29T10:48:22.923856912Z" level=info msg="Start streaming server" Jan 29 10:48:22.925332 containerd[1939]: time="2025-01-29T10:48:22.925270272Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:48:22.925639 containerd[1939]: time="2025-01-29T10:48:22.925613796Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:48:22.925939 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:48:22.929466 containerd[1939]: time="2025-01-29T10:48:22.929388468Z" level=info msg="containerd successfully booted in 0.088819s" Jan 29 10:48:22.940401 systemd-networkd[1780]: eth0: Gained IPv6LL Jan 29 10:48:22.947782 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:48:22.951905 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:48:22.967366 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 10:48:22.986377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:48:22.996394 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:48:23.084216 amazon-ssm-agent[2113]: Initializing new seelog logger Jan 29 10:48:23.084698 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:48:23.089674 amazon-ssm-agent[2113]: New Seelog Logger Creation Complete Jan 29 10:48:23.089974 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.090059 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.090827 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 processing appconfig overrides Jan 29 10:48:23.091415 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.091560 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.091745 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 processing appconfig overrides Jan 29 10:48:23.092243 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.092243 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
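The containerd error above ("no network config found in /etc/cni/net.d") is expected at this point: the directory is only populated once a CNI provider (Cilium, whose pod appears later in this log) drops its configuration there. Purely to illustrate the file shape containerd is waiting for, a hedged sketch that writes a minimal bridge conflist; the plugin choice, name, and subnet are assumptions, not what this node ends up using:

import json
import pathlib

# Illustrative bridge-plugin conflist; writing to /etc/cni/net.d requires root.
conf = {
    "cniVersion": "0.4.0",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"},
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
print("wrote", path)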
Jan 29 10:48:23.092243 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 processing appconfig overrides Jan 29 10:48:23.093251 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO Proxy environment variables: Jan 29 10:48:23.094938 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.095058 amazon-ssm-agent[2113]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:48:23.095389 amazon-ssm-agent[2113]: 2025/01/29 10:48:23 processing appconfig overrides Jan 29 10:48:23.192494 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO https_proxy: Jan 29 10:48:23.295215 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO http_proxy: Jan 29 10:48:23.391046 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO no_proxy: Jan 29 10:48:23.415185 sshd_keygen[1951]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:48:23.462658 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:48:23.473673 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:48:23.484687 systemd[1]: Started sshd@0-172.31.28.141:22-139.178.89.65:49724.service - OpenSSH per-connection server daemon (139.178.89.65:49724). Jan 29 10:48:23.491162 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO Checking if agent identity type OnPrem can be assumed Jan 29 10:48:23.531815 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:48:23.532164 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:48:23.548653 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:48:23.589345 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO Checking if agent identity type EC2 can be assumed Jan 29 10:48:23.602500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:48:23.616753 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:48:23.628808 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 10:48:23.631705 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 10:48:23.688635 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO Agent will take identity from EC2 Jan 29 10:48:23.785600 sshd[2140]: Accepted publickey for core from 139.178.89.65 port 49724 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:23.787312 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:23.789297 sshd-session[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:23.815660 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:48:23.830703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:48:23.844753 systemd-logind[1918]: New session 1 of user core. Jan 29 10:48:23.873265 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:48:23.886796 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:23.891833 systemd[1]: Starting user@500.service - User Manager for UID 500... 
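The "SHA256:JmvWSq8..." value in the Accepted publickey line above is the OpenSSH-style key fingerprint: the SHA-256 digest of the decoded key blob, base64-encoded with the trailing '=' padding stripped. A small sketch that reproduces the computation (the blob assembled here is an all-zero placeholder, not the key from this log):

import base64
import hashlib
import struct

def ssh_fingerprint(blob: bytes) -> str:
    # OpenSSH fingerprint: base64(SHA-256(key blob)) without '=' padding.
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# For a real key, the blob is the base64 field of an authorized_keys line:
#   blob = base64.b64decode(line.split()[1])
# Here we assemble a placeholder ed25519 blob in SSH wire format instead.
key_type = b"ssh-ed25519"
placeholder_key = bytes(32)
blob = (struct.pack(">I", len(key_type)) + key_type +
        struct.pack(">I", len(placeholder_key)) + placeholder_key)
print(ssh_fingerprint(blob))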
Jan 29 10:48:23.912989 (systemd)[2152]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:48:23.986373 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:48:24.087308 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 10:48:24.143854 systemd[2152]: Queued start job for default target default.target. Jan 29 10:48:24.152285 systemd[2152]: Created slice app.slice - User Application Slice. Jan 29 10:48:24.152348 systemd[2152]: Reached target paths.target - Paths. Jan 29 10:48:24.152380 systemd[2152]: Reached target timers.target - Timers. Jan 29 10:48:24.158389 systemd[2152]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:48:24.185503 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 10:48:24.192815 systemd[2152]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:48:24.193062 systemd[2152]: Reached target sockets.target - Sockets. Jan 29 10:48:24.193095 systemd[2152]: Reached target basic.target - Basic System. Jan 29 10:48:24.193175 systemd[2152]: Reached target default.target - Main User Target. Jan 29 10:48:24.193272 systemd[2152]: Startup finished in 262ms. Jan 29 10:48:24.194675 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:48:24.207855 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:48:24.285807 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 10:48:24.386715 systemd[1]: Started sshd@1-172.31.28.141:22-139.178.89.65:49740.service - OpenSSH per-connection server daemon (139.178.89.65:49740). Jan 29 10:48:24.392303 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 10:48:24.490926 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [Registrar] Starting registrar module Jan 29 10:48:24.521751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:48:24.526414 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:48:24.529629 systemd[1]: Startup finished in 1.081s (kernel) + 8.720s (initrd) + 8.074s (userspace) = 17.876s. Jan 29 10:48:24.538381 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:48:24.590950 agetty[2146]: failed to open credentials directory Jan 29 10:48:24.591610 amazon-ssm-agent[2113]: 2025-01-29 10:48:23 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 10:48:24.592106 agetty[2148]: failed to open credentials directory Jan 29 10:48:24.607167 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 49740 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:24.612492 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:24.630465 systemd-logind[1918]: New session 2 of user core. Jan 29 10:48:24.636111 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 10:48:24.773380 sshd[2175]: Connection closed by 139.178.89.65 port 49740 Jan 29 10:48:24.776375 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:24.782721 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 29 10:48:24.786364 systemd[1]: sshd@1-172.31.28.141:22-139.178.89.65:49740.service: Deactivated successfully. Jan 29 10:48:24.793445 systemd-logind[1918]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:48:24.814998 systemd[1]: Started sshd@2-172.31.28.141:22-139.178.89.65:49746.service - OpenSSH per-connection server daemon (139.178.89.65:49746). Jan 29 10:48:24.817663 systemd-logind[1918]: Removed session 2. Jan 29 10:48:24.992031 amazon-ssm-agent[2113]: 2025-01-29 10:48:24 INFO [EC2Identity] EC2 registration was successful. Jan 29 10:48:25.024245 amazon-ssm-agent[2113]: 2025-01-29 10:48:24 INFO [CredentialRefresher] credentialRefresher has started Jan 29 10:48:25.024245 amazon-ssm-agent[2113]: 2025-01-29 10:48:24 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 10:48:25.024245 amazon-ssm-agent[2113]: 2025-01-29 10:48:25 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 10:48:25.054550 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 49746 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:25.056077 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:25.067084 systemd-logind[1918]: New session 3 of user core. Jan 29 10:48:25.072687 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 10:48:25.092225 amazon-ssm-agent[2113]: 2025-01-29 10:48:25 INFO [CredentialRefresher] Next credential rotation will be in 30.933325721933333 minutes Jan 29 10:48:25.195470 sshd[2186]: Connection closed by 139.178.89.65 port 49746 Jan 29 10:48:25.196806 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:25.204601 systemd-logind[1918]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:48:25.204804 systemd[1]: sshd@2-172.31.28.141:22-139.178.89.65:49746.service: Deactivated successfully. Jan 29 10:48:25.208840 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 10:48:25.213309 systemd-logind[1918]: Removed session 3. Jan 29 10:48:25.232883 systemd[1]: Started sshd@3-172.31.28.141:22-139.178.89.65:49752.service - OpenSSH per-connection server daemon (139.178.89.65:49752). Jan 29 10:48:25.321991 kubelet[2170]: E0129 10:48:25.321931 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:48:25.327453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:48:25.327763 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:48:25.329347 systemd[1]: kubelet.service: Consumed 1.248s CPU time. Jan 29 10:48:25.427985 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 49752 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:25.430443 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:25.437993 systemd-logind[1918]: New session 4 of user core. Jan 29 10:48:25.447440 systemd[1]: Started session-4.scope - Session 4 of User core. 
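The kubelet failure above is benign at this stage: /var/lib/kubelet/config.yaml does not exist until the node is joined to a cluster (kubeadm normally writes it), so the unit exits and is retried later. Only to illustrate the general shape of that file, a hedged sketch that emits a minimal KubeletConfiguration; the fields chosen here are assumptions, not the configuration this node will eventually receive:

import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

def write_config(path: str) -> pathlib.Path:
    # Writes an illustrative kubelet config; kubeadm generates the real one.
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)
    return p

if __name__ == "__main__":
    # Write locally for the demo rather than to /var/lib/kubelet.
    print("wrote", write_config("./config.yaml"))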
Jan 29 10:48:25.573080 sshd[2196]: Connection closed by 139.178.89.65 port 49752 Jan 29 10:48:25.572889 sshd-session[2191]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:25.578356 systemd[1]: sshd@3-172.31.28.141:22-139.178.89.65:49752.service: Deactivated successfully. Jan 29 10:48:25.581127 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:48:25.584109 systemd-logind[1918]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:48:25.586143 systemd-logind[1918]: Removed session 4. Jan 29 10:48:25.613721 systemd[1]: Started sshd@4-172.31.28.141:22-139.178.89.65:49758.service - OpenSSH per-connection server daemon (139.178.89.65:49758). Jan 29 10:48:25.810942 sshd[2201]: Accepted publickey for core from 139.178.89.65 port 49758 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:25.813355 sshd-session[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:25.820586 systemd-logind[1918]: New session 5 of user core. Jan 29 10:48:25.832465 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:48:25.913015 ntpd[1914]: Listen normally on 7 eth0 [fe80::47e:baff:fec4:8779%2]:123 Jan 29 10:48:25.914082 ntpd[1914]: 29 Jan 10:48:25 ntpd[1914]: Listen normally on 7 eth0 [fe80::47e:baff:fec4:8779%2]:123 Jan 29 10:48:25.949750 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 10:48:25.950415 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:25.966319 sudo[2204]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:25.989264 sshd[2203]: Connection closed by 139.178.89.65 port 49758 Jan 29 10:48:25.990290 sshd-session[2201]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:25.995088 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:48:25.997531 systemd[1]: sshd@4-172.31.28.141:22-139.178.89.65:49758.service: Deactivated successfully. Jan 29 10:48:26.002399 systemd-logind[1918]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:48:26.004598 systemd-logind[1918]: Removed session 5. Jan 29 10:48:26.024669 systemd[1]: Started sshd@5-172.31.28.141:22-139.178.89.65:49764.service - OpenSSH per-connection server daemon (139.178.89.65:49764). Jan 29 10:48:26.062176 amazon-ssm-agent[2113]: 2025-01-29 10:48:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 10:48:26.163399 amazon-ssm-agent[2113]: 2025-01-29 10:48:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2211) started Jan 29 10:48:26.230957 sshd[2209]: Accepted publickey for core from 139.178.89.65 port 49764 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:26.236124 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:26.247529 systemd-logind[1918]: New session 6 of user core. Jan 29 10:48:26.254489 systemd[1]: Started session-6.scope - Session 6 of User core. 
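The `setenforce 1` sudo entry above is, at the kernel interface level, just a write of "1" to the selinuxfs enforce node. A rough Python equivalent, assuming selinuxfs is mounted at /sys/fs/selinux and root privileges for the write:

from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")  # selinuxfs mount point assumed

def set_enforcing(on: bool) -> None:
    # Equivalent of `setenforce 1` / `setenforce 0`; requires root.
    ENFORCE.write_text("1" if on else "0")

def is_enforcing() -> bool:
    # Equivalent of `getenforce` returning Enforcing vs Permissive.
    return ENFORCE.read_text().strip() == "1"

if __name__ == "__main__":
    print("enforcing:", is_enforcing())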
Jan 29 10:48:26.264443 amazon-ssm-agent[2113]: 2025-01-29 10:48:26 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 10:48:26.358383 sudo[2224]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 10:48:26.359000 sudo[2224]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:26.365320 sudo[2224]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:26.375046 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 10:48:26.375696 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:26.398800 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:48:26.447462 augenrules[2246]: No rules Jan 29 10:48:26.450366 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:48:26.451311 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:48:26.454112 sudo[2223]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:26.477083 sshd[2221]: Connection closed by 139.178.89.65 port 49764 Jan 29 10:48:26.477933 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:26.483861 systemd[1]: sshd@5-172.31.28.141:22-139.178.89.65:49764.service: Deactivated successfully. Jan 29 10:48:26.486773 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:48:26.488755 systemd-logind[1918]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:48:26.490660 systemd-logind[1918]: Removed session 6. Jan 29 10:48:26.516741 systemd[1]: Started sshd@6-172.31.28.141:22-139.178.89.65:49774.service - OpenSSH per-connection server daemon (139.178.89.65:49774). Jan 29 10:48:26.708475 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 49774 ssh2: RSA SHA256:JmvWSq8OQrjuKxgpNsrUVji2I6gJ/9NfV7R8kJq+KKI Jan 29 10:48:26.710869 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:48:26.718955 systemd-logind[1918]: New session 7 of user core. Jan 29 10:48:26.730467 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 10:48:26.833598 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:48:26.834188 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:48:27.571975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:48:27.572315 systemd[1]: kubelet.service: Consumed 1.248s CPU time. Jan 29 10:48:27.580882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:48:27.641800 systemd[1]: Reloading requested from client PID 2289 ('systemctl') (unit session-7.scope)... Jan 29 10:48:27.641835 systemd[1]: Reloading... Jan 29 10:48:27.834360 zram_generator::config[2330]: No configuration found. Jan 29 10:48:28.085217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:48:28.256368 systemd[1]: Reloading finished in 613 ms. Jan 29 10:48:28.341394 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 10:48:28.341833 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 29 10:48:28.342373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:48:28.353991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:48:28.863793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:48:28.879524 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:48:28.670950 systemd-resolved[1724]: Clock change detected. Flushing caches. Jan 29 10:48:28.678918 systemd-journald[1479]: Time jumped backwards, rotating. Jan 29 10:48:28.722385 kubelet[2392]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:48:28.722385 kubelet[2392]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:48:28.722385 kubelet[2392]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:48:28.722892 kubelet[2392]: I0129 10:48:28.722521 2392 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:48:29.672043 kubelet[2392]: I0129 10:48:29.671413 2392 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 10:48:29.672043 kubelet[2392]: I0129 10:48:29.671467 2392 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:48:29.672043 kubelet[2392]: I0129 10:48:29.671865 2392 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 10:48:29.704934 kubelet[2392]: I0129 10:48:29.704616 2392 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:48:29.714886 kubelet[2392]: E0129 10:48:29.714832 2392 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 10:48:29.714886 kubelet[2392]: I0129 10:48:29.714884 2392 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 10:48:29.722032 kubelet[2392]: I0129 10:48:29.721277 2392 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 10:48:29.722032 kubelet[2392]: I0129 10:48:29.721519 2392 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:48:29.722032 kubelet[2392]: I0129 10:48:29.721733 2392 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:48:29.722245 kubelet[2392]: I0129 10:48:29.721768 2392 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.28.141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:48:29.722245 kubelet[2392]: I0129 10:48:29.722128 2392 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:48:29.722245 kubelet[2392]: I0129 10:48:29.722147 2392 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 10:48:29.722870 kubelet[2392]: I0129 10:48:29.722334 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:48:29.725665 kubelet[2392]: I0129 10:48:29.724904 2392 kubelet.go:408] "Attempting to sync node with API server" Jan 29 10:48:29.725665 kubelet[2392]: I0129 10:48:29.724955 2392 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:48:29.725665 kubelet[2392]: I0129 10:48:29.725066 2392 kubelet.go:314] "Adding apiserver pod source" Jan 29 10:48:29.725665 kubelet[2392]: I0129 10:48:29.725087 2392 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:48:29.727376 kubelet[2392]: E0129 10:48:29.727326 2392 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:29.727445 kubelet[2392]: E0129 10:48:29.727407 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:29.728724 kubelet[2392]: I0129 10:48:29.728690 2392 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:48:29.731829 kubelet[2392]: I0129 10:48:29.731793 2392 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:48:29.733001 kubelet[2392]: W0129 10:48:29.732939 2392 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 10:48:29.734178 kubelet[2392]: I0129 10:48:29.734149 2392 server.go:1269] "Started kubelet" Jan 29 10:48:29.735133 kubelet[2392]: I0129 10:48:29.735032 2392 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:48:29.736711 kubelet[2392]: I0129 10:48:29.736664 2392 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:48:29.740063 kubelet[2392]: I0129 10:48:29.739946 2392 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:48:29.740591 kubelet[2392]: I0129 10:48:29.740561 2392 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:48:29.743005 kubelet[2392]: I0129 10:48:29.742372 2392 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:48:29.748950 kubelet[2392]: I0129 10:48:29.747660 2392 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 10:48:29.754367 kubelet[2392]: I0129 10:48:29.754227 2392 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:48:29.754576 kubelet[2392]: E0129 10:48:29.754527 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:29.757612 kubelet[2392]: I0129 10:48:29.757555 2392 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:48:29.759930 kubelet[2392]: I0129 10:48:29.757715 2392 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:48:29.761150 kubelet[2392]: I0129 10:48:29.761107 2392 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:48:29.764132 kubelet[2392]: I0129 10:48:29.763217 2392 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:48:29.775089 kubelet[2392]: I0129 10:48:29.774385 2392 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:48:29.786511 kubelet[2392]: E0129 10:48:29.786451 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.28.141\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 10:48:29.786649 kubelet[2392]: W0129 10:48:29.786574 2392 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 10:48:29.786699 kubelet[2392]: E0129 10:48:29.786642 2392 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 29 10:48:29.786884 kubelet[2392]: W0129 10:48:29.786742 2392 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.28.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 10:48:29.786884 kubelet[2392]: E0129 10:48:29.786801 2392 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.28.141\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 10:48:29.787006 kubelet[2392]: W0129 10:48:29.786887 2392 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 10:48:29.787006 kubelet[2392]: E0129 10:48:29.786915 2392 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 10:48:29.789067 kubelet[2392]: E0129 10:48:29.787272 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.141.181f242205dc4f5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.141,UID:172.31.28.141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.28.141,},FirstTimestamp:2025-01-29 10:48:29.734113117 +0000 UTC m=+1.088683578,LastTimestamp:2025-01-29 10:48:29.734113117 +0000 UTC m=+1.088683578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.141,}" Jan 29 10:48:29.809515 kubelet[2392]: I0129 10:48:29.809460 2392 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:48:29.809515 kubelet[2392]: I0129 10:48:29.809496 2392 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:48:29.810088 kubelet[2392]: I0129 10:48:29.809725 2392 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:48:29.810088 kubelet[2392]: E0129 10:48:29.809677 2392 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.28.141.181f24220a36f555 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.28.141,UID:172.31.28.141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.28.141 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.28.141,},FirstTimestamp:2025-01-29 10:48:29.807162709 +0000 UTC m=+1.161733146,LastTimestamp:2025-01-29 10:48:29.807162709 +0000 UTC m=+1.161733146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.28.141,}" Jan 29 10:48:29.816127 kubelet[2392]: I0129 10:48:29.816075 2392 policy_none.go:49] "None policy: Start" Jan 29 10:48:29.818273 kubelet[2392]: I0129 
10:48:29.817695 2392 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:48:29.818273 kubelet[2392]: I0129 10:48:29.817751 2392 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:48:29.835071 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:48:29.853691 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:48:29.855138 kubelet[2392]: E0129 10:48:29.854685 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:29.865115 kubelet[2392]: I0129 10:48:29.863705 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:48:29.864708 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 10:48:29.868365 kubelet[2392]: I0129 10:48:29.868313 2392 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 10:48:29.868365 kubelet[2392]: I0129 10:48:29.868358 2392 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:48:29.868504 kubelet[2392]: I0129 10:48:29.868387 2392 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 10:48:29.868504 kubelet[2392]: E0129 10:48:29.868457 2392 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:48:29.878475 kubelet[2392]: I0129 10:48:29.878422 2392 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:48:29.881502 kubelet[2392]: I0129 10:48:29.880940 2392 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:48:29.881502 kubelet[2392]: I0129 10:48:29.881123 2392 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:48:29.881641 kubelet[2392]: I0129 10:48:29.881552 2392 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:48:29.887309 kubelet[2392]: E0129 10:48:29.887253 2392 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.28.141\" not found" Jan 29 10:48:29.985091 kubelet[2392]: I0129 10:48:29.983524 2392 kubelet_node_status.go:72] "Attempting to register node" node="172.31.28.141" Jan 29 10:48:29.994762 kubelet[2392]: I0129 10:48:29.994560 2392 kubelet_node_status.go:75] "Successfully registered node" node="172.31.28.141" Jan 29 10:48:29.994762 kubelet[2392]: E0129 10:48:29.994603 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.28.141\": node \"172.31.28.141\" not found" Jan 29 10:48:30.054743 kubelet[2392]: E0129 10:48:30.054695 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.155745 kubelet[2392]: E0129 10:48:30.155690 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.182386 sudo[2257]: pam_unix(sudo:session): session closed for user root Jan 29 10:48:30.205694 sshd[2256]: Connection closed by 139.178.89.65 port 49774 Jan 29 10:48:30.206447 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Jan 29 10:48:30.212651 systemd[1]: sshd@6-172.31.28.141:22-139.178.89.65:49774.service: Deactivated successfully. 
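In the kubelet messages above the node registers itself and then briefly logs "node ... not found" until its informer cache catches up, while the earlier "system:anonymous" errors disappear once client-certificate bootstrapping completes. One way to confirm the registration from outside is the kubernetes Python client; this sketch assumes that client library and a kubeconfig with read access, neither of which appears in this log:

from kubernetes import client, config

def node_ready(name: str) -> bool:
    # Reads the Node object and checks its Ready condition.
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    node = client.CoreV1Api().read_node(name)
    return any(c.type == "Ready" and c.status == "True"
               for c in node.status.conditions)

if __name__ == "__main__":
    print("172.31.28.141 ready:", node_ready("172.31.28.141"))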
Jan 29 10:48:30.216719 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 10:48:30.219524 systemd-logind[1918]: Session 7 logged out. Waiting for processes to exit. Jan 29 10:48:30.221399 systemd-logind[1918]: Removed session 7. Jan 29 10:48:30.257564 kubelet[2392]: E0129 10:48:30.256774 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.357325 kubelet[2392]: E0129 10:48:30.357259 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.457779 kubelet[2392]: E0129 10:48:30.457721 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.558285 kubelet[2392]: E0129 10:48:30.558231 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.658676 kubelet[2392]: E0129 10:48:30.658624 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.675126 kubelet[2392]: I0129 10:48:30.675076 2392 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 10:48:30.675332 kubelet[2392]: W0129 10:48:30.675288 2392 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 10:48:30.728593 kubelet[2392]: E0129 10:48:30.728519 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:30.759181 kubelet[2392]: E0129 10:48:30.759131 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.859670 kubelet[2392]: E0129 10:48:30.859543 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:30.959754 kubelet[2392]: E0129 10:48:30.959708 2392 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.28.141\" not found" Jan 29 10:48:31.061174 kubelet[2392]: I0129 10:48:31.061074 2392 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 10:48:31.061827 containerd[1939]: time="2025-01-29T10:48:31.061680083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 10:48:31.062347 kubelet[2392]: I0129 10:48:31.062029 2392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 10:48:31.729360 kubelet[2392]: E0129 10:48:31.729303 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:31.729901 kubelet[2392]: I0129 10:48:31.729398 2392 apiserver.go:52] "Watching apiserver" Jan 29 10:48:31.747299 systemd[1]: Created slice kubepods-besteffort-pod87294608_8bb0_4e0a_94c0_d01a965d6232.slice - libcontainer container kubepods-besteffort-pod87294608_8bb0_4e0a_94c0_d01a965d6232.slice. 
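The runtime config update above hands containerd a pod CIDR of 192.168.1.0/24; a quick stdlib check of what that range provides on this node (how many addresses the IPAM plugin actually hands out depends on its configuration):

import ipaddress

cidr = ipaddress.ip_network("192.168.1.0/24")
print(cidr.num_addresses, "addresses in", cidr)
print("first/last usable host:", cidr[1], "-", cidr[-2])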
Jan 29 10:48:31.763017 kubelet[2392]: I0129 10:48:31.761869 2392 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 10:48:31.767075 kubelet[2392]: I0129 10:48:31.767018 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-hostproc\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.767332 kubelet[2392]: I0129 10:48:31.767285 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-bpf-maps\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.767501 kubelet[2392]: I0129 10:48:31.767464 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cni-path\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.767652 kubelet[2392]: I0129 10:48:31.767614 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-xtables-lock\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.767785 kubelet[2392]: I0129 10:48:31.767762 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-hubble-tls\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.767944 kubelet[2392]: I0129 10:48:31.767909 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87294608-8bb0-4e0a-94c0-d01a965d6232-xtables-lock\") pod \"kube-proxy-ktdwn\" (UID: \"87294608-8bb0-4e0a-94c0-d01a965d6232\") " pod="kube-system/kube-proxy-ktdwn" Jan 29 10:48:31.768153 kubelet[2392]: I0129 10:48:31.768104 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tzw\" (UniqueName: \"kubernetes.io/projected/87294608-8bb0-4e0a-94c0-d01a965d6232-kube-api-access-p2tzw\") pod \"kube-proxy-ktdwn\" (UID: \"87294608-8bb0-4e0a-94c0-d01a965d6232\") " pod="kube-system/kube-proxy-ktdwn" Jan 29 10:48:31.768322 kubelet[2392]: I0129 10:48:31.768280 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-run\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.768506 kubelet[2392]: I0129 10:48:31.768462 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-net\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.768686 kubelet[2392]: I0129 
10:48:31.768644 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-kernel\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.768875 kubelet[2392]: I0129 10:48:31.768836 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87294608-8bb0-4e0a-94c0-d01a965d6232-kube-proxy\") pod \"kube-proxy-ktdwn\" (UID: \"87294608-8bb0-4e0a-94c0-d01a965d6232\") " pod="kube-system/kube-proxy-ktdwn" Jan 29 10:48:31.769107 kubelet[2392]: I0129 10:48:31.769044 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e467bc2e-5fa1-4001-82ab-7225d63627b3-clustermesh-secrets\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.769262 kubelet[2392]: I0129 10:48:31.769180 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-etc-cni-netd\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.769262 kubelet[2392]: I0129 10:48:31.769227 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-lib-modules\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.769539 kubelet[2392]: I0129 10:48:31.769393 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-config-path\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.769539 kubelet[2392]: I0129 10:48:31.769458 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhx8\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-kube-api-access-cwhx8\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.769539 kubelet[2392]: I0129 10:48:31.769500 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87294608-8bb0-4e0a-94c0-d01a965d6232-lib-modules\") pod \"kube-proxy-ktdwn\" (UID: \"87294608-8bb0-4e0a-94c0-d01a965d6232\") " pod="kube-system/kube-proxy-ktdwn" Jan 29 10:48:31.769845 kubelet[2392]: I0129 10:48:31.769755 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-cgroup\") pod \"cilium-rschd\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " pod="kube-system/cilium-rschd" Jan 29 10:48:31.770393 systemd[1]: Created slice kubepods-burstable-pode467bc2e_5fa1_4001_82ab_7225d63627b3.slice - libcontainer container 
kubepods-burstable-pode467bc2e_5fa1_4001_82ab_7225d63627b3.slice. Jan 29 10:48:32.065103 containerd[1939]: time="2025-01-29T10:48:32.064923936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktdwn,Uid:87294608-8bb0-4e0a-94c0-d01a965d6232,Namespace:kube-system,Attempt:0,}" Jan 29 10:48:32.082035 containerd[1939]: time="2025-01-29T10:48:32.081823716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rschd,Uid:e467bc2e-5fa1-4001-82ab-7225d63627b3,Namespace:kube-system,Attempt:0,}" Jan 29 10:48:32.616057 containerd[1939]: time="2025-01-29T10:48:32.615580203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:48:32.619783 containerd[1939]: time="2025-01-29T10:48:32.619549023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 10:48:32.621379 containerd[1939]: time="2025-01-29T10:48:32.621285099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:48:32.623620 containerd[1939]: time="2025-01-29T10:48:32.622911015Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:48:32.628280 containerd[1939]: time="2025-01-29T10:48:32.628227363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:48:32.634538 containerd[1939]: time="2025-01-29T10:48:32.634466175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:48:32.638761 containerd[1939]: time="2025-01-29T10:48:32.638694915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.015987ms" Jan 29 10:48:32.640616 containerd[1939]: time="2025-01-29T10:48:32.640545291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.596211ms" Jan 29 10:48:32.731163 kubelet[2392]: E0129 10:48:32.731099 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:32.819030 containerd[1939]: time="2025-01-29T10:48:32.818755300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:48:32.819830 containerd[1939]: time="2025-01-29T10:48:32.819647260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:48:32.820113 containerd[1939]: time="2025-01-29T10:48:32.819944176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:32.820712 containerd[1939]: time="2025-01-29T10:48:32.820486024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:32.823796 containerd[1939]: time="2025-01-29T10:48:32.822008104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:48:32.823796 containerd[1939]: time="2025-01-29T10:48:32.822131632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:48:32.823796 containerd[1939]: time="2025-01-29T10:48:32.822169216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:32.823796 containerd[1939]: time="2025-01-29T10:48:32.822321412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:32.895209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954363267.mount: Deactivated successfully. Jan 29 10:48:33.071729 systemd[1]: run-containerd-runc-k8s.io-ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde-runc.iGehJE.mount: Deactivated successfully. Jan 29 10:48:33.085305 systemd[1]: Started cri-containerd-d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22.scope - libcontainer container d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22. Jan 29 10:48:33.100584 systemd[1]: Started cri-containerd-ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde.scope - libcontainer container ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde. 
Jan 29 10:48:33.143155 containerd[1939]: time="2025-01-29T10:48:33.143072546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rschd,Uid:e467bc2e-5fa1-4001-82ab-7225d63627b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\"" Jan 29 10:48:33.152421 containerd[1939]: time="2025-01-29T10:48:33.150404738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 10:48:33.168111 containerd[1939]: time="2025-01-29T10:48:33.168043802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktdwn,Uid:87294608-8bb0-4e0a-94c0-d01a965d6232,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde\"" Jan 29 10:48:33.731911 kubelet[2392]: E0129 10:48:33.731860 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:34.733187 kubelet[2392]: E0129 10:48:34.733059 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:35.733853 kubelet[2392]: E0129 10:48:35.733803 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:36.735189 kubelet[2392]: E0129 10:48:36.735124 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:37.737300 kubelet[2392]: E0129 10:48:37.737094 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:38.188307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696200861.mount: Deactivated successfully. 
Jan 29 10:48:38.738839 kubelet[2392]: E0129 10:48:38.738795 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:39.740430 kubelet[2392]: E0129 10:48:39.740292 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:40.611923 containerd[1939]: time="2025-01-29T10:48:40.609821903Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:48:40.616063 containerd[1939]: time="2025-01-29T10:48:40.615967967Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 10:48:40.620385 containerd[1939]: time="2025-01-29T10:48:40.619294283Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:48:40.622846 containerd[1939]: time="2025-01-29T10:48:40.622260647Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.471792513s" Jan 29 10:48:40.622846 containerd[1939]: time="2025-01-29T10:48:40.622324319Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 10:48:40.627205 containerd[1939]: time="2025-01-29T10:48:40.627133751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 10:48:40.631850 containerd[1939]: time="2025-01-29T10:48:40.631067267Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 10:48:40.696459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932541987.mount: Deactivated successfully. Jan 29 10:48:40.741199 kubelet[2392]: E0129 10:48:40.741143 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:40.761812 containerd[1939]: time="2025-01-29T10:48:40.761590272Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\"" Jan 29 10:48:40.762941 containerd[1939]: time="2025-01-29T10:48:40.762778620Z" level=info msg="StartContainer for \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\"" Jan 29 10:48:40.838297 systemd[1]: Started cri-containerd-3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998.scope - libcontainer container 3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998. 
Jan 29 10:48:40.899384 containerd[1939]: time="2025-01-29T10:48:40.899202132Z" level=info msg="StartContainer for \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\" returns successfully" Jan 29 10:48:40.912718 systemd[1]: cri-containerd-3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998.scope: Deactivated successfully. Jan 29 10:48:41.686776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998-rootfs.mount: Deactivated successfully. Jan 29 10:48:41.741636 kubelet[2392]: E0129 10:48:41.741563 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:42.084294 containerd[1939]: time="2025-01-29T10:48:42.084207322Z" level=info msg="shim disconnected" id=3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998 namespace=k8s.io Jan 29 10:48:42.084294 containerd[1939]: time="2025-01-29T10:48:42.084290326Z" level=warning msg="cleaning up after shim disconnected" id=3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998 namespace=k8s.io Jan 29 10:48:42.084930 containerd[1939]: time="2025-01-29T10:48:42.084312166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:48:42.742429 kubelet[2392]: E0129 10:48:42.742353 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:42.939573 containerd[1939]: time="2025-01-29T10:48:42.939500414Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 10:48:42.974424 containerd[1939]: time="2025-01-29T10:48:42.974347191Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\"" Jan 29 10:48:42.975200 containerd[1939]: time="2025-01-29T10:48:42.975148455Z" level=info msg="StartContainer for \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\"" Jan 29 10:48:43.036691 systemd[1]: run-containerd-runc-k8s.io-ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc-runc.g7a2Uu.mount: Deactivated successfully. Jan 29 10:48:43.050646 systemd[1]: Started cri-containerd-ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc.scope - libcontainer container ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc. Jan 29 10:48:43.106709 containerd[1939]: time="2025-01-29T10:48:43.106643747Z" level=info msg="StartContainer for \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\" returns successfully" Jan 29 10:48:43.142436 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:48:43.143822 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:48:43.143942 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:48:43.156430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:48:43.156880 systemd[1]: cri-containerd-ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc.scope: Deactivated successfully. Jan 29 10:48:43.208197 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 10:48:43.464842 containerd[1939]: time="2025-01-29T10:48:43.463501441Z" level=info msg="shim disconnected" id=ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc namespace=k8s.io Jan 29 10:48:43.464842 containerd[1939]: time="2025-01-29T10:48:43.463912969Z" level=warning msg="cleaning up after shim disconnected" id=ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc namespace=k8s.io Jan 29 10:48:43.464842 containerd[1939]: time="2025-01-29T10:48:43.463939249Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:48:43.501347 containerd[1939]: time="2025-01-29T10:48:43.501274105Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:48:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 10:48:43.743115 kubelet[2392]: E0129 10:48:43.742951 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:43.788705 containerd[1939]: time="2025-01-29T10:48:43.787140783Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117" Jan 29 10:48:43.788705 containerd[1939]: time="2025-01-29T10:48:43.788596623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:48:43.791183 containerd[1939]: time="2025-01-29T10:48:43.791115387Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:48:43.792539 containerd[1939]: time="2025-01-29T10:48:43.792443571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:48:43.794125 containerd[1939]: time="2025-01-29T10:48:43.794065263Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 3.166869484s" Jan 29 10:48:43.795158 containerd[1939]: time="2025-01-29T10:48:43.794131659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 10:48:43.798273 containerd[1939]: time="2025-01-29T10:48:43.798053595Z" level=info msg="CreateContainer within sandbox \"ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 10:48:43.835699 containerd[1939]: time="2025-01-29T10:48:43.835570155Z" level=info msg="CreateContainer within sandbox \"ca7d1bfdbb6fcf66238e537fe6528a4f64fac6258fd4970880b6313ba1bb7bde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"745699a6bfc6fce32fd7f8a17aeb4659e62610f1130b334e14178675bf020a62\"" Jan 29 10:48:43.836721 containerd[1939]: time="2025-01-29T10:48:43.836645871Z" level=info msg="StartContainer for \"745699a6bfc6fce32fd7f8a17aeb4659e62610f1130b334e14178675bf020a62\"" Jan 29 10:48:43.886313 systemd[1]: Started 
cri-containerd-745699a6bfc6fce32fd7f8a17aeb4659e62610f1130b334e14178675bf020a62.scope - libcontainer container 745699a6bfc6fce32fd7f8a17aeb4659e62610f1130b334e14178675bf020a62. Jan 29 10:48:43.950330 containerd[1939]: time="2025-01-29T10:48:43.950049987Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 10:48:43.960220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc-rootfs.mount: Deactivated successfully. Jan 29 10:48:43.960404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515621538.mount: Deactivated successfully. Jan 29 10:48:43.970793 containerd[1939]: time="2025-01-29T10:48:43.970709199Z" level=info msg="StartContainer for \"745699a6bfc6fce32fd7f8a17aeb4659e62610f1130b334e14178675bf020a62\" returns successfully" Jan 29 10:48:43.994925 containerd[1939]: time="2025-01-29T10:48:43.994615300Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\"" Jan 29 10:48:43.997011 containerd[1939]: time="2025-01-29T10:48:43.995392912Z" level=info msg="StartContainer for \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\"" Jan 29 10:48:44.065876 systemd[1]: Started cri-containerd-b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681.scope - libcontainer container b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681. Jan 29 10:48:44.138557 containerd[1939]: time="2025-01-29T10:48:44.138507720Z" level=info msg="StartContainer for \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\" returns successfully" Jan 29 10:48:44.140963 systemd[1]: cri-containerd-b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681.scope: Deactivated successfully. Jan 29 10:48:44.190239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681-rootfs.mount: Deactivated successfully. 
Jan 29 10:48:44.390512 containerd[1939]: time="2025-01-29T10:48:44.390414494Z" level=info msg="shim disconnected" id=b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681 namespace=k8s.io Jan 29 10:48:44.390512 containerd[1939]: time="2025-01-29T10:48:44.390502382Z" level=warning msg="cleaning up after shim disconnected" id=b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681 namespace=k8s.io Jan 29 10:48:44.390882 containerd[1939]: time="2025-01-29T10:48:44.390523790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:48:44.743777 kubelet[2392]: E0129 10:48:44.743624 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:44.963625 containerd[1939]: time="2025-01-29T10:48:44.963350692Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 10:48:44.988727 containerd[1939]: time="2025-01-29T10:48:44.988655441Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\"" Jan 29 10:48:44.991036 containerd[1939]: time="2025-01-29T10:48:44.989925905Z" level=info msg="StartContainer for \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\"" Jan 29 10:48:45.043302 systemd[1]: Started cri-containerd-ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679.scope - libcontainer container ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679. Jan 29 10:48:45.083068 systemd[1]: cri-containerd-ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679.scope: Deactivated successfully. Jan 29 10:48:45.087722 containerd[1939]: time="2025-01-29T10:48:45.087670645Z" level=info msg="StartContainer for \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\" returns successfully" Jan 29 10:48:45.124601 containerd[1939]: time="2025-01-29T10:48:45.124519513Z" level=info msg="shim disconnected" id=ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679 namespace=k8s.io Jan 29 10:48:45.124601 containerd[1939]: time="2025-01-29T10:48:45.124593157Z" level=warning msg="cleaning up after shim disconnected" id=ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679 namespace=k8s.io Jan 29 10:48:45.126259 containerd[1939]: time="2025-01-29T10:48:45.124614241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:48:45.744426 kubelet[2392]: E0129 10:48:45.744357 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:45.978809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679-rootfs.mount: Deactivated successfully. 
Jan 29 10:48:45.979893 containerd[1939]: time="2025-01-29T10:48:45.979822793Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 10:48:46.001598 kubelet[2392]: I0129 10:48:46.000453 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktdwn" podStartSLOduration=5.375102053 podStartE2EDuration="16.000422162s" podCreationTimestamp="2025-01-29 10:48:30 +0000 UTC" firstStartedPulling="2025-01-29 10:48:33.170291078 +0000 UTC m=+4.524861527" lastFinishedPulling="2025-01-29 10:48:43.795611187 +0000 UTC m=+15.150181636" observedRunningTime="2025-01-29 10:48:45.004143349 +0000 UTC m=+16.358713822" watchObservedRunningTime="2025-01-29 10:48:46.000422162 +0000 UTC m=+17.354992611" Jan 29 10:48:46.007202 containerd[1939]: time="2025-01-29T10:48:46.007026410Z" level=info msg="CreateContainer within sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\"" Jan 29 10:48:46.008276 containerd[1939]: time="2025-01-29T10:48:46.008122730Z" level=info msg="StartContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\"" Jan 29 10:48:46.064316 systemd[1]: Started cri-containerd-7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2.scope - libcontainer container 7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2. Jan 29 10:48:46.120258 containerd[1939]: time="2025-01-29T10:48:46.120117722Z" level=info msg="StartContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" returns successfully" Jan 29 10:48:46.255900 kubelet[2392]: I0129 10:48:46.255066 2392 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 10:48:46.745508 kubelet[2392]: E0129 10:48:46.745441 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:46.969033 kernel: Initializing XFRM netlink socket Jan 29 10:48:47.016433 kubelet[2392]: I0129 10:48:47.015490 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rschd" podStartSLOduration=9.538194094 podStartE2EDuration="17.015467019s" podCreationTimestamp="2025-01-29 10:48:30 +0000 UTC" firstStartedPulling="2025-01-29 10:48:33.147966158 +0000 UTC m=+4.502536607" lastFinishedPulling="2025-01-29 10:48:40.625239083 +0000 UTC m=+11.979809532" observedRunningTime="2025-01-29 10:48:47.013860579 +0000 UTC m=+18.368431064" watchObservedRunningTime="2025-01-29 10:48:47.015467019 +0000 UTC m=+18.370037480" Jan 29 10:48:47.746263 kubelet[2392]: E0129 10:48:47.746200 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:47.887526 systemd[1]: Created slice kubepods-besteffort-pod4b7aceaa_790c_4c9e_8bd0_3f4bc1bdd53b.slice - libcontainer container kubepods-besteffort-pod4b7aceaa_790c_4c9e_8bd0_3f4bc1bdd53b.slice. 
Jan 29 10:48:48.033073 kubelet[2392]: I0129 10:48:48.032885 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzxqv\" (UniqueName: \"kubernetes.io/projected/4b7aceaa-790c-4c9e-8bd0-3f4bc1bdd53b-kube-api-access-tzxqv\") pod \"nginx-deployment-8587fbcb89-2t2k6\" (UID: \"4b7aceaa-790c-4c9e-8bd0-3f4bc1bdd53b\") " pod="default/nginx-deployment-8587fbcb89-2t2k6" Jan 29 10:48:48.193529 containerd[1939]: time="2025-01-29T10:48:48.193357204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2t2k6,Uid:4b7aceaa-790c-4c9e-8bd0-3f4bc1bdd53b,Namespace:default,Attempt:0,}" Jan 29 10:48:48.747525 kubelet[2392]: E0129 10:48:48.747209 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:48.783287 systemd-networkd[1780]: cilium_host: Link UP Jan 29 10:48:48.783576 systemd-networkd[1780]: cilium_net: Link UP Jan 29 10:48:48.783881 systemd-networkd[1780]: cilium_net: Gained carrier Jan 29 10:48:48.788364 systemd-networkd[1780]: cilium_host: Gained carrier Jan 29 10:48:48.791530 (udev-worker)[3109]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:48.792611 (udev-worker)[2743]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:48.946117 (udev-worker)[3119]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:48:48.955259 systemd-networkd[1780]: cilium_vxlan: Link UP Jan 29 10:48:48.955274 systemd-networkd[1780]: cilium_vxlan: Gained carrier Jan 29 10:48:49.026392 systemd-networkd[1780]: cilium_host: Gained IPv6LL Jan 29 10:48:49.426029 kernel: NET: Registered PF_ALG protocol family Jan 29 10:48:49.726359 kubelet[2392]: E0129 10:48:49.726197 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:49.748272 kubelet[2392]: E0129 10:48:49.748188 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:49.772286 systemd-networkd[1780]: cilium_net: Gained IPv6LL Jan 29 10:48:50.660328 systemd-networkd[1780]: lxc_health: Link UP Jan 29 10:48:50.670061 systemd-networkd[1780]: lxc_health: Gained carrier Jan 29 10:48:50.730679 systemd-networkd[1780]: cilium_vxlan: Gained IPv6LL Jan 29 10:48:50.749017 kubelet[2392]: E0129 10:48:50.748418 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:51.258427 systemd-networkd[1780]: lxcc3c25e7438b3: Link UP Jan 29 10:48:51.271242 kernel: eth0: renamed from tmp66c61 Jan 29 10:48:51.280065 systemd-networkd[1780]: lxcc3c25e7438b3: Gained carrier Jan 29 10:48:51.749350 kubelet[2392]: E0129 10:48:51.749274 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:52.074377 systemd-networkd[1780]: lxc_health: Gained IPv6LL Jan 29 10:48:52.458668 systemd-networkd[1780]: lxcc3c25e7438b3: Gained IPv6LL Jan 29 10:48:52.557229 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 10:48:52.751038 kubelet[2392]: E0129 10:48:52.750174 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:53.750814 kubelet[2392]: E0129 10:48:53.750744 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:54.671012 ntpd[1914]: Listen normally on 8 cilium_host 192.168.1.95:123 Jan 29 10:48:54.671137 ntpd[1914]: Listen normally on 9 cilium_net [fe80::2ca8:3aff:fe19:326e%3]:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 8 cilium_host 192.168.1.95:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 9 cilium_net [fe80::2ca8:3aff:fe19:326e%3]:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 10 cilium_host [fe80::a459:e1ff:fe6d:db11%4]:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::3803:3dff:fec9:60fe%5]:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 12 lxc_health [fe80::f423:94ff:fed9:3bf9%7]:123 Jan 29 10:48:54.671890 ntpd[1914]: 29 Jan 10:48:54 ntpd[1914]: Listen normally on 13 lxcc3c25e7438b3 [fe80::ac2a:92ff:feee:d3e5%9]:123 Jan 29 10:48:54.671218 ntpd[1914]: Listen normally on 10 cilium_host [fe80::a459:e1ff:fe6d:db11%4]:123 Jan 29 10:48:54.671288 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::3803:3dff:fec9:60fe%5]:123 Jan 29 10:48:54.671351 ntpd[1914]: Listen normally on 12 lxc_health [fe80::f423:94ff:fed9:3bf9%7]:123 Jan 29 10:48:54.671419 ntpd[1914]: Listen normally on 13 lxcc3c25e7438b3 [fe80::ac2a:92ff:feee:d3e5%9]:123 Jan 29 10:48:54.751682 kubelet[2392]: E0129 10:48:54.751610 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:55.752819 kubelet[2392]: E0129 10:48:55.752736 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:56.753062 kubelet[2392]: E0129 10:48:56.752914 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:57.753643 kubelet[2392]: E0129 10:48:57.753571 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:58.754477 kubelet[2392]: E0129 10:48:58.754405 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:48:59.244938 containerd[1939]: time="2025-01-29T10:48:59.244332747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:48:59.244938 containerd[1939]: time="2025-01-29T10:48:59.244465011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:48:59.244938 containerd[1939]: time="2025-01-29T10:48:59.244500651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:59.244938 containerd[1939]: time="2025-01-29T10:48:59.244696335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:48:59.278286 systemd[1]: Started cri-containerd-66c6161e8ea610959b6938fc9d70a73d609e40fd1a3630c7bd9042087a5ffc85.scope - libcontainer container 66c6161e8ea610959b6938fc9d70a73d609e40fd1a3630c7bd9042087a5ffc85. Jan 29 10:48:59.336629 containerd[1939]: time="2025-01-29T10:48:59.336461428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2t2k6,Uid:4b7aceaa-790c-4c9e-8bd0-3f4bc1bdd53b,Namespace:default,Attempt:0,} returns sandbox id \"66c6161e8ea610959b6938fc9d70a73d609e40fd1a3630c7bd9042087a5ffc85\"" Jan 29 10:48:59.341074 containerd[1939]: time="2025-01-29T10:48:59.340918732Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 10:48:59.502936 kubelet[2392]: I0129 10:48:59.501681 2392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:48:59.755097 kubelet[2392]: E0129 10:48:59.754950 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:00.756920 kubelet[2392]: E0129 10:49:00.756757 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:01.757526 kubelet[2392]: E0129 10:49:01.757414 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:02.348441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208276863.mount: Deactivated successfully. Jan 29 10:49:02.758417 kubelet[2392]: E0129 10:49:02.758296 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:03.737896 containerd[1939]: time="2025-01-29T10:49:03.737375494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:03.739296 containerd[1939]: time="2025-01-29T10:49:03.739201666Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 10:49:03.740358 containerd[1939]: time="2025-01-29T10:49:03.740289166Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:03.745960 containerd[1939]: time="2025-01-29T10:49:03.745880962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:03.748077 containerd[1939]: time="2025-01-29T10:49:03.747867802Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 4.40675227s" Jan 29 10:49:03.748077 containerd[1939]: time="2025-01-29T10:49:03.747919066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 10:49:03.752731 containerd[1939]: time="2025-01-29T10:49:03.752661526Z" level=info msg="CreateContainer within sandbox \"66c6161e8ea610959b6938fc9d70a73d609e40fd1a3630c7bd9042087a5ffc85\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 10:49:03.760204 kubelet[2392]: E0129 10:49:03.760140 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:03.777128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1714293126.mount: Deactivated successfully. Jan 29 10:49:03.782245 containerd[1939]: time="2025-01-29T10:49:03.782168518Z" level=info msg="CreateContainer within sandbox \"66c6161e8ea610959b6938fc9d70a73d609e40fd1a3630c7bd9042087a5ffc85\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"03245e0ec128bcf9758237579a1ec69c69a1017f9eb692a4192c620b1da742ef\"" Jan 29 10:49:03.783569 containerd[1939]: time="2025-01-29T10:49:03.783081742Z" level=info msg="StartContainer for \"03245e0ec128bcf9758237579a1ec69c69a1017f9eb692a4192c620b1da742ef\"" Jan 29 10:49:03.838299 systemd[1]: Started cri-containerd-03245e0ec128bcf9758237579a1ec69c69a1017f9eb692a4192c620b1da742ef.scope - libcontainer container 03245e0ec128bcf9758237579a1ec69c69a1017f9eb692a4192c620b1da742ef. Jan 29 10:49:03.882928 containerd[1939]: time="2025-01-29T10:49:03.882856162Z" level=info msg="StartContainer for \"03245e0ec128bcf9758237579a1ec69c69a1017f9eb692a4192c620b1da742ef\" returns successfully" Jan 29 10:49:04.053461 kubelet[2392]: I0129 10:49:04.053361 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-2t2k6" podStartSLOduration=12.643102253 podStartE2EDuration="17.053339347s" podCreationTimestamp="2025-01-29 10:48:47 +0000 UTC" firstStartedPulling="2025-01-29 10:48:59.34000402 +0000 UTC m=+30.694574469" lastFinishedPulling="2025-01-29 10:49:03.750241126 +0000 UTC m=+35.104811563" observedRunningTime="2025-01-29 10:49:04.053038603 +0000 UTC m=+35.407609064" watchObservedRunningTime="2025-01-29 10:49:04.053339347 +0000 UTC m=+35.407909784" Jan 29 10:49:04.761275 kubelet[2392]: E0129 10:49:04.761212 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:05.761618 kubelet[2392]: E0129 10:49:05.761556 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:06.761899 kubelet[2392]: E0129 10:49:06.761837 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:07.414856 update_engine[1919]: I20250129 10:49:07.414758 1919 update_attempter.cc:509] Updating boot flags... Jan 29 10:49:07.494019 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3603) Jan 29 10:49:07.704562 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3606) Jan 29 10:49:07.762580 kubelet[2392]: E0129 10:49:07.762533 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:07.990167 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3606) Jan 29 10:49:08.337131 systemd[1]: Created slice kubepods-besteffort-pod063a1d3b_9d11_4257_8f19_bfaae2f47ce0.slice - libcontainer container kubepods-besteffort-pod063a1d3b_9d11_4257_8f19_bfaae2f47ce0.slice. 
Jan 29 10:49:08.467930 kubelet[2392]: I0129 10:49:08.467797 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p674l\" (UniqueName: \"kubernetes.io/projected/063a1d3b-9d11-4257-8f19-bfaae2f47ce0-kube-api-access-p674l\") pod \"nfs-server-provisioner-0\" (UID: \"063a1d3b-9d11-4257-8f19-bfaae2f47ce0\") " pod="default/nfs-server-provisioner-0" Jan 29 10:49:08.467930 kubelet[2392]: I0129 10:49:08.467867 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/063a1d3b-9d11-4257-8f19-bfaae2f47ce0-data\") pod \"nfs-server-provisioner-0\" (UID: \"063a1d3b-9d11-4257-8f19-bfaae2f47ce0\") " pod="default/nfs-server-provisioner-0" Jan 29 10:49:08.642354 containerd[1939]: time="2025-01-29T10:49:08.642202946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:063a1d3b-9d11-4257-8f19-bfaae2f47ce0,Namespace:default,Attempt:0,}" Jan 29 10:49:08.695206 systemd-networkd[1780]: lxccb2a3a5fbd59: Link UP Jan 29 10:49:08.709113 kernel: eth0: renamed from tmp781d9 Jan 29 10:49:08.713332 (udev-worker)[3594]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:49:08.713540 systemd-networkd[1780]: lxccb2a3a5fbd59: Gained carrier Jan 29 10:49:08.763855 kubelet[2392]: E0129 10:49:08.763765 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:09.042535 containerd[1939]: time="2025-01-29T10:49:09.042043620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:09.042535 containerd[1939]: time="2025-01-29T10:49:09.042149244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:09.042535 containerd[1939]: time="2025-01-29T10:49:09.042178500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:09.042535 containerd[1939]: time="2025-01-29T10:49:09.042329136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:09.088319 systemd[1]: Started cri-containerd-781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789.scope - libcontainer container 781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789. Jan 29 10:49:09.145972 containerd[1939]: time="2025-01-29T10:49:09.145911325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:063a1d3b-9d11-4257-8f19-bfaae2f47ce0,Namespace:default,Attempt:0,} returns sandbox id \"781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789\"" Jan 29 10:49:09.149892 containerd[1939]: time="2025-01-29T10:49:09.149580025Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 10:49:09.588121 systemd[1]: run-containerd-runc-k8s.io-781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789-runc.EolTeB.mount: Deactivated successfully. 
Jan 29 10:49:09.725493 kubelet[2392]: E0129 10:49:09.725425 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:09.766122 kubelet[2392]: E0129 10:49:09.766060 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:10.571160 systemd-networkd[1780]: lxccb2a3a5fbd59: Gained IPv6LL Jan 29 10:49:10.766616 kubelet[2392]: E0129 10:49:10.766490 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:11.701480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678825471.mount: Deactivated successfully. Jan 29 10:49:11.767376 kubelet[2392]: E0129 10:49:11.767322 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:12.671091 ntpd[1914]: Listen normally on 14 lxccb2a3a5fbd59 [fe80::f8de:6bff:feba:bce1%11]:123 Jan 29 10:49:12.672189 ntpd[1914]: 29 Jan 10:49:12 ntpd[1914]: Listen normally on 14 lxccb2a3a5fbd59 [fe80::f8de:6bff:feba:bce1%11]:123 Jan 29 10:49:12.767908 kubelet[2392]: E0129 10:49:12.767848 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:13.770123 kubelet[2392]: E0129 10:49:13.770050 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:14.645322 containerd[1939]: time="2025-01-29T10:49:14.645256052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.647442 containerd[1939]: time="2025-01-29T10:49:14.647365136Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 29 10:49:14.648953 containerd[1939]: time="2025-01-29T10:49:14.648880760Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.654510 containerd[1939]: time="2025-01-29T10:49:14.654408488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:14.657029 containerd[1939]: time="2025-01-29T10:49:14.656516612Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.506879995s" Jan 29 10:49:14.657029 containerd[1939]: time="2025-01-29T10:49:14.656574896Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 10:49:14.661716 containerd[1939]: time="2025-01-29T10:49:14.661646108Z" level=info msg="CreateContainer within sandbox \"781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 
10:49:14.690505 containerd[1939]: time="2025-01-29T10:49:14.690375044Z" level=info msg="CreateContainer within sandbox \"781d971fedbd0498ff7586790797193edb9ca31969a51101068f8271c364c789\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0a804206ee0835a59f6beefb98da6ca696260e1dca85724e793b4d5f2ea171b0\"" Jan 29 10:49:14.691307 containerd[1939]: time="2025-01-29T10:49:14.691216808Z" level=info msg="StartContainer for \"0a804206ee0835a59f6beefb98da6ca696260e1dca85724e793b4d5f2ea171b0\"" Jan 29 10:49:14.742288 systemd[1]: Started cri-containerd-0a804206ee0835a59f6beefb98da6ca696260e1dca85724e793b4d5f2ea171b0.scope - libcontainer container 0a804206ee0835a59f6beefb98da6ca696260e1dca85724e793b4d5f2ea171b0. Jan 29 10:49:14.770746 kubelet[2392]: E0129 10:49:14.770327 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:14.786378 containerd[1939]: time="2025-01-29T10:49:14.786266277Z" level=info msg="StartContainer for \"0a804206ee0835a59f6beefb98da6ca696260e1dca85724e793b4d5f2ea171b0\" returns successfully" Jan 29 10:49:15.095137 kubelet[2392]: I0129 10:49:15.095051 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.585204987 podStartE2EDuration="7.095030658s" podCreationTimestamp="2025-01-29 10:49:08 +0000 UTC" firstStartedPulling="2025-01-29 10:49:09.148761709 +0000 UTC m=+40.503332158" lastFinishedPulling="2025-01-29 10:49:14.658587392 +0000 UTC m=+46.013157829" observedRunningTime="2025-01-29 10:49:15.094102974 +0000 UTC m=+46.448673447" watchObservedRunningTime="2025-01-29 10:49:15.095030658 +0000 UTC m=+46.449601143" Jan 29 10:49:15.771550 kubelet[2392]: E0129 10:49:15.771481 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:16.771700 kubelet[2392]: E0129 10:49:16.771638 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:17.772276 kubelet[2392]: E0129 10:49:17.772214 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:18.772608 kubelet[2392]: E0129 10:49:18.772523 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:19.773270 kubelet[2392]: E0129 10:49:19.773203 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:20.773454 kubelet[2392]: E0129 10:49:20.773372 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:21.774211 kubelet[2392]: E0129 10:49:21.774154 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:22.774772 kubelet[2392]: E0129 10:49:22.774709 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:23.774900 kubelet[2392]: E0129 10:49:23.774842 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:24.493143 systemd[1]: Created slice kubepods-besteffort-pod58b38019_b273_4a85_859d_1f2775d8bac6.slice - libcontainer container 
kubepods-besteffort-pod58b38019_b273_4a85_859d_1f2775d8bac6.slice. Jan 29 10:49:24.668510 kubelet[2392]: I0129 10:49:24.668380 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-242zg\" (UniqueName: \"kubernetes.io/projected/58b38019-b273-4a85-859d-1f2775d8bac6-kube-api-access-242zg\") pod \"test-pod-1\" (UID: \"58b38019-b273-4a85-859d-1f2775d8bac6\") " pod="default/test-pod-1" Jan 29 10:49:24.668510 kubelet[2392]: I0129 10:49:24.668450 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba8277ac-1e0a-4422-a95a-09d441bfa0c8\" (UniqueName: \"kubernetes.io/nfs/58b38019-b273-4a85-859d-1f2775d8bac6-pvc-ba8277ac-1e0a-4422-a95a-09d441bfa0c8\") pod \"test-pod-1\" (UID: \"58b38019-b273-4a85-859d-1f2775d8bac6\") " pod="default/test-pod-1" Jan 29 10:49:24.778646 kubelet[2392]: E0129 10:49:24.778087 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:24.808147 kernel: FS-Cache: Loaded Jan 29 10:49:24.850335 kernel: RPC: Registered named UNIX socket transport module. Jan 29 10:49:24.850468 kernel: RPC: Registered udp transport module. Jan 29 10:49:24.850510 kernel: RPC: Registered tcp transport module. Jan 29 10:49:24.852188 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 10:49:24.852244 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 10:49:25.161227 kernel: NFS: Registering the id_resolver key type Jan 29 10:49:25.161432 kernel: Key type id_resolver registered Jan 29 10:49:25.162322 kernel: Key type id_legacy registered Jan 29 10:49:25.198058 nfsidmap[4038]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 10:49:25.204035 nfsidmap[4039]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 10:49:25.399350 containerd[1939]: time="2025-01-29T10:49:25.399283241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:58b38019-b273-4a85-859d-1f2775d8bac6,Namespace:default,Attempt:0,}" Jan 29 10:49:25.471076 (udev-worker)[4027]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:49:25.475220 systemd-networkd[1780]: lxc4e25b73bf5bd: Link UP Jan 29 10:49:25.485065 kernel: eth0: renamed from tmp39606 Jan 29 10:49:25.484332 (udev-worker)[4031]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:49:25.493197 systemd-networkd[1780]: lxc4e25b73bf5bd: Gained carrier Jan 29 10:49:25.778703 kubelet[2392]: E0129 10:49:25.778511 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:25.832112 containerd[1939]: time="2025-01-29T10:49:25.831905263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:25.832112 containerd[1939]: time="2025-01-29T10:49:25.832068547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:25.833080 containerd[1939]: time="2025-01-29T10:49:25.832371511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:25.833374 containerd[1939]: time="2025-01-29T10:49:25.833211235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:25.874331 systemd[1]: Started cri-containerd-39606b62ec87f6e775a48530fae8dfcad6838a766ff714fbdcc31a68af3db4b2.scope - libcontainer container 39606b62ec87f6e775a48530fae8dfcad6838a766ff714fbdcc31a68af3db4b2. Jan 29 10:49:25.933003 containerd[1939]: time="2025-01-29T10:49:25.932897684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:58b38019-b273-4a85-859d-1f2775d8bac6,Namespace:default,Attempt:0,} returns sandbox id \"39606b62ec87f6e775a48530fae8dfcad6838a766ff714fbdcc31a68af3db4b2\"" Jan 29 10:49:25.936176 containerd[1939]: time="2025-01-29T10:49:25.936063332Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 10:49:26.228854 containerd[1939]: time="2025-01-29T10:49:26.228770093Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:26.230536 containerd[1939]: time="2025-01-29T10:49:26.230459969Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 10:49:26.237080 containerd[1939]: time="2025-01-29T10:49:26.236927525Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 300.806689ms" Jan 29 10:49:26.237080 containerd[1939]: time="2025-01-29T10:49:26.236998769Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 10:49:26.240881 containerd[1939]: time="2025-01-29T10:49:26.240822605Z" level=info msg="CreateContainer within sandbox \"39606b62ec87f6e775a48530fae8dfcad6838a766ff714fbdcc31a68af3db4b2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 10:49:26.268480 containerd[1939]: time="2025-01-29T10:49:26.268392594Z" level=info msg="CreateContainer within sandbox \"39606b62ec87f6e775a48530fae8dfcad6838a766ff714fbdcc31a68af3db4b2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9cc78f394aa3e72df3ce3c882c6347a912c1337a3cfe8d65eaf2cb2fc2eb3f7e\"" Jan 29 10:49:26.269864 containerd[1939]: time="2025-01-29T10:49:26.269616018Z" level=info msg="StartContainer for \"9cc78f394aa3e72df3ce3c882c6347a912c1337a3cfe8d65eaf2cb2fc2eb3f7e\"" Jan 29 10:49:26.322282 systemd[1]: Started cri-containerd-9cc78f394aa3e72df3ce3c882c6347a912c1337a3cfe8d65eaf2cb2fc2eb3f7e.scope - libcontainer container 9cc78f394aa3e72df3ce3c882c6347a912c1337a3cfe8d65eaf2cb2fc2eb3f7e. 
Jan 29 10:49:26.371281 containerd[1939]: time="2025-01-29T10:49:26.371103378Z" level=info msg="StartContainer for \"9cc78f394aa3e72df3ce3c882c6347a912c1337a3cfe8d65eaf2cb2fc2eb3f7e\" returns successfully" Jan 29 10:49:26.570381 systemd-networkd[1780]: lxc4e25b73bf5bd: Gained IPv6LL Jan 29 10:49:26.779646 kubelet[2392]: E0129 10:49:26.779575 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:27.118530 kubelet[2392]: I0129 10:49:27.118445 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.815646813 podStartE2EDuration="18.118422738s" podCreationTimestamp="2025-01-29 10:49:09 +0000 UTC" firstStartedPulling="2025-01-29 10:49:25.935608436 +0000 UTC m=+57.290178885" lastFinishedPulling="2025-01-29 10:49:26.238384373 +0000 UTC m=+57.592954810" observedRunningTime="2025-01-29 10:49:27.117946218 +0000 UTC m=+58.472516667" watchObservedRunningTime="2025-01-29 10:49:27.118422738 +0000 UTC m=+58.472993175" Jan 29 10:49:27.780040 kubelet[2392]: E0129 10:49:27.779949 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:28.670941 ntpd[1914]: Listen normally on 15 lxc4e25b73bf5bd [fe80::cc9b:e3ff:feee:2151%13]:123 Jan 29 10:49:28.671483 ntpd[1914]: 29 Jan 10:49:28 ntpd[1914]: Listen normally on 15 lxc4e25b73bf5bd [fe80::cc9b:e3ff:feee:2151%13]:123 Jan 29 10:49:28.780333 kubelet[2392]: E0129 10:49:28.780256 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:29.725539 kubelet[2392]: E0129 10:49:29.725467 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:29.781064 kubelet[2392]: E0129 10:49:29.781005 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:30.781553 kubelet[2392]: E0129 10:49:30.781497 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:31.782431 kubelet[2392]: E0129 10:49:31.782360 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:32.783625 kubelet[2392]: E0129 10:49:32.783551 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:33.784064 kubelet[2392]: E0129 10:49:33.784005 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:34.682094 containerd[1939]: time="2025-01-29T10:49:34.682028187Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:49:34.694242 containerd[1939]: time="2025-01-29T10:49:34.693973131Z" level=info msg="StopContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" with timeout 2 (s)" Jan 29 10:49:34.695325 containerd[1939]: time="2025-01-29T10:49:34.695155023Z" level=info msg="Stop container \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" with signal terminated" Jan 29 10:49:34.707960 systemd-networkd[1780]: lxc_health: Link 
DOWN Jan 29 10:49:34.708320 systemd-networkd[1780]: lxc_health: Lost carrier Jan 29 10:49:34.729836 systemd[1]: cri-containerd-7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2.scope: Deactivated successfully. Jan 29 10:49:34.730333 systemd[1]: cri-containerd-7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2.scope: Consumed 13.995s CPU time. Jan 29 10:49:34.766385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2-rootfs.mount: Deactivated successfully. Jan 29 10:49:34.785019 kubelet[2392]: E0129 10:49:34.784932 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:34.913781 kubelet[2392]: E0129 10:49:34.913611 2392 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 10:49:35.317100 containerd[1939]: time="2025-01-29T10:49:35.316971639Z" level=info msg="shim disconnected" id=7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2 namespace=k8s.io Jan 29 10:49:35.317100 containerd[1939]: time="2025-01-29T10:49:35.317071155Z" level=warning msg="cleaning up after shim disconnected" id=7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2 namespace=k8s.io Jan 29 10:49:35.317100 containerd[1939]: time="2025-01-29T10:49:35.317092599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:35.340493 containerd[1939]: time="2025-01-29T10:49:35.340417863Z" level=info msg="StopContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" returns successfully" Jan 29 10:49:35.341351 containerd[1939]: time="2025-01-29T10:49:35.341290323Z" level=info msg="StopPodSandbox for \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\"" Jan 29 10:49:35.341436 containerd[1939]: time="2025-01-29T10:49:35.341358039Z" level=info msg="Container to stop \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:49:35.341436 containerd[1939]: time="2025-01-29T10:49:35.341386035Z" level=info msg="Container to stop \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:49:35.341436 containerd[1939]: time="2025-01-29T10:49:35.341407023Z" level=info msg="Container to stop \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:49:35.341436 containerd[1939]: time="2025-01-29T10:49:35.341427855Z" level=info msg="Container to stop \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:49:35.341717 containerd[1939]: time="2025-01-29T10:49:35.341449623Z" level=info msg="Container to stop \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 10:49:35.345569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22-shm.mount: Deactivated successfully. Jan 29 10:49:35.354947 systemd[1]: cri-containerd-d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22.scope: Deactivated successfully. 
Jan 29 10:49:35.389817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22-rootfs.mount: Deactivated successfully. Jan 29 10:49:35.397960 containerd[1939]: time="2025-01-29T10:49:35.397868103Z" level=info msg="shim disconnected" id=d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22 namespace=k8s.io Jan 29 10:49:35.397960 containerd[1939]: time="2025-01-29T10:49:35.397950375Z" level=warning msg="cleaning up after shim disconnected" id=d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22 namespace=k8s.io Jan 29 10:49:35.397960 containerd[1939]: time="2025-01-29T10:49:35.397971507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:35.417792 containerd[1939]: time="2025-01-29T10:49:35.417698571Z" level=warning msg="cleanup warnings time=\"2025-01-29T10:49:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 10:49:35.419302 containerd[1939]: time="2025-01-29T10:49:35.419198163Z" level=info msg="TearDown network for sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" successfully" Jan 29 10:49:35.419302 containerd[1939]: time="2025-01-29T10:49:35.419241471Z" level=info msg="StopPodSandbox for \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" returns successfully" Jan 29 10:49:35.531532 kubelet[2392]: I0129 10:49:35.531421 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cni-path\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531532 kubelet[2392]: I0129 10:49:35.531487 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-kernel\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531742 kubelet[2392]: I0129 10:49:35.531602 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-etc-cni-netd\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531742 kubelet[2392]: I0129 10:49:35.531647 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-config-path\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531742 kubelet[2392]: I0129 10:49:35.531682 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-lib-modules\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531742 kubelet[2392]: I0129 10:49:35.531717 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-bpf-maps\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: 
\"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531756 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-hubble-tls\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531791 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-hostproc\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531822 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-net\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531856 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-xtables-lock\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531889 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-run\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.531960 kubelet[2392]: I0129 10:49:35.531925 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e467bc2e-5fa1-4001-82ab-7225d63627b3-clustermesh-secrets\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.532295 kubelet[2392]: I0129 10:49:35.531961 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhx8\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-kube-api-access-cwhx8\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.532295 kubelet[2392]: I0129 10:49:35.532027 2392 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-cgroup\") pod \"e467bc2e-5fa1-4001-82ab-7225d63627b3\" (UID: \"e467bc2e-5fa1-4001-82ab-7225d63627b3\") " Jan 29 10:49:35.532295 kubelet[2392]: I0129 10:49:35.531528 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.532295 kubelet[2392]: I0129 10:49:35.531560 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.532295 kubelet[2392]: I0129 10:49:35.532106 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.532561 kubelet[2392]: I0129 10:49:35.532184 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.533226 kubelet[2392]: I0129 10:49:35.532690 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.533226 kubelet[2392]: I0129 10:49:35.532748 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.533226 kubelet[2392]: I0129 10:49:35.532795 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.535348 kubelet[2392]: I0129 10:49:35.535284 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.535494 kubelet[2392]: I0129 10:49:35.535400 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.535494 kubelet[2392]: I0129 10:49:35.535473 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 10:49:35.542031 kubelet[2392]: I0129 10:49:35.541948 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:49:35.546233 kubelet[2392]: I0129 10:49:35.545876 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:49:35.546233 kubelet[2392]: I0129 10:49:35.545882 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e467bc2e-5fa1-4001-82ab-7225d63627b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:49:35.547261 systemd[1]: var-lib-kubelet-pods-e467bc2e\x2d5fa1\x2d4001\x2d82ab\x2d7225d63627b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcwhx8.mount: Deactivated successfully. Jan 29 10:49:35.547462 systemd[1]: var-lib-kubelet-pods-e467bc2e\x2d5fa1\x2d4001\x2d82ab\x2d7225d63627b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 10:49:35.547689 systemd[1]: var-lib-kubelet-pods-e467bc2e\x2d5fa1\x2d4001\x2d82ab\x2d7225d63627b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 10:49:35.549446 kubelet[2392]: I0129 10:49:35.548160 2392 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-kube-api-access-cwhx8" (OuterVolumeSpecName: "kube-api-access-cwhx8") pod "e467bc2e-5fa1-4001-82ab-7225d63627b3" (UID: "e467bc2e-5fa1-4001-82ab-7225d63627b3"). InnerVolumeSpecName "kube-api-access-cwhx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:49:35.633230 kubelet[2392]: I0129 10:49:35.633118 2392 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-hostproc\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.633399 kubelet[2392]: I0129 10:49:35.633378 2392 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-hubble-tls\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.633499 kubelet[2392]: I0129 10:49:35.633480 2392 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-xtables-lock\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.633603 kubelet[2392]: I0129 10:49:35.633578 2392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-net\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.633719 kubelet[2392]: I0129 10:49:35.633697 2392 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cwhx8\" (UniqueName: \"kubernetes.io/projected/e467bc2e-5fa1-4001-82ab-7225d63627b3-kube-api-access-cwhx8\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633806 2392 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-cgroup\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633832 2392 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-run\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633852 2392 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e467bc2e-5fa1-4001-82ab-7225d63627b3-clustermesh-secrets\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633871 2392 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-etc-cni-netd\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633890 2392 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e467bc2e-5fa1-4001-82ab-7225d63627b3-cilium-config-path\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633908 2392 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-lib-modules\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.633957 2392 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-bpf-maps\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634057 kubelet[2392]: I0129 10:49:35.634006 2392 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-cni-path\") 
on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.634442 kubelet[2392]: I0129 10:49:35.634030 2392 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e467bc2e-5fa1-4001-82ab-7225d63627b3-host-proc-sys-kernel\") on node \"172.31.28.141\" DevicePath \"\"" Jan 29 10:49:35.785731 kubelet[2392]: E0129 10:49:35.785675 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:35.881912 systemd[1]: Removed slice kubepods-burstable-pode467bc2e_5fa1_4001_82ab_7225d63627b3.slice - libcontainer container kubepods-burstable-pode467bc2e_5fa1_4001_82ab_7225d63627b3.slice. Jan 29 10:49:35.882166 systemd[1]: kubepods-burstable-pode467bc2e_5fa1_4001_82ab_7225d63627b3.slice: Consumed 14.139s CPU time. Jan 29 10:49:36.130787 kubelet[2392]: I0129 10:49:36.129081 2392 scope.go:117] "RemoveContainer" containerID="7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2" Jan 29 10:49:36.132220 containerd[1939]: time="2025-01-29T10:49:36.132167691Z" level=info msg="RemoveContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\"" Jan 29 10:49:36.139033 containerd[1939]: time="2025-01-29T10:49:36.138946443Z" level=info msg="RemoveContainer for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" returns successfully" Jan 29 10:49:36.139386 kubelet[2392]: I0129 10:49:36.139354 2392 scope.go:117] "RemoveContainer" containerID="ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679" Jan 29 10:49:36.141393 containerd[1939]: time="2025-01-29T10:49:36.141330327Z" level=info msg="RemoveContainer for \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\"" Jan 29 10:49:36.147408 containerd[1939]: time="2025-01-29T10:49:36.147351099Z" level=info msg="RemoveContainer for \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\" returns successfully" Jan 29 10:49:36.147854 kubelet[2392]: I0129 10:49:36.147691 2392 scope.go:117] "RemoveContainer" containerID="b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681" Jan 29 10:49:36.149938 containerd[1939]: time="2025-01-29T10:49:36.149769183Z" level=info msg="RemoveContainer for \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\"" Jan 29 10:49:36.154549 containerd[1939]: time="2025-01-29T10:49:36.154492119Z" level=info msg="RemoveContainer for \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\" returns successfully" Jan 29 10:49:36.155049 kubelet[2392]: I0129 10:49:36.154836 2392 scope.go:117] "RemoveContainer" containerID="ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc" Jan 29 10:49:36.158239 containerd[1939]: time="2025-01-29T10:49:36.157870143Z" level=info msg="RemoveContainer for \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\"" Jan 29 10:49:36.163016 containerd[1939]: time="2025-01-29T10:49:36.162880935Z" level=info msg="RemoveContainer for \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\" returns successfully" Jan 29 10:49:36.163424 kubelet[2392]: I0129 10:49:36.163231 2392 scope.go:117] "RemoveContainer" containerID="3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998" Jan 29 10:49:36.169933 containerd[1939]: time="2025-01-29T10:49:36.169229739Z" level=info msg="RemoveContainer for \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\"" Jan 29 10:49:36.176917 containerd[1939]: time="2025-01-29T10:49:36.176745639Z" 
level=info msg="RemoveContainer for \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\" returns successfully" Jan 29 10:49:36.177358 kubelet[2392]: I0129 10:49:36.177096 2392 scope.go:117] "RemoveContainer" containerID="7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2" Jan 29 10:49:36.178039 containerd[1939]: time="2025-01-29T10:49:36.177868299Z" level=error msg="ContainerStatus for \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\": not found" Jan 29 10:49:36.178570 kubelet[2392]: E0129 10:49:36.178276 2392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\": not found" containerID="7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2" Jan 29 10:49:36.178570 kubelet[2392]: I0129 10:49:36.178335 2392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2"} err="failed to get container status \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7134ad16214e018c90966039955a0c751c10abb2f5ab1f54593215857c9996a2\": not found" Jan 29 10:49:36.178570 kubelet[2392]: I0129 10:49:36.178445 2392 scope.go:117] "RemoveContainer" containerID="ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679" Jan 29 10:49:36.178958 containerd[1939]: time="2025-01-29T10:49:36.178782963Z" level=error msg="ContainerStatus for \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\": not found" Jan 29 10:49:36.179246 kubelet[2392]: E0129 10:49:36.179206 2392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\": not found" containerID="ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679" Jan 29 10:49:36.179313 kubelet[2392]: I0129 10:49:36.179256 2392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679"} err="failed to get container status \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae3a57daed1682ade54f286832343f553fa121858af1403dc3184488a101f679\": not found" Jan 29 10:49:36.179313 kubelet[2392]: I0129 10:49:36.179293 2392 scope.go:117] "RemoveContainer" containerID="b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681" Jan 29 10:49:36.179895 containerd[1939]: time="2025-01-29T10:49:36.179780151Z" level=error msg="ContainerStatus for \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\": not found" Jan 29 10:49:36.180299 kubelet[2392]: E0129 10:49:36.180095 2392 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\": not found" containerID="b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681" Jan 29 10:49:36.180299 kubelet[2392]: I0129 10:49:36.180140 2392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681"} err="failed to get container status \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\": rpc error: code = NotFound desc = an error occurred when try to find container \"b99084c5a792f4d9b2b8a73e7c08fa7de994a1f82de7ce96cd92dda7cd6f8681\": not found" Jan 29 10:49:36.180299 kubelet[2392]: I0129 10:49:36.180170 2392 scope.go:117] "RemoveContainer" containerID="ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc" Jan 29 10:49:36.180603 containerd[1939]: time="2025-01-29T10:49:36.180537351Z" level=error msg="ContainerStatus for \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\": not found" Jan 29 10:49:36.180876 kubelet[2392]: E0129 10:49:36.180814 2392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\": not found" containerID="ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc" Jan 29 10:49:36.180939 kubelet[2392]: I0129 10:49:36.180900 2392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc"} err="failed to get container status \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab31f41b68b605275395e7e772a46b7212839d1f93f25f2380970c96c0a07abc\": not found" Jan 29 10:49:36.181049 kubelet[2392]: I0129 10:49:36.180935 2392 scope.go:117] "RemoveContainer" containerID="3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998" Jan 29 10:49:36.181476 containerd[1939]: time="2025-01-29T10:49:36.181338051Z" level=error msg="ContainerStatus for \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\": not found" Jan 29 10:49:36.181571 kubelet[2392]: E0129 10:49:36.181547 2392 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\": not found" containerID="3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998" Jan 29 10:49:36.181633 kubelet[2392]: I0129 10:49:36.181585 2392 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998"} err="failed to get container status \"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"3c7445dab99127062969a909d210e404a558156fe53895362fe9bf6ea1b30998\": not found" Jan 29 10:49:36.787511 kubelet[2392]: E0129 10:49:36.787438 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:37.670885 ntpd[1914]: Deleting interface #12 lxc_health, fe80::f423:94ff:fed9:3bf9%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 29 10:49:37.671437 ntpd[1914]: 29 Jan 10:49:37 ntpd[1914]: Deleting interface #12 lxc_health, fe80::f423:94ff:fed9:3bf9%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 29 10:49:37.788403 kubelet[2392]: E0129 10:49:37.788339 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:37.873460 kubelet[2392]: I0129 10:49:37.873398 2392 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" path="/var/lib/kubelet/pods/e467bc2e-5fa1-4001-82ab-7225d63627b3/volumes" Jan 29 10:49:37.908211 kubelet[2392]: E0129 10:49:37.908167 2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="clean-cilium-state" Jan 29 10:49:37.908211 kubelet[2392]: E0129 10:49:37.908208 2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="cilium-agent" Jan 29 10:49:37.908503 kubelet[2392]: E0129 10:49:37.908226 2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="mount-cgroup" Jan 29 10:49:37.908503 kubelet[2392]: E0129 10:49:37.908242 2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="mount-bpf-fs" Jan 29 10:49:37.908503 kubelet[2392]: E0129 10:49:37.908257 2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="apply-sysctl-overwrites" Jan 29 10:49:37.908503 kubelet[2392]: I0129 10:49:37.908293 2392 memory_manager.go:354] "RemoveStaleState removing state" podUID="e467bc2e-5fa1-4001-82ab-7225d63627b3" containerName="cilium-agent" Jan 29 10:49:37.917950 systemd[1]: Created slice kubepods-besteffort-pod5364cd46_3bd1_4e63_b7fc_d0c4b0bef799.slice - libcontainer container kubepods-besteffort-pod5364cd46_3bd1_4e63_b7fc_d0c4b0bef799.slice. Jan 29 10:49:37.943213 kubelet[2392]: W0129 10:49:37.942969 2392 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.28.141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.28.141' and this object Jan 29 10:49:37.943213 kubelet[2392]: E0129 10:49:37.943051 2392 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.31.28.141\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.31.28.141' and this object" logger="UnhandledError" Jan 29 10:49:37.995841 systemd[1]: Created slice kubepods-burstable-pod45a34041_8fb6_4970_a2aa_e499cfb49c02.slice - libcontainer container kubepods-burstable-pod45a34041_8fb6_4970_a2aa_e499cfb49c02.slice. 
Jan 29 10:49:38.047649 kubelet[2392]: I0129 10:49:38.047496 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z47wx\" (UniqueName: \"kubernetes.io/projected/5364cd46-3bd1-4e63-b7fc-d0c4b0bef799-kube-api-access-z47wx\") pod \"cilium-operator-5d85765b45-7ft58\" (UID: \"5364cd46-3bd1-4e63-b7fc-d0c4b0bef799\") " pod="kube-system/cilium-operator-5d85765b45-7ft58" Jan 29 10:49:38.047649 kubelet[2392]: I0129 10:49:38.047563 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5364cd46-3bd1-4e63-b7fc-d0c4b0bef799-cilium-config-path\") pod \"cilium-operator-5d85765b45-7ft58\" (UID: \"5364cd46-3bd1-4e63-b7fc-d0c4b0bef799\") " pod="kube-system/cilium-operator-5d85765b45-7ft58" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.147873 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-hostproc\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.147946 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45a34041-8fb6-4970-a2aa-e499cfb49c02-clustermesh-secrets\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.148007 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-host-proc-sys-net\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.148066 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-cilium-cgroup\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.148101 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-etc-cni-netd\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148529 kubelet[2392]: I0129 10:49:38.148137 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45a34041-8fb6-4970-a2aa-e499cfb49c02-cilium-ipsec-secrets\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148173 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-lib-modules\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148206 2392 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-xtables-lock\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148242 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45a34041-8fb6-4970-a2aa-e499cfb49c02-cilium-config-path\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148299 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45a34041-8fb6-4970-a2aa-e499cfb49c02-hubble-tls\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148371 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-cilium-run\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.148932 kubelet[2392]: I0129 10:49:38.148408 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-bpf-maps\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.149263 kubelet[2392]: I0129 10:49:38.148457 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-cni-path\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.149263 kubelet[2392]: I0129 10:49:38.148500 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45a34041-8fb6-4970-a2aa-e499cfb49c02-host-proc-sys-kernel\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.149263 kubelet[2392]: I0129 10:49:38.148541 2392 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-487qz\" (UniqueName: \"kubernetes.io/projected/45a34041-8fb6-4970-a2aa-e499cfb49c02-kube-api-access-487qz\") pod \"cilium-rkn55\" (UID: \"45a34041-8fb6-4970-a2aa-e499cfb49c02\") " pod="kube-system/cilium-rkn55" Jan 29 10:49:38.788668 kubelet[2392]: E0129 10:49:38.788607 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:38.823747 containerd[1939]: time="2025-01-29T10:49:38.823662992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7ft58,Uid:5364cd46-3bd1-4e63-b7fc-d0c4b0bef799,Namespace:kube-system,Attempt:0,}" Jan 29 10:49:38.865370 containerd[1939]: time="2025-01-29T10:49:38.864928316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:38.865502 containerd[1939]: time="2025-01-29T10:49:38.865137140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:38.865502 containerd[1939]: time="2025-01-29T10:49:38.865169072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:38.865502 containerd[1939]: time="2025-01-29T10:49:38.865337468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:38.895312 systemd[1]: Started cri-containerd-7d88d92b07a81f7ad7514a84082de776d301760e3bd247472e7709d01ef43487.scope - libcontainer container 7d88d92b07a81f7ad7514a84082de776d301760e3bd247472e7709d01ef43487. Jan 29 10:49:38.907793 containerd[1939]: time="2025-01-29T10:49:38.907623320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkn55,Uid:45a34041-8fb6-4970-a2aa-e499cfb49c02,Namespace:kube-system,Attempt:0,}" Jan 29 10:49:38.951203 containerd[1939]: time="2025-01-29T10:49:38.950403981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:49:38.951203 containerd[1939]: time="2025-01-29T10:49:38.950494929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:49:38.951203 containerd[1939]: time="2025-01-29T10:49:38.950530653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:38.951203 containerd[1939]: time="2025-01-29T10:49:38.950674677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:49:38.969370 containerd[1939]: time="2025-01-29T10:49:38.969319173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7ft58,Uid:5364cd46-3bd1-4e63-b7fc-d0c4b0bef799,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d88d92b07a81f7ad7514a84082de776d301760e3bd247472e7709d01ef43487\"" Jan 29 10:49:38.973120 containerd[1939]: time="2025-01-29T10:49:38.973071921Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 10:49:38.996311 systemd[1]: Started cri-containerd-dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298.scope - libcontainer container dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298. 
Jan 29 10:49:39.034556 containerd[1939]: time="2025-01-29T10:49:39.034474157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rkn55,Uid:45a34041-8fb6-4970-a2aa-e499cfb49c02,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\"" Jan 29 10:49:39.039148 containerd[1939]: time="2025-01-29T10:49:39.038760473Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 10:49:39.068425 containerd[1939]: time="2025-01-29T10:49:39.068331941Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b\"" Jan 29 10:49:39.069542 containerd[1939]: time="2025-01-29T10:49:39.069300269Z" level=info msg="StartContainer for \"10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b\"" Jan 29 10:49:39.111306 systemd[1]: Started cri-containerd-10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b.scope - libcontainer container 10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b. Jan 29 10:49:39.158326 containerd[1939]: time="2025-01-29T10:49:39.158155878Z" level=info msg="StartContainer for \"10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b\" returns successfully" Jan 29 10:49:39.185013 systemd[1]: cri-containerd-10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b.scope: Deactivated successfully. Jan 29 10:49:39.223269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b-rootfs.mount: Deactivated successfully. Jan 29 10:49:39.242527 containerd[1939]: time="2025-01-29T10:49:39.242443878Z" level=info msg="shim disconnected" id=10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b namespace=k8s.io Jan 29 10:49:39.242527 containerd[1939]: time="2025-01-29T10:49:39.242522850Z" level=warning msg="cleaning up after shim disconnected" id=10b47a03ae3691f6df4a0323c7a8e598c2331b3ec41bd230aac3eb21d9f0289b namespace=k8s.io Jan 29 10:49:39.242527 containerd[1939]: time="2025-01-29T10:49:39.242545650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:39.789764 kubelet[2392]: E0129 10:49:39.789697 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:39.914785 kubelet[2392]: E0129 10:49:39.914722 2392 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 10:49:40.156245 containerd[1939]: time="2025-01-29T10:49:40.156105031Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 10:49:40.204816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923710528.mount: Deactivated successfully. 
Jan 29 10:49:40.207725 containerd[1939]: time="2025-01-29T10:49:40.205084903Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1\"" Jan 29 10:49:40.207725 containerd[1939]: time="2025-01-29T10:49:40.206133811Z" level=info msg="StartContainer for \"b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1\"" Jan 29 10:49:40.261291 systemd[1]: Started cri-containerd-b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1.scope - libcontainer container b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1. Jan 29 10:49:40.312491 containerd[1939]: time="2025-01-29T10:49:40.312403543Z" level=info msg="StartContainer for \"b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1\" returns successfully" Jan 29 10:49:40.326043 systemd[1]: cri-containerd-b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1.scope: Deactivated successfully. Jan 29 10:49:40.397936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1-rootfs.mount: Deactivated successfully. Jan 29 10:49:40.468544 containerd[1939]: time="2025-01-29T10:49:40.468347228Z" level=info msg="shim disconnected" id=b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1 namespace=k8s.io Jan 29 10:49:40.469396 containerd[1939]: time="2025-01-29T10:49:40.469351220Z" level=warning msg="cleaning up after shim disconnected" id=b1ca31164dcb908d9a047684a76226915644248aa6e5e52eb25529a49fcc6cf1 namespace=k8s.io Jan 29 10:49:40.469552 containerd[1939]: time="2025-01-29T10:49:40.469523840Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:40.790657 kubelet[2392]: E0129 10:49:40.790612 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:41.049605 containerd[1939]: time="2025-01-29T10:49:41.049470235Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:41.051222 containerd[1939]: time="2025-01-29T10:49:41.051159511Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 10:49:41.052707 containerd[1939]: time="2025-01-29T10:49:41.052666879Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:49:41.055544 containerd[1939]: time="2025-01-29T10:49:41.055497895Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.082210094s" Jan 29 10:49:41.056226 containerd[1939]: time="2025-01-29T10:49:41.055683439Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image 
reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 10:49:41.059850 containerd[1939]: time="2025-01-29T10:49:41.059621335Z" level=info msg="CreateContainer within sandbox \"7d88d92b07a81f7ad7514a84082de776d301760e3bd247472e7709d01ef43487\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 10:49:41.081109 containerd[1939]: time="2025-01-29T10:49:41.081051751Z" level=info msg="CreateContainer within sandbox \"7d88d92b07a81f7ad7514a84082de776d301760e3bd247472e7709d01ef43487\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a0cf9e7aaa6669896ff603a8c9ee2d64931b32d45b2b12846c3097eb32bd8adf\"" Jan 29 10:49:41.084036 containerd[1939]: time="2025-01-29T10:49:41.082305487Z" level=info msg="StartContainer for \"a0cf9e7aaa6669896ff603a8c9ee2d64931b32d45b2b12846c3097eb32bd8adf\"" Jan 29 10:49:41.127305 systemd[1]: Started cri-containerd-a0cf9e7aaa6669896ff603a8c9ee2d64931b32d45b2b12846c3097eb32bd8adf.scope - libcontainer container a0cf9e7aaa6669896ff603a8c9ee2d64931b32d45b2b12846c3097eb32bd8adf. Jan 29 10:49:41.171767 containerd[1939]: time="2025-01-29T10:49:41.171705620Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 10:49:41.189929 containerd[1939]: time="2025-01-29T10:49:41.189682496Z" level=info msg="StartContainer for \"a0cf9e7aaa6669896ff603a8c9ee2d64931b32d45b2b12846c3097eb32bd8adf\" returns successfully" Jan 29 10:49:41.219217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641572183.mount: Deactivated successfully. Jan 29 10:49:41.223338 containerd[1939]: time="2025-01-29T10:49:41.222454700Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e\"" Jan 29 10:49:41.224538 containerd[1939]: time="2025-01-29T10:49:41.224494352Z" level=info msg="StartContainer for \"e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e\"" Jan 29 10:49:41.297091 systemd[1]: Started cri-containerd-e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e.scope - libcontainer container e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e. Jan 29 10:49:41.366126 containerd[1939]: time="2025-01-29T10:49:41.365278617Z" level=info msg="StartContainer for \"e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e\" returns successfully" Jan 29 10:49:41.368258 systemd[1]: cri-containerd-e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e.scope: Deactivated successfully. 
Jan 29 10:49:41.670632 containerd[1939]: time="2025-01-29T10:49:41.670426630Z" level=info msg="shim disconnected" id=e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e namespace=k8s.io Jan 29 10:49:41.670632 containerd[1939]: time="2025-01-29T10:49:41.670520686Z" level=warning msg="cleaning up after shim disconnected" id=e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e namespace=k8s.io Jan 29 10:49:41.670632 containerd[1939]: time="2025-01-29T10:49:41.670544170Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:41.732096 kubelet[2392]: I0129 10:49:41.731646 2392 setters.go:600] "Node became not ready" node="172.31.28.141" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T10:49:41Z","lastTransitionTime":"2025-01-29T10:49:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 10:49:41.792299 kubelet[2392]: E0129 10:49:41.792233 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:42.178761 containerd[1939]: time="2025-01-29T10:49:42.178654149Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 10:49:42.196179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e78e2491bc5738b5d56413b2f88151fccc25264b8a1a35775be043badf75b64e-rootfs.mount: Deactivated successfully. Jan 29 10:49:42.200641 containerd[1939]: time="2025-01-29T10:49:42.200472153Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19\"" Jan 29 10:49:42.202842 containerd[1939]: time="2025-01-29T10:49:42.201429861Z" level=info msg="StartContainer for \"e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19\"" Jan 29 10:49:42.258305 systemd[1]: Started cri-containerd-e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19.scope - libcontainer container e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19. Jan 29 10:49:42.314792 systemd[1]: cri-containerd-e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19.scope: Deactivated successfully. Jan 29 10:49:42.317829 containerd[1939]: time="2025-01-29T10:49:42.317638101Z" level=info msg="StartContainer for \"e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19\" returns successfully" Jan 29 10:49:42.349513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19-rootfs.mount: Deactivated successfully. 
Jan 29 10:49:42.378272 containerd[1939]: time="2025-01-29T10:49:42.378154990Z" level=info msg="shim disconnected" id=e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19 namespace=k8s.io Jan 29 10:49:42.378272 containerd[1939]: time="2025-01-29T10:49:42.378269146Z" level=warning msg="cleaning up after shim disconnected" id=e740117aec81381d22103cac8e153f1e4333b14b7ae81d2df36dbef9931bcc19 namespace=k8s.io Jan 29 10:49:42.378740 containerd[1939]: time="2025-01-29T10:49:42.378291478Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:49:42.793350 kubelet[2392]: E0129 10:49:42.793275 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:43.190344 containerd[1939]: time="2025-01-29T10:49:43.189795430Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 10:49:43.217595 containerd[1939]: time="2025-01-29T10:49:43.217509898Z" level=info msg="CreateContainer within sandbox \"dd2447f7ac4b5e14eb229875a26dabc75a3e3e9fde7a6e2ce0216653b147a298\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916\"" Jan 29 10:49:43.218295 containerd[1939]: time="2025-01-29T10:49:43.218246926Z" level=info msg="StartContainer for \"c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916\"" Jan 29 10:49:43.223693 kubelet[2392]: I0129 10:49:43.222917 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7ft58" podStartSLOduration=4.137595152 podStartE2EDuration="6.222854374s" podCreationTimestamp="2025-01-29 10:49:37 +0000 UTC" firstStartedPulling="2025-01-29 10:49:38.971882913 +0000 UTC m=+70.326453350" lastFinishedPulling="2025-01-29 10:49:41.057142135 +0000 UTC m=+72.411712572" observedRunningTime="2025-01-29 10:49:42.228792369 +0000 UTC m=+73.583362818" watchObservedRunningTime="2025-01-29 10:49:43.222854374 +0000 UTC m=+74.577424835" Jan 29 10:49:43.271326 systemd[1]: Started cri-containerd-c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916.scope - libcontainer container c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916. 
Jan 29 10:49:43.331667 containerd[1939]: time="2025-01-29T10:49:43.331296130Z" level=info msg="StartContainer for \"c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916\" returns successfully" Jan 29 10:49:43.794365 kubelet[2392]: E0129 10:49:43.794297 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:44.076118 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 29 10:49:44.795181 kubelet[2392]: E0129 10:49:44.795119 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:45.796101 kubelet[2392]: E0129 10:49:45.796035 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:46.796700 kubelet[2392]: E0129 10:49:46.796632 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:47.798097 kubelet[2392]: E0129 10:49:47.797335 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:48.062148 systemd-networkd[1780]: lxc_health: Link UP Jan 29 10:49:48.069963 (udev-worker)[5182]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:49:48.071198 systemd-networkd[1780]: lxc_health: Gained carrier Jan 29 10:49:48.798467 kubelet[2392]: E0129 10:49:48.798398 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:48.943930 kubelet[2392]: I0129 10:49:48.943402 2392 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rkn55" podStartSLOduration=11.943381266 podStartE2EDuration="11.943381266s" podCreationTimestamp="2025-01-29 10:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:49:44.226712999 +0000 UTC m=+75.581283460" watchObservedRunningTime="2025-01-29 10:49:48.943381266 +0000 UTC m=+80.297951703" Jan 29 10:49:49.482429 systemd-networkd[1780]: lxc_health: Gained IPv6LL Jan 29 10:49:49.654606 systemd[1]: run-containerd-runc-k8s.io-c5f380d045e7bf7a18278b1e7cda075cfd003a149060949d5c18611aafc60916-runc.cqKnPs.mount: Deactivated successfully. 
Jan 29 10:49:49.726132 kubelet[2392]: E0129 10:49:49.726082 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:49.799101 kubelet[2392]: E0129 10:49:49.799029 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:50.799762 kubelet[2392]: E0129 10:49:50.799663 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:51.671018 ntpd[1914]: Listen normally on 16 lxc_health [fe80::70ab:e3ff:fe85:b0a2%15]:123 Jan 29 10:49:51.671604 ntpd[1914]: 29 Jan 10:49:51 ntpd[1914]: Listen normally on 16 lxc_health [fe80::70ab:e3ff:fe85:b0a2%15]:123 Jan 29 10:49:51.799909 kubelet[2392]: E0129 10:49:51.799818 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:52.800940 kubelet[2392]: E0129 10:49:52.800852 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:53.801322 kubelet[2392]: E0129 10:49:53.801252 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:54.801815 kubelet[2392]: E0129 10:49:54.801744 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:55.802609 kubelet[2392]: E0129 10:49:55.802537 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:56.803314 kubelet[2392]: E0129 10:49:56.803218 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:57.803786 kubelet[2392]: E0129 10:49:57.803710 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:58.804909 kubelet[2392]: E0129 10:49:58.804851 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:49:59.805681 kubelet[2392]: E0129 10:49:59.805625 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:00.806097 kubelet[2392]: E0129 10:50:00.806035 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:01.806828 kubelet[2392]: E0129 10:50:01.806770 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:02.807328 kubelet[2392]: E0129 10:50:02.807257 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:03.808383 kubelet[2392]: E0129 10:50:03.808320 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:04.809521 kubelet[2392]: E0129 10:50:04.809459 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:05.810226 kubelet[2392]: E0129 10:50:05.810158 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:06.811257 kubelet[2392]: E0129 10:50:06.811181 2392 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:07.811569 kubelet[2392]: E0129 10:50:07.811512 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:08.812601 kubelet[2392]: E0129 10:50:08.812540 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:09.725656 kubelet[2392]: E0129 10:50:09.725584 2392 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:09.813659 kubelet[2392]: E0129 10:50:09.813598 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:10.813814 kubelet[2392]: E0129 10:50:10.813754 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:11.814000 kubelet[2392]: E0129 10:50:11.813917 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:11.995214 kubelet[2392]: E0129 10:50:11.995134 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 10:50:12.023001 kubelet[2392]: E0129 10:50:12.022891 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T10:50:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T10:50:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T10:50:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-29T10:50:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\\\"],\\\"sizeBytes\\\":157636062},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":67680368},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\\\",\\\"registry.k8s.io/kube-proxy:v1.31.5\\\"],\\\"sizeBytes\\\":26771136},{\\\"names\\\":[\\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"],\\\"sizeBytes\\\":17128551},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"172.31.28.141\": Patch \"https://172.31.31.186:6443/api/v1/nodes/172.31.28.141/status?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Jan 29 10:50:12.815089 kubelet[2392]: E0129 10:50:12.815024 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:13.816006 kubelet[2392]: E0129 10:50:13.815924 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:14.816820 kubelet[2392]: E0129 10:50:14.816756 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:15.817501 kubelet[2392]: E0129 10:50:15.817446 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:16.818266 kubelet[2392]: E0129 10:50:16.818202 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:17.819282 kubelet[2392]: E0129 10:50:17.819216 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:18.819555 kubelet[2392]: E0129 10:50:18.819483 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:19.820040 kubelet[2392]: E0129 10:50:19.819949 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:20.820958 kubelet[2392]: E0129 10:50:20.820900 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:21.821102 kubelet[2392]: E0129 10:50:21.821042 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:21.995506 kubelet[2392]: E0129 10:50:21.995421 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 10:50:22.023997 kubelet[2392]: E0129 10:50:22.023939 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.28.141\": Get \"https://172.31.31.186:6443/api/v1/nodes/172.31.28.141?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 10:50:22.821882 kubelet[2392]: E0129 10:50:22.821813 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:23.822928 kubelet[2392]: E0129 10:50:23.822867 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:24.823952 kubelet[2392]: E0129 10:50:24.823890 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:25.824606 kubelet[2392]: E0129 10:50:25.824535 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:26.825492 kubelet[2392]: E0129 10:50:26.825431 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:27.807956 kubelet[2392]: E0129 10:50:27.807829 2392 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": unexpected EOF" Jan 29 10:50:27.816992 kubelet[2392]: E0129 10:50:27.815718 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": read tcp 172.31.28.141:57548->172.31.31.186:6443: read: connection reset by peer" Jan 29 10:50:27.816992 kubelet[2392]: E0129 10:50:27.816683 2392 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" Jan 29 10:50:27.816992 kubelet[2392]: I0129 10:50:27.816731 2392 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 10:50:27.818627 kubelet[2392]: E0129 10:50:27.818554 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" interval="200ms" Jan 29 10:50:27.825854 kubelet[2392]: E0129 10:50:27.825811 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:28.019791 kubelet[2392]: E0129 10:50:28.019709 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" interval="400ms" Jan 29 10:50:28.420680 kubelet[2392]: E0129 10:50:28.420610 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" interval="800ms" Jan 29 10:50:28.809028 kubelet[2392]: E0129 10:50:28.808881 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.28.141\": Get \"https://172.31.31.186:6443/api/v1/nodes/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Jan 29 10:50:28.809700 kubelet[2392]: E0129 10:50:28.809375 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.28.141\": Get \"https://172.31.31.186:6443/api/v1/nodes/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" Jan 29 10:50:28.810039 kubelet[2392]: E0129 10:50:28.809926 2392 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.28.141\": Get \"https://172.31.31.186:6443/api/v1/nodes/172.31.28.141?timeout=10s\": dial tcp 172.31.31.186:6443: connect: connection refused" Jan 29 10:50:28.810039 kubelet[2392]: E0129 10:50:28.810008 2392 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:50:28.827366 kubelet[2392]: E0129 10:50:28.827301 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:29.725969 kubelet[2392]: E0129 10:50:29.725905 2392 file.go:104] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:29.793664 containerd[1939]: time="2025-01-29T10:50:29.793590129Z" level=info msg="StopPodSandbox for \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\"" Jan 29 10:50:29.797002 containerd[1939]: time="2025-01-29T10:50:29.793823493Z" level=info msg="TearDown network for sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" successfully" Jan 29 10:50:29.797002 containerd[1939]: time="2025-01-29T10:50:29.793908693Z" level=info msg="StopPodSandbox for \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" returns successfully" Jan 29 10:50:29.797002 containerd[1939]: time="2025-01-29T10:50:29.795335157Z" level=info msg="RemovePodSandbox for \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\"" Jan 29 10:50:29.797002 containerd[1939]: time="2025-01-29T10:50:29.795411021Z" level=info msg="Forcibly stopping sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\"" Jan 29 10:50:29.797002 containerd[1939]: time="2025-01-29T10:50:29.795551937Z" level=info msg="TearDown network for sandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" successfully" Jan 29 10:50:29.803208 containerd[1939]: time="2025-01-29T10:50:29.803134557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:50:29.803345 containerd[1939]: time="2025-01-29T10:50:29.803225469Z" level=info msg="RemovePodSandbox \"d6dba91d3f8b1f57303a9c3480229412a0ae80980370b87deb2c0beffb20fc22\" returns successfully" Jan 29 10:50:29.828380 kubelet[2392]: E0129 10:50:29.828316 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:30.829206 kubelet[2392]: E0129 10:50:30.829133 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:31.829522 kubelet[2392]: E0129 10:50:31.829449 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:32.830645 kubelet[2392]: E0129 10:50:32.830577 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:33.831838 kubelet[2392]: E0129 10:50:33.831763 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:34.832997 kubelet[2392]: E0129 10:50:34.832917 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:35.833137 kubelet[2392]: E0129 10:50:35.833060 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:36.833595 kubelet[2392]: E0129 10:50:36.833531 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:37.455731 update_engine[1919]: I20250129 10:50:37.455651 1919 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 10:50:37.456943 update_engine[1919]: I20250129 10:50:37.456369 1919 prefs.cc:52] certificate-report-to-send-download not present in 
/var/lib/update_engine/prefs Jan 29 10:50:37.456943 update_engine[1919]: I20250129 10:50:37.456652 1919 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 10:50:37.457667 update_engine[1919]: I20250129 10:50:37.457528 1919 omaha_request_params.cc:62] Current group set to beta Jan 29 10:50:37.457756 update_engine[1919]: I20250129 10:50:37.457669 1919 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 10:50:37.457756 update_engine[1919]: I20250129 10:50:37.457690 1919 update_attempter.cc:643] Scheduling an action processor start. Jan 29 10:50:37.457756 update_engine[1919]: I20250129 10:50:37.457723 1919 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 10:50:37.457888 update_engine[1919]: I20250129 10:50:37.457782 1919 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 10:50:37.457940 update_engine[1919]: I20250129 10:50:37.457883 1919 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 10:50:37.457940 update_engine[1919]: I20250129 10:50:37.457902 1919 omaha_request_action.cc:272] Request: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: Jan 29 10:50:37.457940 update_engine[1919]: I20250129 10:50:37.457919 1919 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 10:50:37.459047 locksmithd[1959]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 10:50:37.460130 update_engine[1919]: I20250129 10:50:37.460068 1919 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 10:50:37.460625 update_engine[1919]: I20250129 10:50:37.460563 1919 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 10:50:37.539244 update_engine[1919]: E20250129 10:50:37.539162 1919 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 10:50:37.539344 update_engine[1919]: I20250129 10:50:37.539291 1919 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 10:50:37.833986 kubelet[2392]: E0129 10:50:37.833924 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:38.834610 kubelet[2392]: E0129 10:50:38.834550 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:50:39.222432 kubelet[2392]: E0129 10:50:39.222284 2392 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.186:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.28.141?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 29 10:50:39.834757 kubelet[2392]: E0129 10:50:39.834688 2392 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"