Jan 13 20:08:08.199386 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:08:08.199431 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:08:08.199455 kernel: KASLR disabled due to lack of seed
Jan 13 20:08:08.199471 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:08:08.199487 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Jan 13 20:08:08.199502 kernel: secureboot: Secure boot disabled
Jan 13 20:08:08.199519 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:08:08.199535 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:08:08.199550 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 20:08:08.199565 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:08:08.199585 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:08:08.199601 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:08:08.199616 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:08:08.199632 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:08:08.199650 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:08:08.199671 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:08:08.199688 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:08:08.199704 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:08:08.199720 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:08:08.199737 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:08:08.199753 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:08:08.199769 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:08:08.199785 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:08.199802 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:08:08.199818 kernel: Zone ranges:
Jan 13 20:08:08.199834 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:08:08.199854 kernel:   DMA32    empty
Jan 13 20:08:08.199870 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:08:08.199886 kernel: Movable zone start for each node
Jan 13 20:08:08.199902 kernel: Early memory node ranges
Jan 13 20:08:08.199918 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:08:08.199934 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:08:08.199968 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:08:08.199985 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:08:08.200002 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:08:08.200018 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:08:08.200034 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:08:08.200050 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:08:08.200072 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:08.200089 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:08:08.200112 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:08:08.200130 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:08:08.200148 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:08:08.200169 kernel: psci: Trusted OS migration not required
Jan 13 20:08:08.200186 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:08:08.200204 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:08:08.200221 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:08:08.200239 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:08:08.200285 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:08:08.200304 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:08:08.200322 kernel: CPU features: detected: Spectre-v2
Jan 13 20:08:08.200339 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:08:08.200356 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:08:08.200373 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:08:08.200390 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:08:08.200414 kernel: alternatives: applying boot alternatives
Jan 13 20:08:08.200433 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:08:08.200452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:08:08.200469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:08:08.200486 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:08:08.200504 kernel: Fallback order for Node 0: 0
Jan 13 20:08:08.200522 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 13 20:08:08.200540 kernel: Policy zone: Normal
Jan 13 20:08:08.200557 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:08:08.200574 kernel: software IO TLB: area num 2.
Jan 13 20:08:08.200597 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:08:08.200615 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Jan 13 20:08:08.200632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:08:08.200650 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:08:08.200668 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:08:08.200686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:08:08.200704 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:08:08.200722 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:08:08.200739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:08:08.200756 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:08:08.200773 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:08:08.200794 kernel: GICv3: 96 SPIs implemented
Jan 13 20:08:08.200812 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:08:08.200829 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:08:08.200846 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:08:08.200863 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:08:08.200880 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:08:08.200897 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:08:08.200915 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:08:08.200933 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:08:08.200950 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:08:08.200967 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:08:08.200985 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:08:08.201006 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:08:08.201024 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:08:08.201041 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:08:08.201058 kernel: Console: colour dummy device 80x25
Jan 13 20:08:08.201076 kernel: printk: console [tty1] enabled
Jan 13 20:08:08.201094 kernel: ACPI: Core revision 20230628
Jan 13 20:08:08.201112 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:08:08.201130 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:08:08.201147 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:08:08.201165 kernel: landlock: Up and running.
Jan 13 20:08:08.201187 kernel: SELinux: Initializing.
Jan 13 20:08:08.201205 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:08.201222 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:08.201240 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:08.201280 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:08.201301 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:08:08.201319 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:08:08.201337 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:08:08.201360 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:08:08.201378 kernel: Remapping and enabling EFI services.
Jan 13 20:08:08.201396 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:08:08.201413 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:08:08.201431 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:08:08.201449 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:08:08.201466 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:08:08.201484 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:08:08.201501 kernel: SMP: Total of 2 processors activated.
Jan 13 20:08:08.201519 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:08:08.201541 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:08:08.201559 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:08:08.201587 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:08:08.201610 kernel: alternatives: applying system-wide alternatives
Jan 13 20:08:08.201628 kernel: devtmpfs: initialized
Jan 13 20:08:08.201646 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:08:08.201665 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:08:08.201683 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:08:08.201702 kernel: SMBIOS 3.0.0 present.
Jan 13 20:08:08.201724 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:08:08.201743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:08:08.201761 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:08:08.201780 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:08:08.201798 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:08:08.201816 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:08:08.201835 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Jan 13 20:08:08.201857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:08:08.201876 kernel: cpuidle: using governor menu
Jan 13 20:08:08.201894 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:08:08.201913 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:08:08.201931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:08:08.201949 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:08:08.201967 kernel: Modules: 17440 pages in range for non-PLT usage
Jan 13 20:08:08.201986 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:08:08.202032 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:08:08.202060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:08:08.202079 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:08:08.202097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:08:08.202116 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:08:08.202134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:08:08.202153 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:08:08.202171 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:08:08.202190 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:08:08.202209 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:08:08.202231 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:08:08.202276 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:08:08.202322 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:08:08.202343 kernel: ACPI: Interpreter enabled
Jan 13 20:08:08.202362 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:08:08.202380 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:08:08.202399 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:08:08.202697 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:08:08.202921 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:08:08.203132 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:08:08.203411 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:08:08.203612 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:08:08.203637 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 20:08:08.203656 kernel: acpiphp: Slot [1] registered
Jan 13 20:08:08.203675 kernel: acpiphp: Slot [2] registered
Jan 13 20:08:08.203693 kernel: acpiphp: Slot [3] registered
Jan 13 20:08:08.203718 kernel: acpiphp: Slot [4] registered
Jan 13 20:08:08.203737 kernel: acpiphp: Slot [5] registered
Jan 13 20:08:08.203755 kernel: acpiphp: Slot [6] registered
Jan 13 20:08:08.203773 kernel: acpiphp: Slot [7] registered
Jan 13 20:08:08.203791 kernel: acpiphp: Slot [8] registered
Jan 13 20:08:08.203809 kernel: acpiphp: Slot [9] registered
Jan 13 20:08:08.203828 kernel: acpiphp: Slot [10] registered
Jan 13 20:08:08.203846 kernel: acpiphp: Slot [11] registered
Jan 13 20:08:08.203864 kernel: acpiphp: Slot [12] registered
Jan 13 20:08:08.203882 kernel: acpiphp: Slot [13] registered
Jan 13 20:08:08.203905 kernel: acpiphp: Slot [14] registered
Jan 13 20:08:08.203923 kernel: acpiphp: Slot [15] registered
Jan 13 20:08:08.203958 kernel: acpiphp: Slot [16] registered
Jan 13 20:08:08.203979 kernel: acpiphp: Slot [17] registered
Jan 13 20:08:08.203997 kernel: acpiphp: Slot [18] registered
Jan 13 20:08:08.204016 kernel: acpiphp: Slot [19] registered
Jan 13 20:08:08.204035 kernel: acpiphp: Slot [20] registered
Jan 13 20:08:08.204053 kernel: acpiphp: Slot [21] registered
Jan 13 20:08:08.204071 kernel: acpiphp: Slot [22] registered
Jan 13 20:08:08.204095 kernel: acpiphp: Slot [23] registered
Jan 13 20:08:08.204114 kernel: acpiphp: Slot [24] registered
Jan 13 20:08:08.204132 kernel: acpiphp: Slot [25] registered
Jan 13 20:08:08.204150 kernel: acpiphp: Slot [26] registered
Jan 13 20:08:08.204169 kernel: acpiphp: Slot [27] registered
Jan 13 20:08:08.204187 kernel: acpiphp: Slot [28] registered
Jan 13 20:08:08.204206 kernel: acpiphp: Slot [29] registered
Jan 13 20:08:08.204224 kernel: acpiphp: Slot [30] registered
Jan 13 20:08:08.204242 kernel: acpiphp: Slot [31] registered
Jan 13 20:08:08.204282 kernel: PCI host bridge to bus 0000:00
Jan 13 20:08:08.204505 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:08:08.204695 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:08:08.204883 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:08.205070 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:08:08.205331 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:08:08.205563 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:08:08.205785 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:08:08.206012 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:08:08.206228 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:08:08.206493 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:08.206722 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:08:08.206933 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:08:08.207142 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:08.207410 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:08:08.207619 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:08.207825 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:08.208059 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:08:08.208302 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:08:08.208515 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:08:08.208729 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:08:08.208933 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:08:08.209119 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:08:08.209329 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:08.209355 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:08:08.209375 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:08:08.209394 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:08:08.209413 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:08:08.209432 kernel: iommu: Default domain type: Translated
Jan 13 20:08:08.209457 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:08:08.209475 kernel: efivars: Registered efivars operations
Jan 13 20:08:08.209494 kernel: vgaarb: loaded
Jan 13 20:08:08.209512 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:08:08.209531 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:08:08.209549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:08:08.209592 kernel: pnp: PnP ACPI init
Jan 13 20:08:08.209830 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:08:08.209863 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:08:08.209882 kernel: NET: Registered PF_INET protocol family
Jan 13 20:08:08.209901 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:08:08.209924 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:08:08.209943 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:08:08.209962 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:08:08.209981 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:08:08.210000 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:08:08.210019 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:08.210041 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:08.210060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:08:08.210078 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:08:08.210097 kernel: kvm [1]: HYP mode not available
Jan 13 20:08:08.210115 kernel: Initialise system trusted keyrings
Jan 13 20:08:08.210134 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:08:08.210152 kernel: Key type asymmetric registered
Jan 13 20:08:08.210170 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:08:08.210189 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:08:08.210211 kernel: io scheduler mq-deadline registered
Jan 13 20:08:08.210230 kernel: io scheduler kyber registered
Jan 13 20:08:08.210336 kernel: io scheduler bfq registered
Jan 13 20:08:08.210569 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:08:08.210597 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:08:08.210616 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:08:08.210635 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:08:08.210654 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:08:08.210679 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:08:08.210699 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:08:08.210905 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:08:08.210930 kernel: printk: console [ttyS0] disabled
Jan 13 20:08:08.210949 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:08:08.210967 kernel: printk: console [ttyS0] enabled
Jan 13 20:08:08.210986 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:08:08.211005 kernel: thunder_xcv, ver 1.0
Jan 13 20:08:08.211023 kernel: thunder_bgx, ver 1.0
Jan 13 20:08:08.211041 kernel: nicpf, ver 1.0
Jan 13 20:08:08.211065 kernel: nicvf, ver 1.0
Jan 13 20:08:08.211362 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:08:08.211563 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:08:07 UTC (1736798887)
Jan 13 20:08:08.211589 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:08:08.211608 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:08:08.211627 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:08:08.211646 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:08:08.211670 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:08:08.211689 kernel: Segment Routing with IPv6
Jan 13 20:08:08.211708 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:08:08.211726 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:08:08.211745 kernel: Key type dns_resolver registered
Jan 13 20:08:08.211763 kernel: registered taskstats version 1
Jan 13 20:08:08.211781 kernel: Loading compiled-in X.509 certificates
Jan 13 20:08:08.211800 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:08:08.211818 kernel: Key type .fscrypt registered
Jan 13 20:08:08.211836 kernel: Key type fscrypt-provisioning registered
Jan 13 20:08:08.211859 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:08:08.211878 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:08:08.211896 kernel: ima: No architecture policies found
Jan 13 20:08:08.211914 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:08:08.211933 kernel: clk: Disabling unused clocks
Jan 13 20:08:08.211969 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:08:08.211989 kernel: Run /init as init process
Jan 13 20:08:08.212007 kernel:   with arguments:
Jan 13 20:08:08.212026 kernel:     /init
Jan 13 20:08:08.212049 kernel:   with environment:
Jan 13 20:08:08.212068 kernel:     HOME=/
Jan 13 20:08:08.212086 kernel:     TERM=linux
Jan 13 20:08:08.212104 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:08:08.212127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:08.212151 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:08.212171 systemd[1]: Detected architecture arm64.
Jan 13 20:08:08.212195 systemd[1]: Running in initrd.
Jan 13 20:08:08.212216 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:08:08.212236 systemd[1]: Hostname set to .
Jan 13 20:08:08.212276 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:08.212298 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:08:08.212319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:08.212340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:08.212361 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:08:08.212388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:08.212409 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:08:08.212430 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:08:08.212454 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:08:08.212475 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:08:08.212495 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:08.212515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:08.212540 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:08:08.212561 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:08.212581 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:08.212601 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:08:08.212622 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:08.212642 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:08.212662 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:08:08.212683 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:08:08.212703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:08.212728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:08.212748 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:08.212769 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:08:08.212789 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:08:08.212809 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:08.212829 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:08:08.212850 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:08:08.212870 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:08.212894 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:08.212915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:08.212936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:08.212956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:08.213013 systemd-journald[252]: Collecting audit messages is disabled.
Jan 13 20:08:08.213061 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:08:08.213083 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:08.213104 systemd-journald[252]: Journal started
Jan 13 20:08:08.213144 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2f7e5f651540ab778fddad7ef2dc24) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:08.225796 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:08.225913 systemd-modules-load[253]: Inserted module 'overlay'
Jan 13 20:08:08.238711 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:08.263282 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:08:08.266292 kernel: Bridge firewalling registered
Jan 13 20:08:08.266279 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 13 20:08:08.271755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:08:08.287558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:08.288795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:08.292869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:08.307145 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:08.321816 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:08.335118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:08.348650 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:08:08.363173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:08.382071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:08.385839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:08.409639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:08.443964 dracut-cmdline[282]: dracut-dracut-053
Jan 13 20:08:08.455537 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:08:08.487526 systemd-resolved[289]: Positive Trust Anchors:
Jan 13 20:08:08.487585 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:08.487648 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:08.632291 kernel: SCSI subsystem initialized
Jan 13 20:08:08.640288 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:08:08.653298 kernel: iscsi: registered transport (tcp)
Jan 13 20:08:08.675288 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:08:08.675361 kernel: QLogic iSCSI HBA Driver
Jan 13 20:08:08.732406 kernel: random: crng init done
Jan 13 20:08:08.732530 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 13 20:08:08.737931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:08.741470 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:08.767795 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:08.776718 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:08:08.811958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:08:08.812032 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:08:08.812058 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:08:08.879322 kernel: raid6: neonx8 gen() 6606 MB/s
Jan 13 20:08:08.896294 kernel: raid6: neonx4 gen() 6445 MB/s
Jan 13 20:08:08.913293 kernel: raid6: neonx2 gen() 5364 MB/s
Jan 13 20:08:08.930303 kernel: raid6: neonx1 gen() 3928 MB/s
Jan 13 20:08:08.947297 kernel: raid6: int64x8 gen() 3798 MB/s
Jan 13 20:08:08.964320 kernel: raid6: int64x4 gen() 3678 MB/s
Jan 13 20:08:08.981316 kernel: raid6: int64x2 gen() 3531 MB/s
Jan 13 20:08:08.999097 kernel: raid6: int64x1 gen() 2748 MB/s
Jan 13 20:08:08.999175 kernel: raid6: using algorithm neonx8 gen() 6606 MB/s
Jan 13 20:08:09.017115 kernel: raid6: .... xor() 4896 MB/s, rmw enabled
Jan 13 20:08:09.017204 kernel: raid6: using neon recovery algorithm
Jan 13 20:08:09.025304 kernel: xor: measuring software checksum speed
Jan 13 20:08:09.026300 kernel: 8regs : 10151 MB/sec
Jan 13 20:08:09.028467 kernel: 32regs : 10699 MB/sec
Jan 13 20:08:09.028530 kernel: arm64_neon : 9550 MB/sec
Jan 13 20:08:09.028570 kernel: xor: using function: 32regs (10699 MB/sec)
Jan 13 20:08:09.115318 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:08:09.138423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:09.159579 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:09.193294 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 13 20:08:09.202193 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:09.224554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:08:09.267644 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jan 13 20:08:09.331244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:09.342801 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:09.455630 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:09.477604 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:08:09.527857 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:09.537050 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:09.547597 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:09.556883 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:09.573901 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:08:09.635063 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:09.647643 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:08:09.647683 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:08:09.676659 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:08:09.676909 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:08:09.677139 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8e:96:31:14:b7
Jan 13 20:08:09.679505 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:09.684076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:09.684594 (udev-worker)[538]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:09.699995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:09.711823 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:08:09.711899 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:08:09.712245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:09.712594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:09.716534 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:09.745752 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:08:09.749880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:09.758429 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:08:09.758506 kernel: GPT:9289727 != 16777215
Jan 13 20:08:09.758532 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:08:09.761556 kernel: GPT:9289727 != 16777215
Jan 13 20:08:09.761624 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:08:09.761652 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:09.786559 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:09.799705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:09.850639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:09.871403 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Jan 13 20:08:09.929284 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (525)
Jan 13 20:08:09.949048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:08:09.990271 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:08:10.029319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:10.037771 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:10.064205 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:10.077571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:08:10.106321 disk-uuid[664]: Primary Header is updated.
Jan 13 20:08:10.106321 disk-uuid[664]: Secondary Entries is updated.
Jan 13 20:08:10.106321 disk-uuid[664]: Secondary Header is updated.
Jan 13 20:08:10.117244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:10.134299 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:11.135433 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:11.136107 disk-uuid[665]: The operation has completed successfully.
Jan 13 20:08:11.329359 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:08:11.329576 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:08:11.395552 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:08:11.407513 sh[923]: Success
Jan 13 20:08:11.433337 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:08:11.554025 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:08:11.575568 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:08:11.585350 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:08:11.626283 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:08:11.626363 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:11.626391 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:08:11.629275 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:08:11.629348 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:08:11.660307 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:08:11.676323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:08:11.683302 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:08:11.697602 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:08:11.711666 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:08:11.739156 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:11.739264 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:11.739304 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:11.749315 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:11.767605 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:08:11.773456 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:11.785767 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:08:11.797721 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:08:11.927185 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:11.944653 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:12.006834 systemd-networkd[1117]: lo: Link UP
Jan 13 20:08:12.007398 systemd-networkd[1117]: lo: Gained carrier
Jan 13 20:08:12.011034 systemd-networkd[1117]: Enumeration completed
Jan 13 20:08:12.011612 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:12.013328 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:12.013336 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:12.024646 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:12.029054 systemd-networkd[1117]: eth0: Link UP
Jan 13 20:08:12.029063 systemd-networkd[1117]: eth0: Gained carrier
Jan 13 20:08:12.029082 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:12.046190 ignition[1026]: Ignition 2.20.0
Jan 13 20:08:12.046208 ignition[1026]: Stage: fetch-offline
Jan 13 20:08:12.046767 ignition[1026]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:12.046793 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:12.052000 ignition[1026]: Ignition finished successfully
Jan 13 20:08:12.053942 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:12.084453 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.25.10/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:12.091749 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:08:12.116090 ignition[1125]: Ignition 2.20.0
Jan 13 20:08:12.116119 ignition[1125]: Stage: fetch
Jan 13 20:08:12.116805 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:12.116832 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:12.117003 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:12.135541 ignition[1125]: PUT result: OK
Jan 13 20:08:12.139579 ignition[1125]: parsed url from cmdline: ""
Jan 13 20:08:12.139599 ignition[1125]: no config URL provided
Jan 13 20:08:12.139617 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:08:12.139645 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:08:12.139683 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:12.142245 ignition[1125]: PUT result: OK
Jan 13 20:08:12.142472 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:08:12.154037 ignition[1125]: GET result: OK
Jan 13 20:08:12.154450 ignition[1125]: parsing config with SHA512: aa6752df26fcbd8bdc120df67f69a3707afa97ad72d97a94263c6a3207835d3cde6799d01c54c8d840d00976ffa5f84950942ba6e546da8685f5d1cd542dce30
Jan 13 20:08:12.162539 unknown[1125]: fetched base config from "system"
Jan 13 20:08:12.162562 unknown[1125]: fetched base config from "system"
Jan 13 20:08:12.162577 unknown[1125]: fetched user config from "aws"
Jan 13 20:08:12.163624 ignition[1125]: fetch: fetch complete
Jan 13 20:08:12.163639 ignition[1125]: fetch: fetch passed
Jan 13 20:08:12.163750 ignition[1125]: Ignition finished successfully
Jan 13 20:08:12.169166 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:12.188532 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:08:12.221977 ignition[1132]: Ignition 2.20.0
Jan 13 20:08:12.222591 ignition[1132]: Stage: kargs
Jan 13 20:08:12.223388 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:12.223419 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:12.223605 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:12.226318 ignition[1132]: PUT result: OK
Jan 13 20:08:12.239194 ignition[1132]: kargs: kargs passed
Jan 13 20:08:12.239363 ignition[1132]: Ignition finished successfully
Jan 13 20:08:12.245142 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:12.257596 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:08:12.294199 ignition[1138]: Ignition 2.20.0
Jan 13 20:08:12.294227 ignition[1138]: Stage: disks
Jan 13 20:08:12.294918 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:12.295003 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:12.295441 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:12.304043 ignition[1138]: PUT result: OK
Jan 13 20:08:12.311836 ignition[1138]: disks: disks passed
Jan 13 20:08:12.311979 ignition[1138]: Ignition finished successfully
Jan 13 20:08:12.316391 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:08:12.323530 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:12.326661 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:08:12.329719 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:12.332151 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:12.336861 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:08:12.354665 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:08:12.407156 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:08:12.411592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:08:12.431835 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:08:12.519336 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:08:12.520934 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:08:12.525370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:12.544541 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:12.554454 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:08:12.557549 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:08:12.557655 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:08:12.557712 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:12.589348 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Jan 13 20:08:12.593609 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:12.593659 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:12.595009 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:12.603805 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:08:12.619348 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:12.624660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:08:12.629672 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:12.735288 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:08:12.745648 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:08:12.754940 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:08:12.763314 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:08:12.909031 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:12.928614 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:08:12.936914 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:12.951843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:08:12.955436 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:12.997125 ignition[1277]: INFO : Ignition 2.20.0
Jan 13 20:08:12.997125 ignition[1277]: INFO : Stage: mount
Jan 13 20:08:13.001385 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:13.001385 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:13.001385 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.002627 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:13.013856 ignition[1277]: INFO : PUT result: OK
Jan 13 20:08:13.017646 ignition[1277]: INFO : mount: mount passed
Jan 13 20:08:13.019499 ignition[1277]: INFO : Ignition finished successfully
Jan 13 20:08:13.022071 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:08:13.040628 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:08:13.059585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:13.087276 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Jan 13 20:08:13.091994 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:08:13.092041 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:13.092067 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:13.092226 systemd-networkd[1117]: eth0: Gained IPv6LL
Jan 13 20:08:13.100529 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:13.103138 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:13.140843 ignition[1306]: INFO : Ignition 2.20.0
Jan 13 20:08:13.143697 ignition[1306]: INFO : Stage: files
Jan 13 20:08:13.143697 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:13.143697 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:13.143697 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:13.152572 ignition[1306]: INFO : PUT result: OK
Jan 13 20:08:13.159363 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:08:13.161969 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:08:13.161969 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:08:13.172493 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:08:13.176153 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:08:13.179961 unknown[1306]: wrote ssh authorized keys file for user: core
Jan 13 20:08:13.182680 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:08:13.188629 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:13.192442 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:08:13.554675 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:08:13.949304 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:13.955498 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:13.955498 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:13.955498 ignition[1306]: INFO : files: files passed
Jan 13 20:08:13.955498 ignition[1306]: INFO : Ignition finished successfully
Jan 13 20:08:13.971060 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:08:13.985711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:08:13.995817 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:08:14.013840 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:08:14.020372 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:08:14.032836 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:14.032836 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:14.037800 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:14.041030 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:14.045916 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:08:14.070520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:08:14.130336 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:08:14.130542 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:08:14.134989 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:08:14.141177 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:08:14.145523 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:08:14.167556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:08:14.202338 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:14.213745 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:08:14.242021 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:14.245481 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:14.249275 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:08:14.258194 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:08:14.259200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:14.266420 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:08:14.269578 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:08:14.278651 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:08:14.281638 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:14.290407 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:14.293940 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:08:14.302342 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:14.305831 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:08:14.313824 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:08:14.316913 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:08:14.323687 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:08:14.324245 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:14.332596 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:14.335861 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:14.345012 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:08:14.347872 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:14.350816 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:08:14.351107 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:14.360713 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:08:14.361232 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:14.370983 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:08:14.371610 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:08:14.393712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:08:14.396001 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:08:14.396710 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:14.410778 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:14.412905 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:08:14.413852 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:14.418994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:08:14.419614 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:14.452392 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:08:14.452633 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:08:14.469574 ignition[1358]: INFO : Ignition 2.20.0
Jan 13 20:08:14.474546 ignition[1358]: INFO : Stage: umount
Jan 13 20:08:14.476951 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:14.476951 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:14.476951 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:14.478573 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:08:14.489659 ignition[1358]: INFO : PUT result: OK
Jan 13 20:08:14.503946 ignition[1358]: INFO : umount: umount passed
Jan 13 20:08:14.506342 ignition[1358]: INFO : Ignition finished successfully
Jan 13 20:08:14.510131 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:08:14.512589 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:08:14.516813 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:08:14.516913 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:08:14.519628 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:08:14.519850 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:14.525872 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:08:14.525983 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:14.533871 systemd[1]: Stopped target network.target - Network.
Jan 13 20:08:14.535845 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:08:14.535977 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:14.538858 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:08:14.541018 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:08:14.560976 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:14.563983 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:08:14.566101 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:08:14.568364 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:08:14.568445 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:14.570784 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:08:14.570851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:14.573287 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:08:14.573379 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:08:14.576026 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:08:14.576129 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:14.578928 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:08:14.581551 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:14.585071 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:08:14.585318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:14.596745 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:08:14.596950 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:14.604521 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:08:14.606433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:14.606530 systemd-networkd[1117]: eth0: DHCPv6 lease lost
Jan 13 20:08:14.616750 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:08:14.617093 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:08:14.627432 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:08:14.627541 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:14.666542 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:08:14.668897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:08:14.669025 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:14.677701 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:08:14.677820 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:14.680109 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:08:14.680206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:14.683028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:08:14.683138 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:14.687181 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:14.728677 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:08:14.729133 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:08:14.738635 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:08:14.739080 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:14.744633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:08:14.744725 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:14.747377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:08:14.747449 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:14.750151 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:08:14.750302 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:14.753238 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:08:14.753389 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:14.756224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:14.756356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:14.789926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:08:14.802506 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:08:14.805081 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:14.808133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:14.808241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:14.818672 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:08:14.818997 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:08:14.829715 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:08:14.849666 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:08:14.863641 systemd[1]: Switching root.
Jan 13 20:08:14.904177 systemd-journald[252]: Journal stopped
Jan 13 20:08:16.991057 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:08:16.991197 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:08:16.991316 kernel: SELinux: policy capability open_perms=1
Jan 13 20:08:16.991354 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:08:16.991391 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:08:16.991425 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:08:16.991457 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:08:16.991487 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:08:16.991517 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:08:16.991549 kernel: audit: type=1403 audit(1736798895.242:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:08:16.991591 systemd[1]: Successfully loaded SELinux policy in 51.449ms.
Jan 13 20:08:16.991642 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.746ms.
Jan 13 20:08:16.991680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:16.991713 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:16.991745 systemd[1]: Detected architecture arm64.
Jan 13 20:08:16.991774 systemd[1]: Detected first boot.
Jan 13 20:08:16.991807 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:16.991838 zram_generator::config[1400]: No configuration found.
Jan 13 20:08:16.991873 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:08:16.991934 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:08:16.991972 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:08:16.992005 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:08:16.992040 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:08:16.992074 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:08:16.992108 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:08:16.992139 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:08:16.992170 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:08:16.992202 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:08:16.992234 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:08:16.993769 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:08:16.993816 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:16.993848 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:16.993878 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:08:16.993920 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:08:16.993954 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:08:16.993987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:16.994019 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:08:16.994051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:16.994082 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:08:16.994111 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:08:16.994142 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:16.994187 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:08:16.994218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:16.994283 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:16.994323 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:16.994355 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:16.994390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:08:16.994421 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:08:16.994452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:16.994484 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:16.994519 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:16.994550 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:08:16.994580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:08:16.994611 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:08:16.994643 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:08:16.994673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:08:16.994702 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:08:16.994733 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:08:16.994765 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:08:16.994799 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:08:16.994829 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:08:16.994858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:16.994887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:16.994918 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:08:16.994947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:08:16.994977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:08:16.995008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:08:16.995043 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:08:16.995074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:08:16.995108 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:08:16.995138 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:08:16.995167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:08:16.995197 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:08:16.995229 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:08:16.995294 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:16.995330 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:16.995367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:08:16.995402 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:08:16.995435 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:16.995469 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:08:16.995501 systemd[1]: Stopped verity-setup.service.
Jan 13 20:08:16.995534 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:08:16.995568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:08:16.995597 kernel: ACPI: bus type drm_connector registered
Jan 13 20:08:16.995628 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:08:16.995662 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:08:16.995693 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:08:16.995722 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:08:16.995754 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:16.995788 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:08:16.995817 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:08:16.995847 kernel: fuse: init (API version 7.39)
Jan 13 20:08:16.995876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:08:16.995931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:08:16.995961 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:08:16.995991 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:08:16.996019 kernel: loop: module loaded
Jan 13 20:08:16.996049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:08:16.996083 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:08:16.996114 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:08:16.996143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:08:16.996171 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:08:16.996202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:08:16.996235 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:16.996374 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:08:16.996428 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:08:16.998503 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:08:16.998534 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:08:16.998610 systemd-journald[1485]: Collecting audit messages is disabled.
Jan 13 20:08:16.998673 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:08:16.998711 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:08:16.998742 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:16.998772 systemd-journald[1485]: Journal started
Jan 13 20:08:16.998820 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2f7e5f651540ab778fddad7ef2dc24) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:16.308170 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:08:16.336342 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:08:16.337392 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:08:17.007439 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:08:17.020767 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:08:17.037129 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:08:17.037325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:17.056613 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:08:17.056706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:08:17.065510 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:08:17.073413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:08:17.089288 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:17.098976 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:08:17.099066 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:17.108368 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:08:17.111686 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:08:17.125786 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:08:17.129089 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:08:17.177748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:17.208207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:08:17.213280 kernel: loop0: detected capacity change from 0 to 113536
Jan 13 20:08:17.220998 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:08:17.231712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:08:17.240480 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:08:17.258714 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:08:17.281645 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2f7e5f651540ab778fddad7ef2dc24 is 177.587ms for 895 entries.
Jan 13 20:08:17.281645 systemd-journald[1485]: System Journal (/var/log/journal/ec2f7e5f651540ab778fddad7ef2dc24) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:08:17.472076 systemd-journald[1485]: Received client request to flush runtime journal.
Jan 13 20:08:17.472163 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:08:17.472197 kernel: loop1: detected capacity change from 0 to 53784
Jan 13 20:08:17.472229 kernel: loop2: detected capacity change from 0 to 116808
Jan 13 20:08:17.343845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:17.359028 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:08:17.432473 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:08:17.433773 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:08:17.482934 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:08:17.490110 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:08:17.507296 kernel: loop3: detected capacity change from 0 to 194512
Jan 13 20:08:17.510758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:17.515376 udevadm[1545]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:08:17.601682 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Jan 13 20:08:17.604388 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Jan 13 20:08:17.622961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:17.631309 kernel: loop4: detected capacity change from 0 to 113536
Jan 13 20:08:17.664322 kernel: loop5: detected capacity change from 0 to 53784
Jan 13 20:08:17.703297 kernel: loop6: detected capacity change from 0 to 116808
Jan 13 20:08:17.737208 kernel: loop7: detected capacity change from 0 to 194512
Jan 13 20:08:17.772445 (sd-merge)[1556]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:08:17.776309 (sd-merge)[1556]: Merged extensions into '/usr'.
Jan 13 20:08:17.788287 systemd[1]: Reloading requested from client PID 1511 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:08:17.788459 systemd[1]: Reloading...
Jan 13 20:08:18.001293 zram_generator::config[1585]: No configuration found.
Jan 13 20:08:18.155779 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:08:18.376434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:08:18.513853 systemd[1]: Reloading finished in 723 ms.
Jan 13 20:08:18.561330 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:08:18.564822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:08:18.569068 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:08:18.588630 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:08:18.593674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:18.602733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:18.626495 systemd[1]: Reloading requested from client PID 1635 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:08:18.626529 systemd[1]: Reloading...
Jan 13 20:08:18.675310 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:08:18.676068 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:08:18.678472 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:08:18.679071 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Jan 13 20:08:18.680238 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Jan 13 20:08:18.687911 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:08:18.687942 systemd-tmpfiles[1636]: Skipping /boot
Jan 13 20:08:18.720210 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:08:18.720245 systemd-tmpfiles[1636]: Skipping /boot
Jan 13 20:08:18.789736 systemd-udevd[1637]: Using default interface naming scheme 'v255'.
Jan 13 20:08:18.843325 zram_generator::config[1666]: No configuration found.
Jan 13 20:08:19.052549 (udev-worker)[1680]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:19.374339 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1738)
Jan 13 20:08:19.380368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:08:19.598208 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:08:19.600067 systemd[1]: Reloading finished in 972 ms.
Jan 13 20:08:19.653456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:19.657354 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:19.731235 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:08:19.765919 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:19.780797 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:08:19.787327 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:08:19.790760 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:19.795854 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:08:19.808842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:08:19.815860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:08:19.822327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:08:19.825700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:19.831048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:08:19.841741 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:08:19.850771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:19.860326 lvm[1832]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:08:19.869759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:19.888895 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:08:19.919890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:19.931732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:08:19.934416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:08:19.938286 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:08:19.938630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:08:19.952837 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:19.961866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:08:19.972712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:08:19.975308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:19.981344 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:08:20.001654 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:08:20.002197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:08:20.007472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:08:20.017854 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:08:20.021370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:08:20.021694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:08:20.022084 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:08:20.038374 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:08:20.041220 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:08:20.048701 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:08:20.090514 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:08:20.111847 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:08:20.138673 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:08:20.144683 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:08:20.145163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:08:20.151626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:08:20.152028 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:08:20.155796 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:08:20.156451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:08:20.161560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:08:20.170520 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:20.184565 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:08:20.187471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:08:20.198525 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:08:20.206979 augenrules[1882]: No rules
Jan 13 20:08:20.212064 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:08:20.213100 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:08:20.226696 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:08:20.235968 lvm[1879]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:08:20.273520 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:08:20.319387 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:08:20.343954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:20.389141 systemd-networkd[1839]: lo: Link UP
Jan 13 20:08:20.389169 systemd-networkd[1839]: lo: Gained carrier
Jan 13 20:08:20.392504 systemd-networkd[1839]: Enumeration completed
Jan 13 20:08:20.392712 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:20.399148 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:20.399173 systemd-networkd[1839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:20.401550 systemd-networkd[1839]: eth0: Link UP
Jan 13 20:08:20.402016 systemd-networkd[1839]: eth0: Gained carrier
Jan 13 20:08:20.402056 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:20.404671 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:08:20.413432 systemd-networkd[1839]: eth0: DHCPv4 address 172.31.25.10/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:20.449221 systemd-resolved[1840]: Positive Trust Anchors:
Jan 13 20:08:20.449336 systemd-resolved[1840]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:20.449400 systemd-resolved[1840]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:20.460035 systemd-resolved[1840]: Defaulting to hostname 'linux'.
Jan 13 20:08:20.464217 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:20.467368 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:20.469813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:20.472518 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:20.475108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:08:20.478113 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:08:20.481108 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:08:20.483720 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:08:20.486355 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:08:20.490377 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:08:20.490450 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:08:20.492762 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:08:20.496395 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:08:20.501881 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:08:20.516104 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:08:20.519903 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:08:20.522996 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:08:20.525798 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:08:20.528136 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:08:20.528201 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:08:20.539658 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:08:20.548869 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:08:20.557662 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:08:20.566517 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jan 13 20:08:20.575682 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:08:20.578581 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:08:20.583651 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:08:20.593759 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:08:20.597589 jq[1906]: false Jan 13 20:08:20.603441 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:08:20.623396 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:08:20.631730 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:08:20.643679 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:08:20.647656 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:08:20.650172 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:08:20.659630 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:08:20.665553 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:08:20.673396 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:08:20.673861 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:08:20.692436 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:08:20.692975 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:08:20.753643 dbus-daemon[1905]: [system] SELinux support is enabled Jan 13 20:08:20.754005 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:08:20.762576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:08:20.762679 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:08:20.765609 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:08:20.765654 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:08:20.785843 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1839 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:20.796190 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:08:20.799084 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:08:20.800777 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:08:20.802139 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: ---------------------------------------------------- Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: corporation. Support and training for ntp-4 are Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 20:08:20.807131 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: ---------------------------------------------------- Jan 13 20:08:20.802195 ntpd[1909]: ---------------------------------------------------- Jan 13 20:08:20.802217 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:08:20.802236 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:08:20.805336 ntpd[1909]: corporation. Support and training for ntp-4 are Jan 13 20:08:20.805384 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 20:08:20.805406 ntpd[1909]: ---------------------------------------------------- Jan 13 20:08:20.814096 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 20:08:20.817497 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 20:08:20.817497 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: basedate set to 2025-01-01 Jan 13 20:08:20.817497 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 20:08:20.815705 ntpd[1909]: basedate set to 2025-01-01 Jan 13 20:08:20.815742 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 20:08:20.829012 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:08:20.829012 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:08:20.828665 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:08:20.828767 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:08:20.833569 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:08:20.835438 jq[1917]: true Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:08:20.835920 ntpd[1909]: 13 
Jan 20:08:20 ntpd[1909]: Listen normally on 3 eth0 172.31.25.10:123 Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: bind(21) AF_INET6 fe80::48e:96ff:fe31:14b7%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: unable to create socket on eth0 (5) for fe80::48e:96ff:fe31:14b7%2#123 Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: failed to init interface for address fe80::48e:96ff:fe31:14b7%2 Jan 13 20:08:20.835920 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Jan 13 20:08:20.833672 ntpd[1909]: Listen normally on 3 eth0 172.31.25.10:123 Jan 13 20:08:20.833749 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 20:08:20.833842 ntpd[1909]: bind(21) AF_INET6 fe80::48e:96ff:fe31:14b7%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:20.833889 ntpd[1909]: unable to create socket on eth0 (5) for fe80::48e:96ff:fe31:14b7%2#123 Jan 13 20:08:20.833918 ntpd[1909]: failed to init interface for address fe80::48e:96ff:fe31:14b7%2 Jan 13 20:08:20.834002 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Jan 13 20:08:20.843549 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:20.848484 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:20.848484 ntpd[1909]: 13 Jan 20:08:20 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:20.843618 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:20.849642 update_engine[1916]: I20250113 20:08:20.849483 1916 main.cc:92] Flatcar Update Engine starting Jan 13 20:08:20.856369 extend-filesystems[1907]: Found loop4 Jan 13 20:08:20.856369 extend-filesystems[1907]: Found loop5 Jan 13 20:08:20.856369 extend-filesystems[1907]: 
Found loop6 Jan 13 20:08:20.856369 extend-filesystems[1907]: Found loop7 Jan 13 20:08:20.856369 extend-filesystems[1907]: Found nvme0n1 Jan 13 20:08:20.856369 extend-filesystems[1907]: Found nvme0n1p1 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p2 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p3 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found usr Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p4 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p6 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p7 Jan 13 20:08:20.873468 extend-filesystems[1907]: Found nvme0n1p9 Jan 13 20:08:20.873468 extend-filesystems[1907]: Checking size of /dev/nvme0n1p9 Jan 13 20:08:20.907472 update_engine[1916]: I20250113 20:08:20.882156 1916 update_check_scheduler.cc:74] Next update check in 3m39s Jan 13 20:08:20.883427 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:08:20.902801 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:08:20.932956 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:08:20.933412 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:08:20.939777 (ntainerd)[1936]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:08:20.990350 extend-filesystems[1907]: Resized partition /dev/nvme0n1p9 Jan 13 20:08:21.006292 extend-filesystems[1951]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:08:21.033514 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:08:21.033659 jq[1941]: true Jan 13 20:08:21.096383 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 13 20:08:21.104715 systemd-logind[1915]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:08:21.104780 systemd-logind[1915]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:08:21.105639 systemd-logind[1915]: New seat seat0. Jan 13 20:08:21.109841 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:08:21.166353 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:08:21.177899 coreos-metadata[1904]: Jan 13 20:08:21.177 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:21.196838 coreos-metadata[1904]: Jan 13 20:08:21.185 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:08:21.196838 coreos-metadata[1904]: Jan 13 20:08:21.187 INFO Fetch successful Jan 13 20:08:21.196838 coreos-metadata[1904]: Jan 13 20:08:21.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:08:21.196838 coreos-metadata[1904]: Jan 13 20:08:21.195 INFO Fetch successful Jan 13 20:08:21.196838 coreos-metadata[1904]: Jan 13 20:08:21.196 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:08:21.197909 coreos-metadata[1904]: Jan 13 20:08:21.197 INFO Fetch successful Jan 13 20:08:21.197909 coreos-metadata[1904]: Jan 13 20:08:21.197 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:08:21.198932 coreos-metadata[1904]: Jan 13 20:08:21.198 INFO Fetch successful Jan 13 20:08:21.198932 coreos-metadata[1904]: Jan 13 20:08:21.198 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:08:21.200674 extend-filesystems[1951]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:08:21.200674 extend-filesystems[1951]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:08:21.200674 extend-filesystems[1951]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jan 13 20:08:21.218927 extend-filesystems[1907]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:08:21.204714 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.203 INFO Fetch failed with 404: resource not found Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.206 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.208 INFO Fetch successful Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.208 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.210 INFO Fetch successful Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.210 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.218 INFO Fetch successful Jan 13 20:08:21.222627 coreos-metadata[1904]: Jan 13 20:08:21.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:08:21.205114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:08:21.237299 coreos-metadata[1904]: Jan 13 20:08:21.230 INFO Fetch successful Jan 13 20:08:21.237299 coreos-metadata[1904]: Jan 13 20:08:21.230 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:08:21.237299 coreos-metadata[1904]: Jan 13 20:08:21.234 INFO Fetch successful Jan 13 20:08:21.277130 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1671) Jan 13 20:08:21.306880 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 13 20:08:21.337596 bash[1994]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:21.352456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:08:21.403831 systemd[1]: Starting sshkeys.service... Jan 13 20:08:21.423388 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:08:21.430721 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:08:21.441945 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:08:21.442701 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:08:21.446015 dbus-daemon[1905]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1932 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:21.464640 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:08:21.477594 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:08:21.510979 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:08:21.538755 locksmithd[1944]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:08:21.543771 polkitd[2016]: Started polkitd version 121 Jan 13 20:08:21.566554 polkitd[2016]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:08:21.566707 polkitd[2016]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:08:21.574338 polkitd[2016]: Finished loading, compiling and executing 2 rules Jan 13 20:08:21.575327 dbus-daemon[1905]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:08:21.575805 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 13 20:08:21.579732 polkitd[2016]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:08:21.629019 systemd-hostnamed[1932]: Hostname set to (transient) Jan 13 20:08:21.631214 systemd-resolved[1840]: System hostname changed to 'ip-172-31-25-10'. Jan 13 20:08:21.668431 systemd-networkd[1839]: eth0: Gained IPv6LL Jan 13 20:08:21.681766 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:08:21.688495 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:08:21.699474 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:08:21.714788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:21.722951 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:08:21.888460 containerd[1936]: time="2025-01-13T20:08:21.885059219Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:08:21.896805 coreos-metadata[2019]: Jan 13 20:08:21.896 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:21.902347 coreos-metadata[2019]: Jan 13 20:08:21.900 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:08:21.902347 coreos-metadata[2019]: Jan 13 20:08:21.901 INFO Fetch successful Jan 13 20:08:21.902347 coreos-metadata[2019]: Jan 13 20:08:21.901 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:08:21.912337 coreos-metadata[2019]: Jan 13 20:08:21.909 INFO Fetch successful Jan 13 20:08:21.918151 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 20:08:21.921343 amazon-ssm-agent[2068]: Initializing new seelog logger Jan 13 20:08:21.922551 unknown[2019]: wrote ssh authorized keys file for user: core Jan 13 20:08:21.931930 amazon-ssm-agent[2068]: New Seelog Logger Creation Complete Jan 13 20:08:21.931930 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.931930 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.931930 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 processing appconfig overrides Jan 13 20:08:21.931930 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO Proxy environment variables: Jan 13 20:08:21.937378 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.951426 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.951426 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 processing appconfig overrides Jan 13 20:08:21.951426 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.951426 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.951426 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 processing appconfig overrides Jan 13 20:08:21.959585 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.960189 amazon-ssm-agent[2068]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:21.961820 amazon-ssm-agent[2068]: 2025/01/13 20:08:21 processing appconfig overrides Jan 13 20:08:22.023640 update-ssh-keys[2104]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:22.023368 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 13 20:08:22.041362 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO https_proxy: Jan 13 20:08:22.049490 systemd[1]: Finished sshkeys.service. Jan 13 20:08:22.134418 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO http_proxy: Jan 13 20:08:22.144187 containerd[1936]: time="2025-01-13T20:08:22.139317800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149203244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149326964Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149368916Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149685440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149725904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149858408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.149887388Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.150193928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.150233192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.150296864Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151296 containerd[1936]: time="2025-01-13T20:08:22.150330500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151892 containerd[1936]: time="2025-01-13T20:08:22.150544244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.151892 containerd[1936]: time="2025-01-13T20:08:22.151029236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:22.157301 containerd[1936]: time="2025-01-13T20:08:22.154637132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:22.157301 containerd[1936]: time="2025-01-13T20:08:22.154703216Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 13 20:08:22.157301 containerd[1936]: time="2025-01-13T20:08:22.154942256Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:08:22.158606 containerd[1936]: time="2025-01-13T20:08:22.158522120Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170239712Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170414192Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170459660Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170499512Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170549564Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:08:22.170865 containerd[1936]: time="2025-01-13T20:08:22.170862188Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.175633760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176045528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176112092Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176160956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176205704Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176243900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176352392Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176397884Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176445560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176487908Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176532452Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176573912Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176636240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 13 20:08:22.178324 containerd[1936]: time="2025-01-13T20:08:22.176697152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.176750552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.176797340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.176838644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.176881652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.176967308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177015404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177059096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177107804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177148604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177192464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177234188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177319556Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177387896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177435068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.179014 containerd[1936]: time="2025-01-13T20:08:22.177472844Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177652268Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177711800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177751928Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177798596Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177827804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177869300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177905144Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:08:22.179669 containerd[1936]: time="2025-01-13T20:08:22.177942080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:08:22.180051 containerd[1936]: time="2025-01-13T20:08:22.178629392Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:08:22.180051 containerd[1936]: time="2025-01-13T20:08:22.178815524Z" level=info msg="Connect containerd service" Jan 13 20:08:22.180051 containerd[1936]: time="2025-01-13T20:08:22.179027528Z" level=info msg="using legacy CRI server" Jan 13 20:08:22.180051 containerd[1936]: time="2025-01-13T20:08:22.179064512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:08:22.180051 containerd[1936]: time="2025-01-13T20:08:22.179715440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:08:22.188201 containerd[1936]: time="2025-01-13T20:08:22.188066036Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188546888Z" level=info msg="Start subscribing containerd event" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188647004Z" level=info msg="Start recovering state" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188796908Z" level=info msg="Start event monitor" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188826884Z" level=info msg="Start snapshots syncer" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188849816Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:08:22.189155 containerd[1936]: time="2025-01-13T20:08:22.188872376Z" level=info msg="Start streaming server" Jan 13 20:08:22.190912 containerd[1936]: time="2025-01-13T20:08:22.189853532Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:08:22.190912 containerd[1936]: time="2025-01-13T20:08:22.189999140Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:08:22.190912 containerd[1936]: time="2025-01-13T20:08:22.190116656Z" level=info msg="containerd successfully booted in 0.315308s" Jan 13 20:08:22.190291 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 20:08:22.237292 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO no_proxy: Jan 13 20:08:22.336284 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:08:22.433890 amazon-ssm-agent[2068]: 2025-01-13 20:08:21 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:08:22.532559 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO Agent will take identity from EC2 Jan 13 20:08:22.630988 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [Registrar] Starting registrar module Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:08:22.689146 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:08:22.731165 amazon-ssm-agent[2068]: 2025-01-13 20:08:22 INFO [CredentialRefresher] Next credential rotation will be in 30.08332134653333 minutes Jan 13 20:08:23.588175 sshd_keygen[1946]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:08:23.636618 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:08:23.656978 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:08:23.663811 systemd[1]: Started sshd@0-172.31.25.10:22-139.178.68.195:40254.service - OpenSSH per-connection server daemon (139.178.68.195:40254). Jan 13 20:08:23.672653 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:08:23.673470 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:08:23.685888 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:08:23.744361 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:08:23.751474 amazon-ssm-agent[2068]: 2025-01-13 20:08:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:08:23.757565 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:08:23.767907 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:08:23.770670 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 13 20:08:23.806460 ntpd[1909]: Listen normally on 6 eth0 [fe80::48e:96ff:fe31:14b7%2]:123 Jan 13 20:08:23.806917 ntpd[1909]: 13 Jan 20:08:23 ntpd[1909]: Listen normally on 6 eth0 [fe80::48e:96ff:fe31:14b7%2]:123 Jan 13 20:08:23.852021 amazon-ssm-agent[2068]: 2025-01-13 20:08:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2140) started Jan 13 20:08:23.922584 sshd[2132]: Accepted publickey for core from 139.178.68.195 port 40254 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:23.932918 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:23.952161 amazon-ssm-agent[2068]: 2025-01-13 20:08:23 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:08:23.955387 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:08:23.967884 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:08:24.001459 systemd-logind[1915]: New session 1 of user core. Jan 13 20:08:24.027383 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:08:24.038867 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:08:24.054395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:24.061414 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:08:24.065928 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:24.066889 (systemd)[2157]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:08:24.283786 systemd[2157]: Queued start job for default target default.target. Jan 13 20:08:24.291183 systemd[2157]: Created slice app.slice - User Application Slice. 
Jan 13 20:08:24.291473 systemd[2157]: Reached target paths.target - Paths. Jan 13 20:08:24.291606 systemd[2157]: Reached target timers.target - Timers. Jan 13 20:08:24.294350 systemd[2157]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:08:24.318707 systemd[2157]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:08:24.318836 systemd[2157]: Reached target sockets.target - Sockets. Jan 13 20:08:24.318868 systemd[2157]: Reached target basic.target - Basic System. Jan 13 20:08:24.318971 systemd[2157]: Reached target default.target - Main User Target. Jan 13 20:08:24.319038 systemd[2157]: Startup finished in 239ms. Jan 13 20:08:24.320409 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:08:24.330569 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:08:24.333160 systemd[1]: Startup finished in 1.093s (kernel) + 7.441s (initrd) + 9.140s (userspace) = 17.674s. Jan 13 20:08:24.500752 systemd[1]: Started sshd@1-172.31.25.10:22-139.178.68.195:40262.service - OpenSSH per-connection server daemon (139.178.68.195:40262). Jan 13 20:08:24.706659 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 40262 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:24.709338 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:24.719887 systemd-logind[1915]: New session 2 of user core. Jan 13 20:08:24.726524 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:08:24.852689 sshd[2180]: Connection closed by 139.178.68.195 port 40262 Jan 13 20:08:24.854537 sshd-session[2178]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:24.861356 systemd[1]: sshd@1-172.31.25.10:22-139.178.68.195:40262.service: Deactivated successfully. Jan 13 20:08:24.865587 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:08:24.868925 systemd-logind[1915]: Session 2 logged out. 
Waiting for processes to exit. Jan 13 20:08:24.871846 systemd-logind[1915]: Removed session 2. Jan 13 20:08:24.892577 systemd[1]: Started sshd@2-172.31.25.10:22-139.178.68.195:36492.service - OpenSSH per-connection server daemon (139.178.68.195:36492). Jan 13 20:08:25.080031 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 36492 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:25.082781 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:25.094714 systemd-logind[1915]: New session 3 of user core. Jan 13 20:08:25.100128 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:08:25.222293 sshd[2187]: Connection closed by 139.178.68.195 port 36492 Jan 13 20:08:25.222565 sshd-session[2185]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:25.233795 systemd[1]: sshd@2-172.31.25.10:22-139.178.68.195:36492.service: Deactivated successfully. Jan 13 20:08:25.237197 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:08:25.240401 systemd-logind[1915]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:08:25.242706 systemd-logind[1915]: Removed session 3. Jan 13 20:08:25.268928 systemd[1]: Started sshd@3-172.31.25.10:22-139.178.68.195:36500.service - OpenSSH per-connection server daemon (139.178.68.195:36500). Jan 13 20:08:25.453595 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 36500 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:25.458024 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:25.469490 systemd-logind[1915]: New session 4 of user core. Jan 13 20:08:25.475961 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 20:08:25.588091 kubelet[2158]: E0113 20:08:25.587922 2158 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:25.593576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:25.593909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:25.595411 systemd[1]: kubelet.service: Consumed 1.380s CPU time. Jan 13 20:08:25.606317 sshd[2196]: Connection closed by 139.178.68.195 port 36500 Jan 13 20:08:25.607059 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:25.613855 systemd[1]: sshd@3-172.31.25.10:22-139.178.68.195:36500.service: Deactivated successfully. Jan 13 20:08:25.617199 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:08:25.619193 systemd-logind[1915]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:08:25.621452 systemd-logind[1915]: Removed session 4. Jan 13 20:08:25.650763 systemd[1]: Started sshd@4-172.31.25.10:22-139.178.68.195:36504.service - OpenSSH per-connection server daemon (139.178.68.195:36504). Jan 13 20:08:25.827865 sshd[2202]: Accepted publickey for core from 139.178.68.195 port 36504 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:25.830225 sshd-session[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:25.838602 systemd-logind[1915]: New session 5 of user core. Jan 13 20:08:25.848525 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:08:25.966822 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:08:25.967473 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:25.986329 sudo[2205]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:26.009352 sshd[2204]: Connection closed by 139.178.68.195 port 36504 Jan 13 20:08:26.010462 sshd-session[2202]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:26.018565 systemd[1]: sshd@4-172.31.25.10:22-139.178.68.195:36504.service: Deactivated successfully. Jan 13 20:08:26.021717 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:08:26.023166 systemd-logind[1915]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:08:26.025627 systemd-logind[1915]: Removed session 5. Jan 13 20:08:26.061777 systemd[1]: Started sshd@5-172.31.25.10:22-139.178.68.195:36512.service - OpenSSH per-connection server daemon (139.178.68.195:36512). Jan 13 20:08:26.243366 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 36512 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:26.243855 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:26.259637 systemd-logind[1915]: New session 6 of user core. Jan 13 20:08:26.273544 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:08:26.376856 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:08:26.378018 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:26.384375 sudo[2215]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:26.394320 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:08:26.394949 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:26.422176 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:08:26.469134 augenrules[2237]: No rules Jan 13 20:08:26.471543 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:08:26.472469 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:08:26.474809 sudo[2214]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:26.497363 sshd[2213]: Connection closed by 139.178.68.195 port 36512 Jan 13 20:08:26.497741 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:26.503724 systemd-logind[1915]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:08:26.504657 systemd[1]: sshd@5-172.31.25.10:22-139.178.68.195:36512.service: Deactivated successfully. Jan 13 20:08:26.508007 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:08:26.511201 systemd-logind[1915]: Removed session 6. Jan 13 20:08:26.539703 systemd[1]: Started sshd@6-172.31.25.10:22-139.178.68.195:36518.service - OpenSSH per-connection server daemon (139.178.68.195:36518). 
Jan 13 20:08:26.715416 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 36518 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:26.717758 sshd-session[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:26.725016 systemd-logind[1915]: New session 7 of user core. Jan 13 20:08:26.734488 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:08:26.837711 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:08:26.838361 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:27.653870 systemd-resolved[1840]: Clock change detected. Flushing caches. Jan 13 20:08:27.935872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:27.936223 systemd[1]: kubelet.service: Consumed 1.380s CPU time. Jan 13 20:08:27.953500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:27.999168 systemd[1]: Reloading requested from client PID 2285 ('systemctl') (unit session-7.scope)... Jan 13 20:08:27.999196 systemd[1]: Reloading... Jan 13 20:08:28.215053 zram_generator::config[2325]: No configuration found. Jan 13 20:08:28.463924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:28.625347 systemd[1]: Reloading finished in 625 ms. Jan 13 20:08:28.720528 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:08:28.720928 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:08:28.722146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:28.728553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:08:29.048297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:29.051525 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:08:29.135054 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:29.135054 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:08:29.135054 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:29.137070 kubelet[2389]: I0113 20:08:29.134958 2389 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:08:30.332733 kubelet[2389]: I0113 20:08:30.332690 2389 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:08:30.333345 kubelet[2389]: I0113 20:08:30.333321 2389 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:08:30.333779 kubelet[2389]: I0113 20:08:30.333758 2389 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:08:30.371084 kubelet[2389]: I0113 20:08:30.371034 2389 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:08:30.385105 kubelet[2389]: I0113 20:08:30.385060 2389 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:08:30.385588 kubelet[2389]: I0113 20:08:30.385555 2389 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:08:30.385898 kubelet[2389]: I0113 20:08:30.385866 2389 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:08:30.386075 kubelet[2389]: I0113 20:08:30.385909 2389 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:08:30.386075 kubelet[2389]: I0113 20:08:30.385931 2389 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:08:30.387747 kubelet[2389]: I0113 
20:08:30.387690 2389 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:30.392554 kubelet[2389]: I0113 20:08:30.392487 2389 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:08:30.392554 kubelet[2389]: I0113 20:08:30.392537 2389 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:08:30.392709 kubelet[2389]: I0113 20:08:30.392580 2389 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:08:30.392709 kubelet[2389]: I0113 20:08:30.392613 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:08:30.394252 kubelet[2389]: E0113 20:08:30.393372 2389 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:30.394252 kubelet[2389]: E0113 20:08:30.393625 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:30.395571 kubelet[2389]: I0113 20:08:30.395521 2389 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:08:30.396492 kubelet[2389]: I0113 20:08:30.396465 2389 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:08:30.396693 kubelet[2389]: W0113 20:08:30.396674 2389 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:08:30.398160 kubelet[2389]: I0113 20:08:30.398126 2389 server.go:1256] "Started kubelet" Jan 13 20:08:30.402248 kubelet[2389]: I0113 20:08:30.402207 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:08:30.406039 kubelet[2389]: I0113 20:08:30.405952 2389 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:08:30.407338 kubelet[2389]: I0113 20:08:30.407277 2389 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:08:30.408939 kubelet[2389]: I0113 20:08:30.408880 2389 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:08:30.409304 kubelet[2389]: I0113 20:08:30.409266 2389 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:08:30.411094 kubelet[2389]: I0113 20:08:30.410760 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:08:30.411577 kubelet[2389]: I0113 20:08:30.411531 2389 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:08:30.411759 kubelet[2389]: I0113 20:08:30.411721 2389 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:08:30.429058 kubelet[2389]: I0113 20:08:30.428188 2389 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:08:30.429058 kubelet[2389]: I0113 20:08:30.428226 2389 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:08:30.429058 kubelet[2389]: I0113 20:08:30.428374 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:08:30.446326 kubelet[2389]: E0113 20:08:30.446290 2389 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:08:30.460421 kubelet[2389]: W0113 20:08:30.460383 2389 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:08:30.461031 kubelet[2389]: E0113 20:08:30.460990 2389 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:08:30.463387 kubelet[2389]: E0113 20:08:30.463329 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.25.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 20:08:30.463539 kubelet[2389]: W0113 20:08:30.463498 2389 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.25.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:08:30.463539 kubelet[2389]: E0113 20:08:30.463529 2389 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:08:30.463665 kubelet[2389]: W0113 20:08:30.463628 2389 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:08:30.463718 kubelet[2389]: E0113 20:08:30.463678 2389 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:08:30.466048 kubelet[2389]: I0113 20:08:30.465998 2389 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:08:30.466316 kubelet[2389]: I0113 20:08:30.466275 2389 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:08:30.466748 kubelet[2389]: I0113 20:08:30.466437 2389 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:30.469493 kubelet[2389]: E0113 20:08:30.469434 2389 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.10.181a596834902589 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.10,UID:172.31.25.10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.25.10,},FirstTimestamp:2025-01-13 20:08:30.398080393 +0000 UTC m=+1.339229504,LastTimestamp:2025-01-13 20:08:30.398080393 +0000 UTC m=+1.339229504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.10,}" Jan 13 20:08:30.470059 kubelet[2389]: I0113 20:08:30.469903 2389 policy_none.go:49] "None policy: Start" Jan 13 20:08:30.473124 kubelet[2389]: I0113 20:08:30.472558 2389 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:08:30.473124 kubelet[2389]: I0113 20:08:30.472632 2389 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:08:30.473124 kubelet[2389]: E0113 20:08:30.473097 2389 event.go:346] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.10.181a5968376f61f5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.10,UID:172.31.25.10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.25.10,},FirstTimestamp:2025-01-13 20:08:30.446264821 +0000 UTC m=+1.387413932,LastTimestamp:2025-01-13 20:08:30.446264821 +0000 UTC m=+1.387413932,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.10,}" Jan 13 20:08:30.488569 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:08:30.510392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:08:30.514533 kubelet[2389]: I0113 20:08:30.514498 2389 kubelet_node_status.go:73] "Attempting to register node" node="172.31.25.10" Jan 13 20:08:30.519349 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:08:30.528074 kubelet[2389]: I0113 20:08:30.526373 2389 kubelet_node_status.go:76] "Successfully registered node" node="172.31.25.10" Jan 13 20:08:30.531773 kubelet[2389]: I0113 20:08:30.531079 2389 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:08:30.531773 kubelet[2389]: I0113 20:08:30.531461 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:08:30.542312 kubelet[2389]: E0113 20:08:30.542277 2389 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.10\" not found" Jan 13 20:08:30.550093 kubelet[2389]: I0113 20:08:30.550057 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:08:30.552554 kubelet[2389]: I0113 20:08:30.552518 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:08:30.552733 kubelet[2389]: I0113 20:08:30.552712 2389 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:08:30.552887 kubelet[2389]: I0113 20:08:30.552865 2389 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:08:30.553072 kubelet[2389]: E0113 20:08:30.553050 2389 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 20:08:30.580587 kubelet[2389]: E0113 20:08:30.580522 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:30.681359 kubelet[2389]: E0113 20:08:30.681208 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:30.781752 kubelet[2389]: E0113 20:08:30.781680 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:30.882359 kubelet[2389]: E0113 20:08:30.882313 2389 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:30.983106 kubelet[2389]: E0113 20:08:30.982954 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.083646 kubelet[2389]: E0113 20:08:31.083587 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.184251 kubelet[2389]: E0113 20:08:31.184196 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.284881 kubelet[2389]: E0113 20:08:31.284761 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.337440 kubelet[2389]: I0113 20:08:31.337390 2389 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:08:31.337911 kubelet[2389]: W0113 20:08:31.337609 2389 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:08:31.385837 kubelet[2389]: E0113 20:08:31.385783 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.394025 kubelet[2389]: E0113 20:08:31.393977 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:31.486571 kubelet[2389]: E0113 20:08:31.486514 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.586085 sudo[2248]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:31.588061 kubelet[2389]: E0113 20:08:31.587596 2389 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.609165 sshd[2247]: Connection closed by 139.178.68.195 port 36518 Jan 13 20:08:31.609925 sshd-session[2245]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:31.614889 systemd[1]: sshd@6-172.31.25.10:22-139.178.68.195:36518.service: Deactivated successfully. Jan 13 20:08:31.618458 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:08:31.621821 systemd-logind[1915]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:08:31.624046 systemd-logind[1915]: Removed session 7. Jan 13 20:08:31.688719 kubelet[2389]: E0113 20:08:31.688644 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.789374 kubelet[2389]: E0113 20:08:31.789308 2389 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.10\" not found" Jan 13 20:08:31.890728 kubelet[2389]: I0113 20:08:31.890438 2389 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:08:31.891435 containerd[1936]: time="2025-01-13T20:08:31.891368309Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:08:31.892065 kubelet[2389]: I0113 20:08:31.891739 2389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:08:32.394493 kubelet[2389]: I0113 20:08:32.394426 2389 apiserver.go:52] "Watching apiserver" Jan 13 20:08:32.395085 kubelet[2389]: E0113 20:08:32.394448 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:32.406600 kubelet[2389]: I0113 20:08:32.406468 2389 topology_manager.go:215] "Topology Admit Handler" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" podNamespace="kube-system" podName="cilium-fslrv" Jan 13 20:08:32.408087 kubelet[2389]: I0113 20:08:32.406966 2389 topology_manager.go:215] "Topology Admit Handler" podUID="484086bb-e963-4c57-a9f9-be0cc9d1be41" podNamespace="kube-system" podName="kube-proxy-czxrk" Jan 13 20:08:32.409877 kubelet[2389]: I0113 20:08:32.409804 2389 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420177 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-lib-modules\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420268 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-net\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420318 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9rs\" (UniqueName: 
\"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-kube-api-access-wx9rs\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420377 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cni-path\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420438 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-etc-cni-netd\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421057 kubelet[2389]: I0113 20:08:32.420482 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-hubble-tls\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420535 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/484086bb-e963-4c57-a9f9-be0cc9d1be41-lib-modules\") pod \"kube-proxy-czxrk\" (UID: \"484086bb-e963-4c57-a9f9-be0cc9d1be41\") " pod="kube-system/kube-proxy-czxrk" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420578 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/484086bb-e963-4c57-a9f9-be0cc9d1be41-xtables-lock\") pod \"kube-proxy-czxrk\" (UID: 
\"484086bb-e963-4c57-a9f9-be0cc9d1be41\") " pod="kube-system/kube-proxy-czxrk" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420630 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-run\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420687 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-hostproc\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420742 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-xtables-lock\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421504 kubelet[2389]: I0113 20:08:32.420806 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/498dd55e-0497-4b62-ade2-faf0414bf2e0-clustermesh-secrets\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421800 kubelet[2389]: I0113 20:08:32.420863 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/484086bb-e963-4c57-a9f9-be0cc9d1be41-kube-proxy\") pod \"kube-proxy-czxrk\" (UID: \"484086bb-e963-4c57-a9f9-be0cc9d1be41\") " pod="kube-system/kube-proxy-czxrk" Jan 13 20:08:32.421800 kubelet[2389]: I0113 20:08:32.420911 
2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-bpf-maps\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421800 kubelet[2389]: I0113 20:08:32.420955 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-cgroup\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421800 kubelet[2389]: I0113 20:08:32.420998 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-config-path\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.421800 kubelet[2389]: I0113 20:08:32.421099 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-kernel\") pod \"cilium-fslrv\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") " pod="kube-system/cilium-fslrv" Jan 13 20:08:32.422123 kubelet[2389]: I0113 20:08:32.421164 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2cf\" (UniqueName: \"kubernetes.io/projected/484086bb-e963-4c57-a9f9-be0cc9d1be41-kube-api-access-ww2cf\") pod \"kube-proxy-czxrk\" (UID: \"484086bb-e963-4c57-a9f9-be0cc9d1be41\") " pod="kube-system/kube-proxy-czxrk" Jan 13 20:08:32.422397 systemd[1]: Created slice kubepods-burstable-pod498dd55e_0497_4b62_ade2_faf0414bf2e0.slice - libcontainer container 
kubepods-burstable-pod498dd55e_0497_4b62_ade2_faf0414bf2e0.slice. Jan 13 20:08:32.437899 systemd[1]: Created slice kubepods-besteffort-pod484086bb_e963_4c57_a9f9_be0cc9d1be41.slice - libcontainer container kubepods-besteffort-pod484086bb_e963_4c57_a9f9_be0cc9d1be41.slice. Jan 13 20:08:32.737038 containerd[1936]: time="2025-01-13T20:08:32.736044161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fslrv,Uid:498dd55e-0497-4b62-ade2-faf0414bf2e0,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:32.752869 containerd[1936]: time="2025-01-13T20:08:32.752756453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czxrk,Uid:484086bb-e963-4c57-a9f9-be0cc9d1be41,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:33.369387 containerd[1936]: time="2025-01-13T20:08:33.369300220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:33.373648 containerd[1936]: time="2025-01-13T20:08:33.373574296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:08:33.375062 containerd[1936]: time="2025-01-13T20:08:33.374777452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:33.376528 containerd[1936]: time="2025-01-13T20:08:33.376468960Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:33.378744 containerd[1936]: time="2025-01-13T20:08:33.378665272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:08:33.384800 containerd[1936]: 
time="2025-01-13T20:08:33.384719284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:33.387068 containerd[1936]: time="2025-01-13T20:08:33.386514388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.629703ms" Jan 13 20:08:33.388776 containerd[1936]: time="2025-01-13T20:08:33.388708828Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.553067ms" Jan 13 20:08:33.395233 kubelet[2389]: E0113 20:08:33.395174 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:33.566680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998255366.mount: Deactivated successfully. Jan 13 20:08:33.677775 containerd[1936]: time="2025-01-13T20:08:33.676933613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:33.677775 containerd[1936]: time="2025-01-13T20:08:33.677109017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:33.677775 containerd[1936]: time="2025-01-13T20:08:33.677138825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:33.677775 containerd[1936]: time="2025-01-13T20:08:33.677308337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:33.681943 containerd[1936]: time="2025-01-13T20:08:33.681699569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:33.681943 containerd[1936]: time="2025-01-13T20:08:33.681846629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:33.681943 containerd[1936]: time="2025-01-13T20:08:33.681891809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:33.682858 containerd[1936]: time="2025-01-13T20:08:33.682128977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:33.840350 systemd[1]: Started cri-containerd-33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea.scope - libcontainer container 33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea. Jan 13 20:08:33.845201 systemd[1]: Started cri-containerd-4e94df8c2901179878626dfa87093826202c924a2c6517b580eff07902293d00.scope - libcontainer container 4e94df8c2901179878626dfa87093826202c924a2c6517b580eff07902293d00. 
Jan 13 20:08:33.907185 containerd[1936]: time="2025-01-13T20:08:33.906916687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fslrv,Uid:498dd55e-0497-4b62-ade2-faf0414bf2e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\"" Jan 13 20:08:33.912123 containerd[1936]: time="2025-01-13T20:08:33.911930251Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:08:33.917685 containerd[1936]: time="2025-01-13T20:08:33.917522635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czxrk,Uid:484086bb-e963-4c57-a9f9-be0cc9d1be41,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e94df8c2901179878626dfa87093826202c924a2c6517b580eff07902293d00\"" Jan 13 20:08:34.395625 kubelet[2389]: E0113 20:08:34.395570 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:35.396477 kubelet[2389]: E0113 20:08:35.396368 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:36.397379 kubelet[2389]: E0113 20:08:36.397319 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:37.398538 kubelet[2389]: E0113 20:08:37.398364 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:38.399963 kubelet[2389]: E0113 20:08:38.399880 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:39.400889 kubelet[2389]: E0113 20:08:39.400836 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:40.128349 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3068000973.mount: Deactivated successfully. Jan 13 20:08:40.401340 kubelet[2389]: E0113 20:08:40.401202 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:41.402163 kubelet[2389]: E0113 20:08:41.402119 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:42.403241 kubelet[2389]: E0113 20:08:42.403181 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:42.579549 containerd[1936]: time="2025-01-13T20:08:42.579447074Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:42.582394 containerd[1936]: time="2025-01-13T20:08:42.582312962Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651522" Jan 13 20:08:42.584118 containerd[1936]: time="2025-01-13T20:08:42.584000534Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:42.586834 containerd[1936]: time="2025-01-13T20:08:42.586567154Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.674570915s" Jan 13 20:08:42.586834 containerd[1936]: time="2025-01-13T20:08:42.586625678Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:08:42.587943 containerd[1936]: time="2025-01-13T20:08:42.587783882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:08:42.591365 containerd[1936]: time="2025-01-13T20:08:42.591284222Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:08:42.617325 containerd[1936]: time="2025-01-13T20:08:42.617267810Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\"" Jan 13 20:08:42.618542 containerd[1936]: time="2025-01-13T20:08:42.618419594Z" level=info msg="StartContainer for \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\"" Jan 13 20:08:42.678360 systemd[1]: Started cri-containerd-e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449.scope - libcontainer container e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449. Jan 13 20:08:42.730326 containerd[1936]: time="2025-01-13T20:08:42.728545538Z" level=info msg="StartContainer for \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\" returns successfully" Jan 13 20:08:42.749197 systemd[1]: cri-containerd-e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449.scope: Deactivated successfully. 
Jan 13 20:08:43.404228 kubelet[2389]: E0113 20:08:43.404167 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:43.604809 systemd[1]: run-containerd-runc-k8s.io-e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449-runc.cY1UBS.mount: Deactivated successfully. Jan 13 20:08:43.604961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449-rootfs.mount: Deactivated successfully. Jan 13 20:08:44.404894 kubelet[2389]: E0113 20:08:44.404791 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:44.445347 containerd[1936]: time="2025-01-13T20:08:44.445257243Z" level=info msg="shim disconnected" id=e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449 namespace=k8s.io Jan 13 20:08:44.445347 containerd[1936]: time="2025-01-13T20:08:44.445339131Z" level=warning msg="cleaning up after shim disconnected" id=e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449 namespace=k8s.io Jan 13 20:08:44.446118 containerd[1936]: time="2025-01-13T20:08:44.445422807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:44.464808 containerd[1936]: time="2025-01-13T20:08:44.464662251Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:08:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:08:44.657238 containerd[1936]: time="2025-01-13T20:08:44.656549380Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:08:44.698847 containerd[1936]: time="2025-01-13T20:08:44.698691844Z" level=info msg="CreateContainer within sandbox 
\"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\"" Jan 13 20:08:44.700046 containerd[1936]: time="2025-01-13T20:08:44.699797080Z" level=info msg="StartContainer for \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\"" Jan 13 20:08:44.786556 systemd[1]: Started cri-containerd-027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898.scope - libcontainer container 027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898. Jan 13 20:08:44.862423 containerd[1936]: time="2025-01-13T20:08:44.862346897Z" level=info msg="StartContainer for \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\" returns successfully" Jan 13 20:08:44.902034 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:44.902543 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:44.902665 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:44.914469 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:44.914886 systemd[1]: cri-containerd-027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898.scope: Deactivated successfully. Jan 13 20:08:44.968870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:44.998471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:45.052170 containerd[1936]: time="2025-01-13T20:08:45.051987986Z" level=info msg="shim disconnected" id=027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898 namespace=k8s.io Jan 13 20:08:45.052170 containerd[1936]: time="2025-01-13T20:08:45.052152446Z" level=warning msg="cleaning up after shim disconnected" id=027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898 namespace=k8s.io Jan 13 20:08:45.052170 containerd[1936]: time="2025-01-13T20:08:45.052176326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:45.405352 kubelet[2389]: E0113 20:08:45.405028 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:45.642701 containerd[1936]: time="2025-01-13T20:08:45.642209705Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:08:45.671918 containerd[1936]: time="2025-01-13T20:08:45.671397113Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\"" Jan 13 20:08:45.673158 containerd[1936]: time="2025-01-13T20:08:45.673110245Z" level=info msg="StartContainer for \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\"" Jan 13 20:08:45.680514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768232511.mount: Deactivated successfully. Jan 13 20:08:45.752310 systemd[1]: run-containerd-runc-k8s.io-f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78-runc.Q7gRQP.mount: Deactivated successfully. 
Jan 13 20:08:45.765335 systemd[1]: Started cri-containerd-f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78.scope - libcontainer container f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78. Jan 13 20:08:45.842427 containerd[1936]: time="2025-01-13T20:08:45.842344794Z" level=info msg="StartContainer for \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\" returns successfully" Jan 13 20:08:45.845348 systemd[1]: cri-containerd-f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78.scope: Deactivated successfully. Jan 13 20:08:45.982051 containerd[1936]: time="2025-01-13T20:08:45.981681283Z" level=info msg="shim disconnected" id=f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78 namespace=k8s.io Jan 13 20:08:45.983615 containerd[1936]: time="2025-01-13T20:08:45.982494259Z" level=warning msg="cleaning up after shim disconnected" id=f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78 namespace=k8s.io Jan 13 20:08:45.983615 containerd[1936]: time="2025-01-13T20:08:45.983205679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:46.182131 containerd[1936]: time="2025-01-13T20:08:46.181579420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:46.183042 containerd[1936]: time="2025-01-13T20:08:46.182961628Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 20:08:46.184158 containerd[1936]: time="2025-01-13T20:08:46.184076080Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:46.187631 containerd[1936]: time="2025-01-13T20:08:46.187544704Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:46.189057 containerd[1936]: time="2025-01-13T20:08:46.188984344Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 3.601139502s" Jan 13 20:08:46.189644 containerd[1936]: time="2025-01-13T20:08:46.189060616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:08:46.192703 containerd[1936]: time="2025-01-13T20:08:46.192654868Z" level=info msg="CreateContainer within sandbox \"4e94df8c2901179878626dfa87093826202c924a2c6517b580eff07902293d00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:08:46.218437 containerd[1936]: time="2025-01-13T20:08:46.218254804Z" level=info msg="CreateContainer within sandbox \"4e94df8c2901179878626dfa87093826202c924a2c6517b580eff07902293d00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81fa7665b8975b543d0130caaf0916720c8de6c6c10ec2d752e8170dcc93059a\"" Jan 13 20:08:46.221067 containerd[1936]: time="2025-01-13T20:08:46.219506884Z" level=info msg="StartContainer for \"81fa7665b8975b543d0130caaf0916720c8de6c6c10ec2d752e8170dcc93059a\"" Jan 13 20:08:46.268333 systemd[1]: Started cri-containerd-81fa7665b8975b543d0130caaf0916720c8de6c6c10ec2d752e8170dcc93059a.scope - libcontainer container 81fa7665b8975b543d0130caaf0916720c8de6c6c10ec2d752e8170dcc93059a. 
Jan 13 20:08:46.329760 containerd[1936]: time="2025-01-13T20:08:46.329399092Z" level=info msg="StartContainer for \"81fa7665b8975b543d0130caaf0916720c8de6c6c10ec2d752e8170dcc93059a\" returns successfully" Jan 13 20:08:46.406256 kubelet[2389]: E0113 20:08:46.406191 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:46.662476 containerd[1936]: time="2025-01-13T20:08:46.660479574Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:08:46.681382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78-rootfs.mount: Deactivated successfully. Jan 13 20:08:46.698658 kubelet[2389]: I0113 20:08:46.698445 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czxrk" podStartSLOduration=4.428628533 podStartE2EDuration="16.69837399s" podCreationTimestamp="2025-01-13 20:08:30 +0000 UTC" firstStartedPulling="2025-01-13 20:08:33.920128255 +0000 UTC m=+4.861277342" lastFinishedPulling="2025-01-13 20:08:46.1898737 +0000 UTC m=+17.131022799" observedRunningTime="2025-01-13 20:08:46.660067662 +0000 UTC m=+17.601216797" watchObservedRunningTime="2025-01-13 20:08:46.69837399 +0000 UTC m=+17.639523089" Jan 13 20:08:46.702651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674953117.mount: Deactivated successfully. 
Jan 13 20:08:46.705202 containerd[1936]: time="2025-01-13T20:08:46.704568186Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\"" Jan 13 20:08:46.706659 containerd[1936]: time="2025-01-13T20:08:46.706559082Z" level=info msg="StartContainer for \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\"" Jan 13 20:08:46.792398 systemd[1]: Started cri-containerd-86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56.scope - libcontainer container 86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56. Jan 13 20:08:46.845683 systemd[1]: cri-containerd-86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56.scope: Deactivated successfully. Jan 13 20:08:46.849351 containerd[1936]: time="2025-01-13T20:08:46.849095551Z" level=info msg="StartContainer for \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\" returns successfully" Jan 13 20:08:46.935579 containerd[1936]: time="2025-01-13T20:08:46.935372371Z" level=info msg="shim disconnected" id=86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56 namespace=k8s.io Jan 13 20:08:46.935579 containerd[1936]: time="2025-01-13T20:08:46.935476003Z" level=warning msg="cleaning up after shim disconnected" id=86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56 namespace=k8s.io Jan 13 20:08:46.935579 containerd[1936]: time="2025-01-13T20:08:46.935501119Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:47.406864 kubelet[2389]: E0113 20:08:47.406805 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:47.668645 containerd[1936]: time="2025-01-13T20:08:47.666527371Z" level=info msg="CreateContainer within sandbox 
\"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:08:47.680279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56-rootfs.mount: Deactivated successfully. Jan 13 20:08:47.706894 containerd[1936]: time="2025-01-13T20:08:47.706814983Z" level=info msg="CreateContainer within sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\"" Jan 13 20:08:47.707946 containerd[1936]: time="2025-01-13T20:08:47.707889487Z" level=info msg="StartContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\"" Jan 13 20:08:47.772547 systemd[1]: Started cri-containerd-89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92.scope - libcontainer container 89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92. 
Jan 13 20:08:47.827739 containerd[1936]: time="2025-01-13T20:08:47.827478656Z" level=info msg="StartContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" returns successfully" Jan 13 20:08:48.004607 kubelet[2389]: I0113 20:08:48.004466 2389 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:08:48.407511 kubelet[2389]: E0113 20:08:48.407330 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:48.746145 kernel: Initializing XFRM netlink socket Jan 13 20:08:49.354672 kubelet[2389]: I0113 20:08:49.354587 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fslrv" podStartSLOduration=10.678320784 podStartE2EDuration="19.354533443s" podCreationTimestamp="2025-01-13 20:08:30 +0000 UTC" firstStartedPulling="2025-01-13 20:08:33.910996183 +0000 UTC m=+4.852145282" lastFinishedPulling="2025-01-13 20:08:42.587208842 +0000 UTC m=+13.528357941" observedRunningTime="2025-01-13 20:08:48.72817964 +0000 UTC m=+19.669328763" watchObservedRunningTime="2025-01-13 20:08:49.354533443 +0000 UTC m=+20.295682542" Jan 13 20:08:49.354966 kubelet[2389]: I0113 20:08:49.354892 2389 topology_manager.go:215] "Topology Admit Handler" podUID="d12b5312-1ced-4b54-b887-6ed93797b91d" podNamespace="default" podName="nginx-deployment-6d5f899847-94mhq" Jan 13 20:08:49.364624 systemd[1]: Created slice kubepods-besteffort-podd12b5312_1ced_4b54_b887_6ed93797b91d.slice - libcontainer container kubepods-besteffort-podd12b5312_1ced_4b54_b887_6ed93797b91d.slice. 
Jan 13 20:08:49.408569 kubelet[2389]: E0113 20:08:49.408479 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:49.466762 kubelet[2389]: I0113 20:08:49.466708 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6bc\" (UniqueName: \"kubernetes.io/projected/d12b5312-1ced-4b54-b887-6ed93797b91d-kube-api-access-mv6bc\") pod \"nginx-deployment-6d5f899847-94mhq\" (UID: \"d12b5312-1ced-4b54-b887-6ed93797b91d\") " pod="default/nginx-deployment-6d5f899847-94mhq" Jan 13 20:08:49.670461 containerd[1936]: time="2025-01-13T20:08:49.670229625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-94mhq,Uid:d12b5312-1ced-4b54-b887-6ed93797b91d,Namespace:default,Attempt:0,}" Jan 13 20:08:50.393656 kubelet[2389]: E0113 20:08:50.393583 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:50.408754 kubelet[2389]: E0113 20:08:50.408678 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:50.563537 systemd-networkd[1839]: cilium_host: Link UP Jan 13 20:08:50.563912 systemd-networkd[1839]: cilium_net: Link UP Jan 13 20:08:50.564759 systemd-networkd[1839]: cilium_net: Gained carrier Jan 13 20:08:50.566196 systemd-networkd[1839]: cilium_host: Gained carrier Jan 13 20:08:50.568181 (udev-worker)[2844]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:50.571212 (udev-worker)[2842]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:50.681294 systemd-networkd[1839]: cilium_net: Gained IPv6LL Jan 13 20:08:50.750086 (udev-worker)[3104]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:08:50.761726 systemd-networkd[1839]: cilium_vxlan: Link UP Jan 13 20:08:50.761745 systemd-networkd[1839]: cilium_vxlan: Gained carrier Jan 13 20:08:51.255037 kernel: NET: Registered PF_ALG protocol family Jan 13 20:08:51.409713 kubelet[2389]: E0113 20:08:51.409652 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:51.497456 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:08:51.593421 systemd-networkd[1839]: cilium_host: Gained IPv6LL Jan 13 20:08:51.849286 systemd-networkd[1839]: cilium_vxlan: Gained IPv6LL Jan 13 20:08:52.410805 kubelet[2389]: E0113 20:08:52.410736 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:52.523187 systemd-networkd[1839]: lxc_health: Link UP Jan 13 20:08:52.532909 systemd-networkd[1839]: lxc_health: Gained carrier Jan 13 20:08:53.236266 systemd-networkd[1839]: lxca63ed1f83ba8: Link UP Jan 13 20:08:53.243063 kernel: eth0: renamed from tmp60ced Jan 13 20:08:53.253681 systemd-networkd[1839]: lxca63ed1f83ba8: Gained carrier Jan 13 20:08:53.411833 kubelet[2389]: E0113 20:08:53.411761 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:53.577681 systemd-networkd[1839]: lxc_health: Gained IPv6LL Jan 13 20:08:54.291800 kubelet[2389]: I0113 20:08:54.291633 2389 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:08:54.412223 kubelet[2389]: E0113 20:08:54.412130 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:55.241332 systemd-networkd[1839]: lxca63ed1f83ba8: Gained IPv6LL Jan 13 20:08:55.413080 kubelet[2389]: E0113 20:08:55.412964 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Jan 13 20:08:56.414289 kubelet[2389]: E0113 20:08:56.414215 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:57.414776 kubelet[2389]: E0113 20:08:57.414724 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:57.653828 ntpd[1909]: Listen normally on 7 cilium_host 192.168.1.87:123 Jan 13 20:08:57.654390 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 7 cilium_host 192.168.1.87:123 Jan 13 20:08:57.655156 ntpd[1909]: Listen normally on 8 cilium_net [fe80::bcce:5eff:fe69:5235%3]:123 Jan 13 20:08:57.655277 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 8 cilium_net [fe80::bcce:5eff:fe69:5235%3]:123 Jan 13 20:08:57.655428 ntpd[1909]: Listen normally on 9 cilium_host [fe80::1873:aeff:fe8f:9669%4]:123 Jan 13 20:08:57.655564 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 9 cilium_host [fe80::1873:aeff:fe8f:9669%4]:123 Jan 13 20:08:57.655685 ntpd[1909]: Listen normally on 10 cilium_vxlan [fe80::c419:bcff:fea2:7b53%5]:123 Jan 13 20:08:57.658106 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 10 cilium_vxlan [fe80::c419:bcff:fea2:7b53%5]:123 Jan 13 20:08:57.658344 ntpd[1909]: Listen normally on 11 lxc_health [fe80::8483:aaff:fe07:d303%7]:123 Jan 13 20:08:57.658650 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 11 lxc_health [fe80::8483:aaff:fe07:d303%7]:123 Jan 13 20:08:57.658650 ntpd[1909]: 13 Jan 20:08:57 ntpd[1909]: Listen normally on 12 lxca63ed1f83ba8 [fe80::d05b:21ff:fe02:2e1c%9]:123 Jan 13 20:08:57.658420 ntpd[1909]: Listen normally on 12 lxca63ed1f83ba8 [fe80::d05b:21ff:fe02:2e1c%9]:123 Jan 13 20:08:58.416326 kubelet[2389]: E0113 20:08:58.416259 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:08:59.417346 kubelet[2389]: E0113 20:08:59.417275 2389 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:00.418280 kubelet[2389]: E0113 20:09:00.418206 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:01.271153 containerd[1936]: time="2025-01-13T20:09:01.270979650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:01.271796 containerd[1936]: time="2025-01-13T20:09:01.271327434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:01.272699 containerd[1936]: time="2025-01-13T20:09:01.272334078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:01.272699 containerd[1936]: time="2025-01-13T20:09:01.272528034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:01.312285 systemd[1]: Started cri-containerd-60ced13d9d3e183cf6ba035d34450b85967ff2d656e1f2cdb317201b4086c017.scope - libcontainer container 60ced13d9d3e183cf6ba035d34450b85967ff2d656e1f2cdb317201b4086c017. 
Jan 13 20:09:01.373490 containerd[1936]: time="2025-01-13T20:09:01.373413415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-94mhq,Uid:d12b5312-1ced-4b54-b887-6ed93797b91d,Namespace:default,Attempt:0,} returns sandbox id \"60ced13d9d3e183cf6ba035d34450b85967ff2d656e1f2cdb317201b4086c017\"" Jan 13 20:09:01.376774 containerd[1936]: time="2025-01-13T20:09:01.376424323Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:09:01.418615 kubelet[2389]: E0113 20:09:01.418536 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:02.419806 kubelet[2389]: E0113 20:09:02.419741 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:03.420279 kubelet[2389]: E0113 20:09:03.420180 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:04.420778 kubelet[2389]: E0113 20:09:04.420678 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:04.581395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount615475917.mount: Deactivated successfully. Jan 13 20:09:05.421945 kubelet[2389]: E0113 20:09:05.421801 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:05.510137 update_engine[1916]: I20250113 20:09:05.510058 1916 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:09:05.616073 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3512) Jan 13 20:09:06.031074 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3512) Jan 13 20:09:06.277966 containerd[1936]: time="2025-01-13T20:09:06.275995895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:06.277966 containerd[1936]: time="2025-01-13T20:09:06.277872959Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 20:09:06.278855 containerd[1936]: time="2025-01-13T20:09:06.278806787Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:06.289299 containerd[1936]: time="2025-01-13T20:09:06.289143443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:06.297504 containerd[1936]: time="2025-01-13T20:09:06.296124287Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 4.919637612s" Jan 13 20:09:06.297504 containerd[1936]: time="2025-01-13T20:09:06.296197307Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:09:06.302447 containerd[1936]: time="2025-01-13T20:09:06.302299115Z" level=info msg="CreateContainer within sandbox 
\"60ced13d9d3e183cf6ba035d34450b85967ff2d656e1f2cdb317201b4086c017\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:09:06.325000 containerd[1936]: time="2025-01-13T20:09:06.324922644Z" level=info msg="CreateContainer within sandbox \"60ced13d9d3e183cf6ba035d34450b85967ff2d656e1f2cdb317201b4086c017\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7007e0069b7a5b07aa83e906d15d6afe0715395437e280305311dc0bd06db8e2\"" Jan 13 20:09:06.327235 containerd[1936]: time="2025-01-13T20:09:06.325803768Z" level=info msg="StartContainer for \"7007e0069b7a5b07aa83e906d15d6afe0715395437e280305311dc0bd06db8e2\"" Jan 13 20:09:06.379310 systemd[1]: Started cri-containerd-7007e0069b7a5b07aa83e906d15d6afe0715395437e280305311dc0bd06db8e2.scope - libcontainer container 7007e0069b7a5b07aa83e906d15d6afe0715395437e280305311dc0bd06db8e2. Jan 13 20:09:06.422435 kubelet[2389]: E0113 20:09:06.422382 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:06.424530 containerd[1936]: time="2025-01-13T20:09:06.424398108Z" level=info msg="StartContainer for \"7007e0069b7a5b07aa83e906d15d6afe0715395437e280305311dc0bd06db8e2\" returns successfully" Jan 13 20:09:06.743808 kubelet[2389]: I0113 20:09:06.743739 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-94mhq" podStartSLOduration=12.820970326 podStartE2EDuration="17.74368385s" podCreationTimestamp="2025-01-13 20:08:49 +0000 UTC" firstStartedPulling="2025-01-13 20:09:01.375176479 +0000 UTC m=+32.316325578" lastFinishedPulling="2025-01-13 20:09:06.297890003 +0000 UTC m=+37.239039102" observedRunningTime="2025-01-13 20:09:06.743285474 +0000 UTC m=+37.684434597" watchObservedRunningTime="2025-01-13 20:09:06.74368385 +0000 UTC m=+37.684832961" Jan 13 20:09:07.423892 kubelet[2389]: E0113 20:09:07.423805 2389 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:08.424719 kubelet[2389]: E0113 20:09:08.424655 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:09.425225 kubelet[2389]: E0113 20:09:09.425164 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:10.392938 kubelet[2389]: E0113 20:09:10.392875 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:10.426359 kubelet[2389]: E0113 20:09:10.426300 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:11.426906 kubelet[2389]: E0113 20:09:11.426833 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:12.427408 kubelet[2389]: E0113 20:09:12.427347 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:13.428191 kubelet[2389]: E0113 20:09:13.428132 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:14.428695 kubelet[2389]: E0113 20:09:14.428623 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:15.257897 kubelet[2389]: I0113 20:09:15.257846 2389 topology_manager.go:215] "Topology Admit Handler" podUID="4e1f30eb-17bb-42a4-84ca-317da9e34873" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:09:15.268271 systemd[1]: Created slice kubepods-besteffort-pod4e1f30eb_17bb_42a4_84ca_317da9e34873.slice - libcontainer container kubepods-besteffort-pod4e1f30eb_17bb_42a4_84ca_317da9e34873.slice. 
Jan 13 20:09:15.428859 kubelet[2389]: E0113 20:09:15.428807 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:15.435104 kubelet[2389]: I0113 20:09:15.435068 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4e1f30eb-17bb-42a4-84ca-317da9e34873-data\") pod \"nfs-server-provisioner-0\" (UID: \"4e1f30eb-17bb-42a4-84ca-317da9e34873\") " pod="default/nfs-server-provisioner-0" Jan 13 20:09:15.435421 kubelet[2389]: I0113 20:09:15.435358 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd6pf\" (UniqueName: \"kubernetes.io/projected/4e1f30eb-17bb-42a4-84ca-317da9e34873-kube-api-access-fd6pf\") pod \"nfs-server-provisioner-0\" (UID: \"4e1f30eb-17bb-42a4-84ca-317da9e34873\") " pod="default/nfs-server-provisioner-0" Jan 13 20:09:15.575081 containerd[1936]: time="2025-01-13T20:09:15.574887322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4e1f30eb-17bb-42a4-84ca-317da9e34873,Namespace:default,Attempt:0,}" Jan 13 20:09:15.624490 (udev-worker)[3760]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:15.625701 (udev-worker)[3759]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:15.627001 systemd-networkd[1839]: lxcac5eceaa5d82: Link UP Jan 13 20:09:15.639500 kernel: eth0: renamed from tmpb8418 Jan 13 20:09:15.649161 systemd-networkd[1839]: lxcac5eceaa5d82: Gained carrier Jan 13 20:09:15.970513 containerd[1936]: time="2025-01-13T20:09:15.970179755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:15.970513 containerd[1936]: time="2025-01-13T20:09:15.970288307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:15.970513 containerd[1936]: time="2025-01-13T20:09:15.970329983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:15.970913 containerd[1936]: time="2025-01-13T20:09:15.970469279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:16.008345 systemd[1]: Started cri-containerd-b8418d820f2a5d0c9b20a9640037b488d372bccd6946ce0ac0dcf3f112d33c43.scope - libcontainer container b8418d820f2a5d0c9b20a9640037b488d372bccd6946ce0ac0dcf3f112d33c43. Jan 13 20:09:16.067512 containerd[1936]: time="2025-01-13T20:09:16.067460852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4e1f30eb-17bb-42a4-84ca-317da9e34873,Namespace:default,Attempt:0,} returns sandbox id \"b8418d820f2a5d0c9b20a9640037b488d372bccd6946ce0ac0dcf3f112d33c43\"" Jan 13 20:09:16.070999 containerd[1936]: time="2025-01-13T20:09:16.070680644Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:09:16.429135 kubelet[2389]: E0113 20:09:16.428973 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:16.938452 systemd-networkd[1839]: lxcac5eceaa5d82: Gained IPv6LL Jan 13 20:09:17.429377 kubelet[2389]: E0113 20:09:17.429312 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:18.430614 kubelet[2389]: E0113 20:09:18.430411 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:18.547724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827049030.mount: Deactivated successfully. 
Jan 13 20:09:19.431064 kubelet[2389]: E0113 20:09:19.430887 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:19.654114 ntpd[1909]: Listen normally on 13 lxcac5eceaa5d82 [fe80::14fe:8cff:fed8:a746%11]:123 Jan 13 20:09:19.655631 ntpd[1909]: 13 Jan 20:09:19 ntpd[1909]: Listen normally on 13 lxcac5eceaa5d82 [fe80::14fe:8cff:fed8:a746%11]:123 Jan 13 20:09:20.431845 kubelet[2389]: E0113 20:09:20.431753 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:21.426044 containerd[1936]: time="2025-01-13T20:09:21.425811267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:21.427814 containerd[1936]: time="2025-01-13T20:09:21.427715415Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 13 20:09:21.428491 containerd[1936]: time="2025-01-13T20:09:21.428401455Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:21.432202 kubelet[2389]: E0113 20:09:21.432138 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:21.433638 containerd[1936]: time="2025-01-13T20:09:21.433536039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:21.435906 containerd[1936]: time="2025-01-13T20:09:21.435709179Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.364966519s" Jan 13 20:09:21.435906 containerd[1936]: time="2025-01-13T20:09:21.435765207Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 20:09:21.440325 containerd[1936]: time="2025-01-13T20:09:21.440255967Z" level=info msg="CreateContainer within sandbox \"b8418d820f2a5d0c9b20a9640037b488d372bccd6946ce0ac0dcf3f112d33c43\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:09:21.460272 containerd[1936]: time="2025-01-13T20:09:21.460208019Z" level=info msg="CreateContainer within sandbox \"b8418d820f2a5d0c9b20a9640037b488d372bccd6946ce0ac0dcf3f112d33c43\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4ed36e772fd5a0c99d0aeb1016bc54bfbdcb0d985d3fe9058139af764a06e69c\"" Jan 13 20:09:21.461214 containerd[1936]: time="2025-01-13T20:09:21.461140023Z" level=info msg="StartContainer for \"4ed36e772fd5a0c99d0aeb1016bc54bfbdcb0d985d3fe9058139af764a06e69c\"" Jan 13 20:09:21.512331 systemd[1]: Started cri-containerd-4ed36e772fd5a0c99d0aeb1016bc54bfbdcb0d985d3fe9058139af764a06e69c.scope - libcontainer container 4ed36e772fd5a0c99d0aeb1016bc54bfbdcb0d985d3fe9058139af764a06e69c. 
Jan 13 20:09:21.563968 containerd[1936]: time="2025-01-13T20:09:21.563906067Z" level=info msg="StartContainer for \"4ed36e772fd5a0c99d0aeb1016bc54bfbdcb0d985d3fe9058139af764a06e69c\" returns successfully"
Jan 13 20:09:22.432875 kubelet[2389]: E0113 20:09:22.432810 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:23.433297 kubelet[2389]: E0113 20:09:23.433233 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:24.433886 kubelet[2389]: E0113 20:09:24.433826 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:25.434482 kubelet[2389]: E0113 20:09:25.434410 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:26.434949 kubelet[2389]: E0113 20:09:26.434877 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:27.436069 kubelet[2389]: E0113 20:09:27.435979 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:28.436915 kubelet[2389]: E0113 20:09:28.436858 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:29.437402 kubelet[2389]: E0113 20:09:29.437356 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:30.393096 kubelet[2389]: E0113 20:09:30.393001 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:30.439173 kubelet[2389]: E0113 20:09:30.439119 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:31.439919 kubelet[2389]: E0113 20:09:31.439850 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:31.895823 kubelet[2389]: I0113 20:09:31.895756 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.52824926 podStartE2EDuration="16.895696491s" podCreationTimestamp="2025-01-13 20:09:15 +0000 UTC" firstStartedPulling="2025-01-13 20:09:16.069840572 +0000 UTC m=+47.010989683" lastFinishedPulling="2025-01-13 20:09:21.437287803 +0000 UTC m=+52.378436914" observedRunningTime="2025-01-13 20:09:21.802332352 +0000 UTC m=+52.743481511" watchObservedRunningTime="2025-01-13 20:09:31.895696491 +0000 UTC m=+62.836845614"
Jan 13 20:09:31.896285 kubelet[2389]: I0113 20:09:31.896228 2389 topology_manager.go:215] "Topology Admit Handler" podUID="09994853-1554-475e-a6b5-162abb896682" podNamespace="default" podName="test-pod-1"
Jan 13 20:09:31.907334 systemd[1]: Created slice kubepods-besteffort-pod09994853_1554_475e_a6b5_162abb896682.slice - libcontainer container kubepods-besteffort-pod09994853_1554_475e_a6b5_162abb896682.slice.
Jan 13 20:09:32.035929 kubelet[2389]: I0113 20:09:32.035863 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmgx\" (UniqueName: \"kubernetes.io/projected/09994853-1554-475e-a6b5-162abb896682-kube-api-access-rdmgx\") pod \"test-pod-1\" (UID: \"09994853-1554-475e-a6b5-162abb896682\") " pod="default/test-pod-1"
Jan 13 20:09:32.036107 kubelet[2389]: I0113 20:09:32.035956 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc0df8e0-1599-48a7-b80f-885c289f7a52\" (UniqueName: \"kubernetes.io/nfs/09994853-1554-475e-a6b5-162abb896682-pvc-bc0df8e0-1599-48a7-b80f-885c289f7a52\") pod \"test-pod-1\" (UID: \"09994853-1554-475e-a6b5-162abb896682\") " pod="default/test-pod-1"
Jan 13 20:09:32.171121 kernel: FS-Cache: Loaded
Jan 13 20:09:32.215297 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:09:32.215428 kernel: RPC: Registered udp transport module.
Jan 13 20:09:32.215470 kernel: RPC: Registered tcp transport module.
Jan 13 20:09:32.216202 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:09:32.217069 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:09:32.440634 kubelet[2389]: E0113 20:09:32.440562 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:32.545289 kernel: NFS: Registering the id_resolver key type
Jan 13 20:09:32.545438 kernel: Key type id_resolver registered
Jan 13 20:09:32.545483 kernel: Key type id_legacy registered
Jan 13 20:09:32.583521 nfsidmap[3941]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:09:32.589187 nfsidmap[3942]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 20:09:32.813994 containerd[1936]: time="2025-01-13T20:09:32.813871479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09994853-1554-475e-a6b5-162abb896682,Namespace:default,Attempt:0,}"
Jan 13 20:09:32.862634 systemd-networkd[1839]: lxcead76f2d7616: Link UP
Jan 13 20:09:32.869134 kernel: eth0: renamed from tmp871e1
Jan 13 20:09:32.876605 (udev-worker)[3933]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:32.879282 systemd-networkd[1839]: lxcead76f2d7616: Gained carrier
Jan 13 20:09:33.193367 containerd[1936]: time="2025-01-13T20:09:33.192841021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:33.193705 containerd[1936]: time="2025-01-13T20:09:33.192930025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:33.193705 containerd[1936]: time="2025-01-13T20:09:33.192963265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:33.193705 containerd[1936]: time="2025-01-13T20:09:33.193137481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:33.232375 systemd[1]: Started cri-containerd-871e15e8e04f756ce6ba96f886c606cfee503788ec2e95347d7d711d89771b15.scope - libcontainer container 871e15e8e04f756ce6ba96f886c606cfee503788ec2e95347d7d711d89771b15.
Jan 13 20:09:33.292406 containerd[1936]: time="2025-01-13T20:09:33.292334582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:09994853-1554-475e-a6b5-162abb896682,Namespace:default,Attempt:0,} returns sandbox id \"871e15e8e04f756ce6ba96f886c606cfee503788ec2e95347d7d711d89771b15\""
Jan 13 20:09:33.295912 containerd[1936]: time="2025-01-13T20:09:33.295859150Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:09:33.441795 kubelet[2389]: E0113 20:09:33.441729 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:33.653577 containerd[1936]: time="2025-01-13T20:09:33.653072415Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:33.654969 containerd[1936]: time="2025-01-13T20:09:33.654882603Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:09:33.660726 containerd[1936]: time="2025-01-13T20:09:33.660476907Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 364.561105ms"
Jan 13 20:09:33.660726 containerd[1936]: time="2025-01-13T20:09:33.660531471Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:09:33.663387 containerd[1936]: time="2025-01-13T20:09:33.663319635Z" level=info msg="CreateContainer within sandbox \"871e15e8e04f756ce6ba96f886c606cfee503788ec2e95347d7d711d89771b15\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:09:33.702462 containerd[1936]: time="2025-01-13T20:09:33.702315148Z" level=info msg="CreateContainer within sandbox \"871e15e8e04f756ce6ba96f886c606cfee503788ec2e95347d7d711d89771b15\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5a2ba3ab3dcc637e8a9616e6a1e9e21a42bad0a55c6ba0885c74ac756d6da70b\""
Jan 13 20:09:33.704153 containerd[1936]: time="2025-01-13T20:09:33.703689460Z" level=info msg="StartContainer for \"5a2ba3ab3dcc637e8a9616e6a1e9e21a42bad0a55c6ba0885c74ac756d6da70b\""
Jan 13 20:09:33.747337 systemd[1]: Started cri-containerd-5a2ba3ab3dcc637e8a9616e6a1e9e21a42bad0a55c6ba0885c74ac756d6da70b.scope - libcontainer container 5a2ba3ab3dcc637e8a9616e6a1e9e21a42bad0a55c6ba0885c74ac756d6da70b.
Jan 13 20:09:33.793217 containerd[1936]: time="2025-01-13T20:09:33.793113796Z" level=info msg="StartContainer for \"5a2ba3ab3dcc637e8a9616e6a1e9e21a42bad0a55c6ba0885c74ac756d6da70b\" returns successfully"
Jan 13 20:09:33.836046 kubelet[2389]: I0113 20:09:33.834477 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.468533859 podStartE2EDuration="18.8344198s" podCreationTimestamp="2025-01-13 20:09:15 +0000 UTC" firstStartedPulling="2025-01-13 20:09:33.29496077 +0000 UTC m=+64.236109869" lastFinishedPulling="2025-01-13 20:09:33.660846699 +0000 UTC m=+64.601995810" observedRunningTime="2025-01-13 20:09:33.829692928 +0000 UTC m=+64.770842027" watchObservedRunningTime="2025-01-13 20:09:33.8344198 +0000 UTC m=+64.775568923"
Jan 13 20:09:33.961410 systemd-networkd[1839]: lxcead76f2d7616: Gained IPv6LL
Jan 13 20:09:34.442587 kubelet[2389]: E0113 20:09:34.442436 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:35.442810 kubelet[2389]: E0113 20:09:35.442740 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:36.443495 kubelet[2389]: E0113 20:09:36.443423 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:36.653913 ntpd[1909]: Listen normally on 14 lxcead76f2d7616 [fe80::a495:3bff:fe59:970%13]:123
Jan 13 20:09:36.654409 ntpd[1909]: 13 Jan 20:09:36 ntpd[1909]: Listen normally on 14 lxcead76f2d7616 [fe80::a495:3bff:fe59:970%13]:123
Jan 13 20:09:37.444243 kubelet[2389]: E0113 20:09:37.444181 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:38.444829 kubelet[2389]: E0113 20:09:38.444755 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:39.445524 kubelet[2389]: E0113 20:09:39.445466 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:40.445715 kubelet[2389]: E0113 20:09:40.445652 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:41.446453 kubelet[2389]: E0113 20:09:41.446394 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:41.496207 containerd[1936]: time="2025-01-13T20:09:41.496097770Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:09:41.508216 containerd[1936]: time="2025-01-13T20:09:41.508162090Z" level=info msg="StopContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" with timeout 2 (s)"
Jan 13 20:09:41.508823 containerd[1936]: time="2025-01-13T20:09:41.508761058Z" level=info msg="Stop container \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" with signal terminated"
Jan 13 20:09:41.521164 systemd-networkd[1839]: lxc_health: Link DOWN
Jan 13 20:09:41.521180 systemd-networkd[1839]: lxc_health: Lost carrier
Jan 13 20:09:41.540723 systemd[1]: cri-containerd-89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92.scope: Deactivated successfully.
Jan 13 20:09:41.541804 systemd[1]: cri-containerd-89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92.scope: Consumed 14.157s CPU time.
Jan 13 20:09:41.579790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92-rootfs.mount: Deactivated successfully.
Jan 13 20:09:41.853972 containerd[1936]: time="2025-01-13T20:09:41.853646064Z" level=info msg="shim disconnected" id=89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92 namespace=k8s.io
Jan 13 20:09:41.853972 containerd[1936]: time="2025-01-13T20:09:41.853721436Z" level=warning msg="cleaning up after shim disconnected" id=89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92 namespace=k8s.io
Jan 13 20:09:41.853972 containerd[1936]: time="2025-01-13T20:09:41.853741596Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:41.878196 containerd[1936]: time="2025-01-13T20:09:41.877923120Z" level=info msg="StopContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" returns successfully"
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.878946252Z" level=info msg="StopPodSandbox for \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\""
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.879001584Z" level=info msg="Container to stop \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.879046836Z" level=info msg="Container to stop \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.879068544Z" level=info msg="Container to stop \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.879090228Z" level=info msg="Container to stop \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:41.879280 containerd[1936]: time="2025-01-13T20:09:41.879110280Z" level=info msg="Container to stop \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:41.883474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea-shm.mount: Deactivated successfully.
Jan 13 20:09:41.893454 systemd[1]: cri-containerd-33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea.scope: Deactivated successfully.
Jan 13 20:09:41.931240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea-rootfs.mount: Deactivated successfully.
Jan 13 20:09:41.934109 containerd[1936]: time="2025-01-13T20:09:41.933967188Z" level=info msg="shim disconnected" id=33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea namespace=k8s.io
Jan 13 20:09:41.934109 containerd[1936]: time="2025-01-13T20:09:41.934104468Z" level=warning msg="cleaning up after shim disconnected" id=33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea namespace=k8s.io
Jan 13 20:09:41.934389 containerd[1936]: time="2025-01-13T20:09:41.934128324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:41.955199 containerd[1936]: time="2025-01-13T20:09:41.955086085Z" level=info msg="TearDown network for sandbox \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" successfully"
Jan 13 20:09:41.955199 containerd[1936]: time="2025-01-13T20:09:41.955179337Z" level=info msg="StopPodSandbox for \"33c38eb7c747e9cbb2191fdd8f8d8f1298d66fe54d75b0be057d5f5c83027eea\" returns successfully"
Jan 13 20:09:42.092325 kubelet[2389]: I0113 20:09:42.092256 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-cgroup\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092508 kubelet[2389]: I0113 20:09:42.092341 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-config-path\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092508 kubelet[2389]: I0113 20:09:42.092388 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-hostproc\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092508 kubelet[2389]: I0113 20:09:42.092428 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-xtables-lock\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092508 kubelet[2389]: I0113 20:09:42.092473 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-hubble-tls\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092513 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-net\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092556 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx9rs\" (UniqueName: \"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-kube-api-access-wx9rs\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092599 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-etc-cni-netd\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092640 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-run\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092685 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/498dd55e-0497-4b62-ade2-faf0414bf2e0-clustermesh-secrets\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.092728 kubelet[2389]: I0113 20:09:42.092723 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-bpf-maps\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.093066 kubelet[2389]: I0113 20:09:42.092761 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-lib-modules\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.093066 kubelet[2389]: I0113 20:09:42.092805 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-kernel\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.093066 kubelet[2389]: I0113 20:09:42.092848 2389 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cni-path\") pod \"498dd55e-0497-4b62-ade2-faf0414bf2e0\" (UID: \"498dd55e-0497-4b62-ade2-faf0414bf2e0\") "
Jan 13 20:09:42.093066 kubelet[2389]: I0113 20:09:42.092935 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.093066 kubelet[2389]: I0113 20:09:42.092999 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.095036 kubelet[2389]: I0113 20:09:42.093391 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.095036 kubelet[2389]: I0113 20:09:42.093446 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.095036 kubelet[2389]: I0113 20:09:42.093490 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.095569 kubelet[2389]: I0113 20:09:42.095528 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.096945 kubelet[2389]: I0113 20:09:42.096404 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.096945 kubelet[2389]: I0113 20:09:42.096483 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.096945 kubelet[2389]: I0113 20:09:42.096504 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.096945 kubelet[2389]: I0113 20:09:42.096540 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:42.107138 kubelet[2389]: I0113 20:09:42.105235 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-kube-api-access-wx9rs" (OuterVolumeSpecName: "kube-api-access-wx9rs") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "kube-api-access-wx9rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:09:42.107138 kubelet[2389]: I0113 20:09:42.104995 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:09:42.107231 systemd[1]: var-lib-kubelet-pods-498dd55e\x2d0497\x2d4b62\x2dade2\x2dfaf0414bf2e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwx9rs.mount: Deactivated successfully.
Jan 13 20:09:42.108694 kubelet[2389]: I0113 20:09:42.108591 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498dd55e-0497-4b62-ade2-faf0414bf2e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:09:42.109267 kubelet[2389]: I0113 20:09:42.109148 2389 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "498dd55e-0497-4b62-ade2-faf0414bf2e0" (UID: "498dd55e-0497-4b62-ade2-faf0414bf2e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:09:42.193591 kubelet[2389]: I0113 20:09:42.193549 2389 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-kernel\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193591 kubelet[2389]: I0113 20:09:42.193600 2389 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cni-path\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193627 2389 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-cgroup\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193652 2389 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-config-path\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193677 2389 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-xtables-lock\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193699 2389 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-hubble-tls\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193722 2389 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-hostproc\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.193784 kubelet[2389]: I0113 20:09:42.193776 2389 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wx9rs\" (UniqueName: \"kubernetes.io/projected/498dd55e-0497-4b62-ade2-faf0414bf2e0-kube-api-access-wx9rs\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193801 2389 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-etc-cni-netd\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193823 2389 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-cilium-run\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193847 2389 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/498dd55e-0497-4b62-ade2-faf0414bf2e0-clustermesh-secrets\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193869 2389 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-bpf-maps\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193891 2389 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-lib-modules\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.194136 kubelet[2389]: I0113 20:09:42.193915 2389 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/498dd55e-0497-4b62-ade2-faf0414bf2e0-host-proc-sys-net\") on node \"172.31.25.10\" DevicePath \"\""
Jan 13 20:09:42.447068 kubelet[2389]: E0113 20:09:42.446878 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:42.480120 systemd[1]: var-lib-kubelet-pods-498dd55e\x2d0497\x2d4b62\x2dade2\x2dfaf0414bf2e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:09:42.480306 systemd[1]: var-lib-kubelet-pods-498dd55e\x2d0497\x2d4b62\x2dade2\x2dfaf0414bf2e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:09:42.566603 systemd[1]: Removed slice kubepods-burstable-pod498dd55e_0497_4b62_ade2_faf0414bf2e0.slice - libcontainer container kubepods-burstable-pod498dd55e_0497_4b62_ade2_faf0414bf2e0.slice.
Jan 13 20:09:42.566938 systemd[1]: kubepods-burstable-pod498dd55e_0497_4b62_ade2_faf0414bf2e0.slice: Consumed 14.314s CPU time.
Jan 13 20:09:42.842233 kubelet[2389]: I0113 20:09:42.842183 2389 scope.go:117] "RemoveContainer" containerID="89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92"
Jan 13 20:09:42.846880 containerd[1936]: time="2025-01-13T20:09:42.846816373Z" level=info msg="RemoveContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\""
Jan 13 20:09:42.852496 containerd[1936]: time="2025-01-13T20:09:42.852422029Z" level=info msg="RemoveContainer for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" returns successfully"
Jan 13 20:09:42.853055 kubelet[2389]: I0113 20:09:42.852832 2389 scope.go:117] "RemoveContainer" containerID="86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56"
Jan 13 20:09:42.855408 containerd[1936]: time="2025-01-13T20:09:42.855279745Z" level=info msg="RemoveContainer for \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\""
Jan 13 20:09:42.859324 containerd[1936]: time="2025-01-13T20:09:42.859262929Z" level=info msg="RemoveContainer for \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\" returns successfully"
Jan 13 20:09:42.859915 kubelet[2389]: I0113 20:09:42.859606 2389 scope.go:117] "RemoveContainer" containerID="f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78"
Jan 13 20:09:42.862195 containerd[1936]: time="2025-01-13T20:09:42.862090885Z" level=info msg="RemoveContainer for \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\""
Jan 13 20:09:42.866411 containerd[1936]: time="2025-01-13T20:09:42.866348449Z" level=info msg="RemoveContainer for \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\" returns successfully"
Jan 13 20:09:42.867213 kubelet[2389]: I0113 20:09:42.867040 2389 scope.go:117] "RemoveContainer" containerID="027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898"
Jan 13 20:09:42.870271 containerd[1936]: time="2025-01-13T20:09:42.869566921Z" level=info msg="RemoveContainer for \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\""
Jan 13 20:09:42.874465 containerd[1936]: time="2025-01-13T20:09:42.874396789Z" level=info msg="RemoveContainer for \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\" returns successfully"
Jan 13 20:09:42.875195 kubelet[2389]: I0113 20:09:42.874984 2389 scope.go:117] "RemoveContainer" containerID="e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449"
Jan 13 20:09:42.876997 containerd[1936]: time="2025-01-13T20:09:42.876948361Z" level=info msg="RemoveContainer for \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\""
Jan 13 20:09:42.880729 containerd[1936]: time="2025-01-13T20:09:42.880677481Z" level=info msg="RemoveContainer for \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\" returns successfully"
Jan 13 20:09:42.881095 kubelet[2389]: I0113 20:09:42.881034 2389 scope.go:117] "RemoveContainer" containerID="89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92"
Jan 13 20:09:42.881846 containerd[1936]: time="2025-01-13T20:09:42.881731489Z" level=error msg="ContainerStatus for \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\": not found"
Jan 13 20:09:42.882397 kubelet[2389]: E0113 20:09:42.882352 2389 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\": not found" containerID="89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92"
Jan 13 20:09:42.882529 kubelet[2389]: I0113 20:09:42.882491 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92"} err="failed to get container status \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\": rpc error: code = NotFound desc = an error occurred when try to find container \"89b8c8856fd55cafb5fbc3bf0f97091faa76d1cafd84df375eae1a6c778bfd92\": not found"
Jan 13 20:09:42.882608 kubelet[2389]: I0113 20:09:42.882531 2389 scope.go:117] "RemoveContainer" containerID="86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56"
Jan 13 20:09:42.883001 containerd[1936]: time="2025-01-13T20:09:42.882847825Z" level=error msg="ContainerStatus for \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\": not found"
Jan 13 20:09:42.883230 kubelet[2389]: E0113 20:09:42.883191 2389 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\": not found" containerID="86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56"
Jan 13 20:09:42.883328 kubelet[2389]: I0113 20:09:42.883255 2389 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56"} err="failed to get container status \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\": rpc error: code = NotFound desc = an error occurred when try to find container \"86d09f9da0c28aac2cbbd54d89d25e012b52b875cdcbebfeb9eb1b8de344fe56\": not found" Jan 13 20:09:42.883328 kubelet[2389]: I0113 20:09:42.883280 2389 scope.go:117] "RemoveContainer" containerID="f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78" Jan 13 20:09:42.883777 containerd[1936]: time="2025-01-13T20:09:42.883576765Z" level=error msg="ContainerStatus for \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\": not found" Jan 13 20:09:42.883848 kubelet[2389]: E0113 20:09:42.883766 2389 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\": not found" containerID="f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78" Jan 13 20:09:42.883848 kubelet[2389]: I0113 20:09:42.883811 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78"} err="failed to get container status \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2759cc98769453da1ad8b707bb018301df5232ff1fa45250b30065070985a78\": not found" Jan 13 20:09:42.883848 kubelet[2389]: I0113 20:09:42.883832 2389 scope.go:117] "RemoveContainer" 
containerID="027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898" Jan 13 20:09:42.884359 containerd[1936]: time="2025-01-13T20:09:42.884237341Z" level=error msg="ContainerStatus for \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\": not found" Jan 13 20:09:42.884502 kubelet[2389]: E0113 20:09:42.884469 2389 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\": not found" containerID="027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898" Jan 13 20:09:42.884587 kubelet[2389]: I0113 20:09:42.884524 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898"} err="failed to get container status \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\": rpc error: code = NotFound desc = an error occurred when try to find container \"027cfccce60788fab0eea5f6f346dc6a3e3a4cf36b8e8e2c5560bfdb76073898\": not found" Jan 13 20:09:42.884587 kubelet[2389]: I0113 20:09:42.884550 2389 scope.go:117] "RemoveContainer" containerID="e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449" Jan 13 20:09:42.884892 containerd[1936]: time="2025-01-13T20:09:42.884822209Z" level=error msg="ContainerStatus for \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\": not found" Jan 13 20:09:42.885250 kubelet[2389]: E0113 20:09:42.885218 2389 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\": not found" containerID="e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449" Jan 13 20:09:42.885355 kubelet[2389]: I0113 20:09:42.885305 2389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449"} err="failed to get container status \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6fa1acf4f68462d9dbe13b725033696ec555c6cc1dc7e7bcaffc2350248d449\": not found" Jan 13 20:09:43.447255 kubelet[2389]: E0113 20:09:43.447193 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:43.653863 ntpd[1909]: Deleting interface #11 lxc_health, fe80::8483:aaff:fe07:d303%7#123, interface stats: received=0, sent=0, dropped=0, active_time=46 secs Jan 13 20:09:43.654373 ntpd[1909]: 13 Jan 20:09:43 ntpd[1909]: Deleting interface #11 lxc_health, fe80::8483:aaff:fe07:d303%7#123, interface stats: received=0, sent=0, dropped=0, active_time=46 secs Jan 13 20:09:44.448427 kubelet[2389]: E0113 20:09:44.448351 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:44.559066 kubelet[2389]: I0113 20:09:44.558429 2389 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" path="/var/lib/kubelet/pods/498dd55e-0497-4b62-ade2-faf0414bf2e0/volumes" Jan 13 20:09:45.314307 kubelet[2389]: I0113 20:09:45.314242 2389 topology_manager.go:215] "Topology Admit Handler" podUID="c5b50c50-0591-4af7-950c-ef93d95c8f37" podNamespace="kube-system" podName="cilium-operator-5cc964979-9k8wj" Jan 13 20:09:45.314442 kubelet[2389]: E0113 
20:09:45.314327 2389 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="mount-cgroup" Jan 13 20:09:45.314442 kubelet[2389]: E0113 20:09:45.314351 2389 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="clean-cilium-state" Jan 13 20:09:45.314442 kubelet[2389]: E0113 20:09:45.314371 2389 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="apply-sysctl-overwrites" Jan 13 20:09:45.314442 kubelet[2389]: E0113 20:09:45.314389 2389 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="mount-bpf-fs" Jan 13 20:09:45.314442 kubelet[2389]: E0113 20:09:45.314406 2389 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="cilium-agent" Jan 13 20:09:45.314442 kubelet[2389]: I0113 20:09:45.314441 2389 memory_manager.go:354] "RemoveStaleState removing state" podUID="498dd55e-0497-4b62-ade2-faf0414bf2e0" containerName="cilium-agent" Jan 13 20:09:45.324270 systemd[1]: Created slice kubepods-besteffort-podc5b50c50_0591_4af7_950c_ef93d95c8f37.slice - libcontainer container kubepods-besteffort-podc5b50c50_0591_4af7_950c_ef93d95c8f37.slice. 
Jan 13 20:09:45.341168 kubelet[2389]: W0113 20:09:45.341130 2389 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.25.10" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.25.10' and this object Jan 13 20:09:45.341398 kubelet[2389]: E0113 20:09:45.341361 2389 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.25.10" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.25.10' and this object Jan 13 20:09:45.410758 kubelet[2389]: I0113 20:09:45.410589 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5b50c50-0591-4af7-950c-ef93d95c8f37-cilium-config-path\") pod \"cilium-operator-5cc964979-9k8wj\" (UID: \"c5b50c50-0591-4af7-950c-ef93d95c8f37\") " pod="kube-system/cilium-operator-5cc964979-9k8wj" Jan 13 20:09:45.410758 kubelet[2389]: I0113 20:09:45.410666 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zb65\" (UniqueName: \"kubernetes.io/projected/c5b50c50-0591-4af7-950c-ef93d95c8f37-kube-api-access-6zb65\") pod \"cilium-operator-5cc964979-9k8wj\" (UID: \"c5b50c50-0591-4af7-950c-ef93d95c8f37\") " pod="kube-system/cilium-operator-5cc964979-9k8wj" Jan 13 20:09:45.449179 kubelet[2389]: E0113 20:09:45.449120 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:45.451827 kubelet[2389]: I0113 20:09:45.451729 2389 topology_manager.go:215] "Topology Admit Handler" podUID="856754e6-cbb5-45eb-b952-6244d9222856" podNamespace="kube-system" 
podName="cilium-ggqjr" Jan 13 20:09:45.463046 systemd[1]: Created slice kubepods-burstable-pod856754e6_cbb5_45eb_b952_6244d9222856.slice - libcontainer container kubepods-burstable-pod856754e6_cbb5_45eb_b952_6244d9222856.slice. Jan 13 20:09:45.553781 kubelet[2389]: E0113 20:09:45.553650 2389 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:09:45.611756 kubelet[2389]: I0113 20:09:45.611438 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-cilium-cgroup\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.611756 kubelet[2389]: I0113 20:09:45.611511 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-cni-path\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.611756 kubelet[2389]: I0113 20:09:45.611607 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-lib-modules\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.611756 kubelet[2389]: I0113 20:09:45.611683 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/856754e6-cbb5-45eb-b952-6244d9222856-clustermesh-secrets\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.611756 
kubelet[2389]: I0113 20:09:45.611755 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856754e6-cbb5-45eb-b952-6244d9222856-cilium-config-path\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612135 kubelet[2389]: I0113 20:09:45.611805 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-host-proc-sys-net\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612135 kubelet[2389]: I0113 20:09:45.611913 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-etc-cni-netd\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612135 kubelet[2389]: I0113 20:09:45.611957 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-xtables-lock\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612135 kubelet[2389]: I0113 20:09:45.612056 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/856754e6-cbb5-45eb-b952-6244d9222856-cilium-ipsec-secrets\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612135 kubelet[2389]: I0113 20:09:45.612117 2389 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-host-proc-sys-kernel\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612368 kubelet[2389]: I0113 20:09:45.612163 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-cilium-run\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612368 kubelet[2389]: I0113 20:09:45.612213 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/856754e6-cbb5-45eb-b952-6244d9222856-hubble-tls\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612368 kubelet[2389]: I0113 20:09:45.612296 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-bpf-maps\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612368 kubelet[2389]: I0113 20:09:45.612340 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/856754e6-cbb5-45eb-b952-6244d9222856-hostproc\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:45.612551 kubelet[2389]: I0113 20:09:45.612389 2389 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wg66\" (UniqueName: 
\"kubernetes.io/projected/856754e6-cbb5-45eb-b952-6244d9222856-kube-api-access-4wg66\") pod \"cilium-ggqjr\" (UID: \"856754e6-cbb5-45eb-b952-6244d9222856\") " pod="kube-system/cilium-ggqjr" Jan 13 20:09:46.449725 kubelet[2389]: E0113 20:09:46.449665 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:46.529917 containerd[1936]: time="2025-01-13T20:09:46.529845531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9k8wj,Uid:c5b50c50-0591-4af7-950c-ef93d95c8f37,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:46.568705 containerd[1936]: time="2025-01-13T20:09:46.568112451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:46.568705 containerd[1936]: time="2025-01-13T20:09:46.568216419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:46.568705 containerd[1936]: time="2025-01-13T20:09:46.568253103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:46.568705 containerd[1936]: time="2025-01-13T20:09:46.568408419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:46.610298 systemd[1]: Started cri-containerd-eb2ad19be4729f4baaf89720ecaffa00b96d11eef1d489d91b044292b1ddfd33.scope - libcontainer container eb2ad19be4729f4baaf89720ecaffa00b96d11eef1d489d91b044292b1ddfd33. 
Jan 13 20:09:46.671274 containerd[1936]: time="2025-01-13T20:09:46.671070184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-9k8wj,Uid:c5b50c50-0591-4af7-950c-ef93d95c8f37,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb2ad19be4729f4baaf89720ecaffa00b96d11eef1d489d91b044292b1ddfd33\"" Jan 13 20:09:46.672278 containerd[1936]: time="2025-01-13T20:09:46.672198808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggqjr,Uid:856754e6-cbb5-45eb-b952-6244d9222856,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:46.675667 containerd[1936]: time="2025-01-13T20:09:46.675437248Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:09:46.711904 containerd[1936]: time="2025-01-13T20:09:46.711400576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:46.711904 containerd[1936]: time="2025-01-13T20:09:46.711509332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:46.711904 containerd[1936]: time="2025-01-13T20:09:46.711546292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:46.711904 containerd[1936]: time="2025-01-13T20:09:46.711716920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:46.756354 systemd[1]: Started cri-containerd-e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b.scope - libcontainer container e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b. 
Jan 13 20:09:46.800193 containerd[1936]: time="2025-01-13T20:09:46.799755977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggqjr,Uid:856754e6-cbb5-45eb-b952-6244d9222856,Namespace:kube-system,Attempt:0,} returns sandbox id \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\"" Jan 13 20:09:46.806438 containerd[1936]: time="2025-01-13T20:09:46.806357897Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:46.828749 containerd[1936]: time="2025-01-13T20:09:46.828679853Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa\"" Jan 13 20:09:46.829702 containerd[1936]: time="2025-01-13T20:09:46.829391429Z" level=info msg="StartContainer for \"e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa\"" Jan 13 20:09:46.879342 systemd[1]: Started cri-containerd-e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa.scope - libcontainer container e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa. Jan 13 20:09:46.927281 containerd[1936]: time="2025-01-13T20:09:46.927209393Z" level=info msg="StartContainer for \"e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa\" returns successfully" Jan 13 20:09:46.942253 systemd[1]: cri-containerd-e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa.scope: Deactivated successfully. 
Jan 13 20:09:46.989116 containerd[1936]: time="2025-01-13T20:09:46.988738890Z" level=info msg="shim disconnected" id=e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa namespace=k8s.io Jan 13 20:09:46.989116 containerd[1936]: time="2025-01-13T20:09:46.988827366Z" level=warning msg="cleaning up after shim disconnected" id=e8ff4acf68712fecd22a5170fbe79319e2abd8ce845f000a5a4fd6894dc563fa namespace=k8s.io Jan 13 20:09:46.989116 containerd[1936]: time="2025-01-13T20:09:46.988846326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:47.450162 kubelet[2389]: E0113 20:09:47.450104 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:47.870577 containerd[1936]: time="2025-01-13T20:09:47.870443466Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:47.888418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612249942.mount: Deactivated successfully. Jan 13 20:09:47.889528 containerd[1936]: time="2025-01-13T20:09:47.889291134Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2\"" Jan 13 20:09:47.891188 containerd[1936]: time="2025-01-13T20:09:47.891123450Z" level=info msg="StartContainer for \"f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2\"" Jan 13 20:09:47.944319 systemd[1]: Started cri-containerd-f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2.scope - libcontainer container f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2. 
Jan 13 20:09:47.989914 containerd[1936]: time="2025-01-13T20:09:47.989849707Z" level=info msg="StartContainer for \"f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2\" returns successfully" Jan 13 20:09:48.003209 systemd[1]: cri-containerd-f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2.scope: Deactivated successfully. Jan 13 20:09:48.046392 containerd[1936]: time="2025-01-13T20:09:48.046081551Z" level=info msg="shim disconnected" id=f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2 namespace=k8s.io Jan 13 20:09:48.046392 containerd[1936]: time="2025-01-13T20:09:48.046235535Z" level=warning msg="cleaning up after shim disconnected" id=f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2 namespace=k8s.io Jan 13 20:09:48.046392 containerd[1936]: time="2025-01-13T20:09:48.046257099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:48.451103 kubelet[2389]: E0113 20:09:48.451039 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:48.574336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f637fb2e157f5a74df4ee6430b07043bd01f4e0fc0cde05722b1aacebbd4b8a2-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:48.875177 containerd[1936]: time="2025-01-13T20:09:48.874930531Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:48.897646 containerd[1936]: time="2025-01-13T20:09:48.897507655Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27\"" Jan 13 20:09:48.898523 containerd[1936]: time="2025-01-13T20:09:48.898428451Z" level=info msg="StartContainer for \"d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27\"" Jan 13 20:09:48.957294 systemd[1]: Started cri-containerd-d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27.scope - libcontainer container d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27. Jan 13 20:09:49.011760 systemd[1]: cri-containerd-d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27.scope: Deactivated successfully. 
Jan 13 20:09:49.014711 containerd[1936]: time="2025-01-13T20:09:49.012607180Z" level=info msg="StartContainer for \"d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27\" returns successfully" Jan 13 20:09:49.056001 containerd[1936]: time="2025-01-13T20:09:49.055919668Z" level=info msg="shim disconnected" id=d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27 namespace=k8s.io Jan 13 20:09:49.056349 containerd[1936]: time="2025-01-13T20:09:49.055995580Z" level=warning msg="cleaning up after shim disconnected" id=d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27 namespace=k8s.io Jan 13 20:09:49.056349 containerd[1936]: time="2025-01-13T20:09:49.056129188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:49.451839 kubelet[2389]: E0113 20:09:49.451777 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:09:49.574436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01ac79c9c4fffbdeeafa6a30ce9b517d3bf047340066e886ce2ac57e32ecb27-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:49.880485 containerd[1936]: time="2025-01-13T20:09:49.880327052Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:09:49.901494 containerd[1936]: time="2025-01-13T20:09:49.901418780Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7\""
Jan 13 20:09:49.902448 containerd[1936]: time="2025-01-13T20:09:49.902379872Z" level=info msg="StartContainer for \"14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7\""
Jan 13 20:09:49.957335 systemd[1]: Started cri-containerd-14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7.scope - libcontainer container 14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7.
Jan 13 20:09:49.998469 systemd[1]: cri-containerd-14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7.scope: Deactivated successfully.
Jan 13 20:09:50.002788 containerd[1936]: time="2025-01-13T20:09:50.002628617Z" level=info msg="StartContainer for \"14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7\" returns successfully"
Jan 13 20:09:50.033916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7-rootfs.mount: Deactivated successfully.
Jan 13 20:09:50.042629 containerd[1936]: time="2025-01-13T20:09:50.042427613Z" level=info msg="shim disconnected" id=14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7 namespace=k8s.io
Jan 13 20:09:50.043667 containerd[1936]: time="2025-01-13T20:09:50.042989393Z" level=warning msg="cleaning up after shim disconnected" id=14a8aa31026ded585dda36aab2d9402706470fdea8ebb6e14aa6cad381d1faf7 namespace=k8s.io
Jan 13 20:09:50.043667 containerd[1936]: time="2025-01-13T20:09:50.043138553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:50.393091 kubelet[2389]: E0113 20:09:50.393002 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:50.452513 kubelet[2389]: E0113 20:09:50.452458 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:50.554753 kubelet[2389]: E0113 20:09:50.554561 2389 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:09:50.887476 containerd[1936]: time="2025-01-13T20:09:50.887257689Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:09:50.912746 containerd[1936]: time="2025-01-13T20:09:50.912551853Z" level=info msg="CreateContainer within sandbox \"e917ac217c8654fffe22c5e71a9d93b5b4398a22992338dfa5237d5d8f43c70b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc\""
Jan 13 20:09:50.914166 containerd[1936]: time="2025-01-13T20:09:50.913252965Z" level=info msg="StartContainer for \"de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc\""
Jan 13 20:09:50.913343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount215455409.mount: Deactivated successfully.
Jan 13 20:09:50.964822 systemd[1]: run-containerd-runc-k8s.io-de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc-runc.SSjjDR.mount: Deactivated successfully.
Jan 13 20:09:50.976343 systemd[1]: Started cri-containerd-de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc.scope - libcontainer container de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc.
Jan 13 20:09:51.029437 containerd[1936]: time="2025-01-13T20:09:51.028497402Z" level=info msg="StartContainer for \"de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc\" returns successfully"
Jan 13 20:09:51.453065 kubelet[2389]: E0113 20:09:51.452993 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:51.774201 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:09:52.453562 kubelet[2389]: E0113 20:09:52.453502 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:52.479267 kubelet[2389]: I0113 20:09:52.479217 2389 setters.go:568] "Node became not ready" node="172.31.25.10" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:09:52Z","lastTransitionTime":"2025-01-13T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:09:53.454707 kubelet[2389]: E0113 20:09:53.454635 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:54.455702 kubelet[2389]: E0113 20:09:54.455629 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:55.456687 kubelet[2389]: E0113 20:09:55.456626 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:55.773729 systemd-networkd[1839]: lxc_health: Link UP
Jan 13 20:09:55.777977 (udev-worker)[5015]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:09:55.785290 systemd-networkd[1839]: lxc_health: Gained carrier
Jan 13 20:09:56.457667 kubelet[2389]: E0113 20:09:56.457592 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:56.715154 kubelet[2389]: I0113 20:09:56.713907 2389 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ggqjr" podStartSLOduration=11.713852582 podStartE2EDuration="11.713852582s" podCreationTimestamp="2025-01-13 20:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:51.92344495 +0000 UTC m=+82.864594061" watchObservedRunningTime="2025-01-13 20:09:56.713852582 +0000 UTC m=+87.655001681"
Jan 13 20:09:57.458409 kubelet[2389]: E0113 20:09:57.458329 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:57.558219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172473558.mount: Deactivated successfully.
Jan 13 20:09:57.705368 systemd-networkd[1839]: lxc_health: Gained IPv6LL
Jan 13 20:09:58.458546 kubelet[2389]: E0113 20:09:58.458480 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:09:59.458823 kubelet[2389]: E0113 20:09:59.458766 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:00.459401 kubelet[2389]: E0113 20:10:00.459347 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:00.654518 ntpd[1909]: Listen normally on 15 lxc_health [fe80::144f:63ff:fef3:c0cb%15]:123
Jan 13 20:10:00.655102 ntpd[1909]: 13 Jan 20:10:00 ntpd[1909]: Listen normally on 15 lxc_health [fe80::144f:63ff:fef3:c0cb%15]:123
Jan 13 20:10:01.460428 kubelet[2389]: E0113 20:10:01.460371 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:02.461709 kubelet[2389]: E0113 20:10:02.461629 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:03.462059 kubelet[2389]: E0113 20:10:03.461961 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:04.462816 kubelet[2389]: E0113 20:10:04.462746 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:05.463836 kubelet[2389]: E0113 20:10:05.463763 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:06.154375 systemd[1]: run-containerd-runc-k8s.io-de78f422a4a643b56f8438256663ef76cd3ca919fa93081b7d2f8a595656dfbc-runc.9SLrTm.mount: Deactivated successfully.
Jan 13 20:10:06.464625 kubelet[2389]: E0113 20:10:06.464569 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:07.365931 containerd[1936]: time="2025-01-13T20:10:07.365852339Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:10:07.367685 containerd[1936]: time="2025-01-13T20:10:07.367612307Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137110"
Jan 13 20:10:07.369086 containerd[1936]: time="2025-01-13T20:10:07.368987507Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:10:07.376539 containerd[1936]: time="2025-01-13T20:10:07.376451183Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 20.700908875s"
Jan 13 20:10:07.376725 containerd[1936]: time="2025-01-13T20:10:07.376540895Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:10:07.382442 containerd[1936]: time="2025-01-13T20:10:07.382190963Z" level=info msg="CreateContainer within sandbox \"eb2ad19be4729f4baaf89720ecaffa00b96d11eef1d489d91b044292b1ddfd33\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:10:07.407570 containerd[1936]: time="2025-01-13T20:10:07.407503727Z" level=info msg="CreateContainer within sandbox \"eb2ad19be4729f4baaf89720ecaffa00b96d11eef1d489d91b044292b1ddfd33\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"295f0516ae2ef8a10e44aab3fca3544f65d86f2f6b3e909e3caca536fe2a2746\""
Jan 13 20:10:07.408459 containerd[1936]: time="2025-01-13T20:10:07.408402935Z" level=info msg="StartContainer for \"295f0516ae2ef8a10e44aab3fca3544f65d86f2f6b3e909e3caca536fe2a2746\""
Jan 13 20:10:07.459349 systemd[1]: Started cri-containerd-295f0516ae2ef8a10e44aab3fca3544f65d86f2f6b3e909e3caca536fe2a2746.scope - libcontainer container 295f0516ae2ef8a10e44aab3fca3544f65d86f2f6b3e909e3caca536fe2a2746.
Jan 13 20:10:07.465569 kubelet[2389]: E0113 20:10:07.465468 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:07.503910 containerd[1936]: time="2025-01-13T20:10:07.503780987Z" level=info msg="StartContainer for \"295f0516ae2ef8a10e44aab3fca3544f65d86f2f6b3e909e3caca536fe2a2746\" returns successfully"
Jan 13 20:10:08.466553 kubelet[2389]: E0113 20:10:08.466494 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:09.467296 kubelet[2389]: E0113 20:10:09.467233 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:10.393270 kubelet[2389]: E0113 20:10:10.393206 2389 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:10.467561 kubelet[2389]: E0113 20:10:10.467518 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:11.468145 kubelet[2389]: E0113 20:10:11.468078 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:12.469245 kubelet[2389]: E0113 20:10:12.469185 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:13.470103 kubelet[2389]: E0113 20:10:13.470046 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:14.471139 kubelet[2389]: E0113 20:10:14.471072 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:15.472205 kubelet[2389]: E0113 20:10:15.472136 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:16.473397 kubelet[2389]: E0113 20:10:16.473333 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:17.474355 kubelet[2389]: E0113 20:10:17.474289 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:18.475171 kubelet[2389]: E0113 20:10:18.475114 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:19.475934 kubelet[2389]: E0113 20:10:19.475869 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:20.476666 kubelet[2389]: E0113 20:10:20.476593 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:21.477279 kubelet[2389]: E0113 20:10:21.477214 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:22.478031 kubelet[2389]: E0113 20:10:22.477966 2389 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:10:22.786066 systemd-logind[1915]: Power key pressed short.
Jan 13 20:10:22.786086 systemd-logind[1915]: Powering off...