Jan 13 21:12:47.173118 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 21:12:47.173162 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:12:47.173188 kernel: KASLR disabled due to lack of seed
Jan 13 21:12:47.173205 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:12:47.173221 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 13 21:12:47.174309 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:12:47.174335 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 21:12:47.174352 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 21:12:47.174369 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:12:47.174385 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 21:12:47.174409 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:12:47.174425 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 21:12:47.174441 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 21:12:47.174457 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 21:12:47.174477 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:12:47.174498 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 21:12:47.174516 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 21:12:47.174533 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 21:12:47.174550 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 21:12:47.174566 kernel: printk: bootconsole [uart0] enabled
Jan 13 21:12:47.174582 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:12:47.174599 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:12:47.174616 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 21:12:47.174632 kernel: Zone ranges:
Jan 13 21:12:47.174649 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 21:12:47.174665 kernel: DMA32 empty
Jan 13 21:12:47.174686 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 21:12:47.174702 kernel: Movable zone start for each node
Jan 13 21:12:47.174718 kernel: Early memory node ranges
Jan 13 21:12:47.174735 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 21:12:47.174751 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 21:12:47.174767 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 21:12:47.174783 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 21:12:47.174799 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 21:12:47.174816 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 21:12:47.174832 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 21:12:47.174848 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 21:12:47.174864 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:12:47.174885 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 21:12:47.174902 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:12:47.174926 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 21:12:47.174944 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:12:47.174961 kernel: psci: Trusted OS migration not required
Jan 13 21:12:47.174983 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:12:47.175000 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:12:47.175018 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:12:47.175035 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 21:12:47.175053 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:12:47.175070 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:12:47.175087 kernel: CPU features: detected: Spectre-v2
Jan 13 21:12:47.175104 kernel: CPU features: detected: Spectre-v3a
Jan 13 21:12:47.175121 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:12:47.175138 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 21:12:47.175155 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 21:12:47.175177 kernel: alternatives: applying boot alternatives
Jan 13 21:12:47.175197 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:12:47.175216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:12:47.176288 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:12:47.176321 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:12:47.176340 kernel: Fallback order for Node 0: 0
Jan 13 21:12:47.176358 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 21:12:47.176377 kernel: Policy zone: Normal
Jan 13 21:12:47.176394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:12:47.176431 kernel: software IO TLB: area num 2.
Jan 13 21:12:47.176450 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 21:12:47.176479 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 13 21:12:47.176497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:12:47.176515 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:12:47.176534 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:12:47.176552 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:12:47.176570 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:12:47.176588 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:12:47.176605 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:12:47.176623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:12:47.176640 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:12:47.176657 kernel: GICv3: 96 SPIs implemented
Jan 13 21:12:47.176679 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:12:47.176697 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:12:47.176714 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 21:12:47.176732 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 21:12:47.176749 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 21:12:47.176767 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:12:47.176784 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:12:47.176802 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 21:12:47.176819 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 21:12:47.176836 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 21:12:47.176854 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:12:47.176871 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 21:12:47.176894 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 21:12:47.176912 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 21:12:47.176929 kernel: Console: colour dummy device 80x25
Jan 13 21:12:47.176947 kernel: printk: console [tty1] enabled
Jan 13 21:12:47.176965 kernel: ACPI: Core revision 20230628
Jan 13 21:12:47.176983 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 21:12:47.177001 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:12:47.177019 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:12:47.177036 kernel: landlock: Up and running.
Jan 13 21:12:47.177058 kernel: SELinux: Initializing.
Jan 13 21:12:47.177076 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:12:47.177094 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:12:47.177112 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:12:47.177130 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:12:47.177148 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:12:47.177166 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:12:47.177184 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 21:12:47.177201 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 21:12:47.177223 kernel: Remapping and enabling EFI services.
Jan 13 21:12:47.177693 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:12:47.177714 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:12:47.177732 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 21:12:47.177751 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 21:12:47.177770 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 21:12:47.177787 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:12:47.177806 kernel: SMP: Total of 2 processors activated.
Jan 13 21:12:47.177824 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:12:47.177850 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 21:12:47.177869 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:12:47.177887 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:12:47.177918 kernel: alternatives: applying system-wide alternatives
Jan 13 21:12:47.177942 kernel: devtmpfs: initialized
Jan 13 21:12:47.177961 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:12:47.177980 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:12:47.177999 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:12:47.178017 kernel: SMBIOS 3.0.0 present.
Jan 13 21:12:47.178037 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 21:12:47.178060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:12:47.178079 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:12:47.178098 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:12:47.178117 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:12:47.178135 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:12:47.178154 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Jan 13 21:12:47.178173 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:12:47.178196 kernel: cpuidle: using governor menu
Jan 13 21:12:47.178215 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:12:47.179349 kernel: ASID allocator initialised with 65536 entries
Jan 13 21:12:47.179384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:12:47.179404 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:12:47.179422 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 13 21:12:47.179441 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:12:47.179460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:12:47.179480 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:12:47.179538 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:12:47.179570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:12:47.179590 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:12:47.179609 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:12:47.179627 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:12:47.179646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:12:47.179666 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:12:47.179684 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:12:47.179703 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:12:47.179728 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:12:47.179747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:12:47.179765 kernel: ACPI: Interpreter enabled
Jan 13 21:12:47.179784 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:12:47.179803 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:12:47.179821 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 21:12:47.180120 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:12:47.180365 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:12:47.180577 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:12:47.180779 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 21:12:47.180979 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 21:12:47.181004 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 21:12:47.181024 kernel: acpiphp: Slot [1] registered
Jan 13 21:12:47.181043 kernel: acpiphp: Slot [2] registered
Jan 13 21:12:47.181061 kernel: acpiphp: Slot [3] registered
Jan 13 21:12:47.181080 kernel: acpiphp: Slot [4] registered
Jan 13 21:12:47.181104 kernel: acpiphp: Slot [5] registered
Jan 13 21:12:47.181123 kernel: acpiphp: Slot [6] registered
Jan 13 21:12:47.181141 kernel: acpiphp: Slot [7] registered
Jan 13 21:12:47.181159 kernel: acpiphp: Slot [8] registered
Jan 13 21:12:47.181177 kernel: acpiphp: Slot [9] registered
Jan 13 21:12:47.181195 kernel: acpiphp: Slot [10] registered
Jan 13 21:12:47.181214 kernel: acpiphp: Slot [11] registered
Jan 13 21:12:47.182348 kernel: acpiphp: Slot [12] registered
Jan 13 21:12:47.182396 kernel: acpiphp: Slot [13] registered
Jan 13 21:12:47.182425 kernel: acpiphp: Slot [14] registered
Jan 13 21:12:47.182459 kernel: acpiphp: Slot [15] registered
Jan 13 21:12:47.182478 kernel: acpiphp: Slot [16] registered
Jan 13 21:12:47.182497 kernel: acpiphp: Slot [17] registered
Jan 13 21:12:47.182516 kernel: acpiphp: Slot [18] registered
Jan 13 21:12:47.182534 kernel: acpiphp: Slot [19] registered
Jan 13 21:12:47.182553 kernel: acpiphp: Slot [20] registered
Jan 13 21:12:47.182572 kernel: acpiphp: Slot [21] registered
Jan 13 21:12:47.182591 kernel: acpiphp: Slot [22] registered
Jan 13 21:12:47.182610 kernel: acpiphp: Slot [23] registered
Jan 13 21:12:47.182634 kernel: acpiphp: Slot [24] registered
Jan 13 21:12:47.182653 kernel: acpiphp: Slot [25] registered
Jan 13 21:12:47.182671 kernel: acpiphp: Slot [26] registered
Jan 13 21:12:47.182690 kernel: acpiphp: Slot [27] registered
Jan 13 21:12:47.182708 kernel: acpiphp: Slot [28] registered
Jan 13 21:12:47.182726 kernel: acpiphp: Slot [29] registered
Jan 13 21:12:47.182744 kernel: acpiphp: Slot [30] registered
Jan 13 21:12:47.182763 kernel: acpiphp: Slot [31] registered
Jan 13 21:12:47.182781 kernel: PCI host bridge to bus 0000:00
Jan 13 21:12:47.183032 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 21:12:47.183226 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:12:47.183484 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:12:47.183663 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 21:12:47.183887 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 21:12:47.184115 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 21:12:47.184816 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 21:12:47.185078 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:12:47.187424 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 21:12:47.187682 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:12:47.187920 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:12:47.188140 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 21:12:47.188436 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 21:12:47.188664 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 21:12:47.188886 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:12:47.189096 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 21:12:47.189345 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 21:12:47.189577 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 21:12:47.189782 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 21:12:47.189994 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 21:12:47.190191 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 21:12:47.190425 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:12:47.190613 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:12:47.190639 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:12:47.190659 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:12:47.190678 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:12:47.190697 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:12:47.190715 kernel: iommu: Default domain type: Translated
Jan 13 21:12:47.190734 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:12:47.190760 kernel: efivars: Registered efivars operations
Jan 13 21:12:47.190779 kernel: vgaarb: loaded
Jan 13 21:12:47.190797 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:12:47.190816 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:12:47.190834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:12:47.190853 kernel: pnp: PnP ACPI init
Jan 13 21:12:47.191068 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 21:12:47.191095 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:12:47.191120 kernel: NET: Registered PF_INET protocol family
Jan 13 21:12:47.191139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:12:47.191158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:12:47.191177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:12:47.191196 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:12:47.191215 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:12:47.192331 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:12:47.192373 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:12:47.192393 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:12:47.192421 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:12:47.192440 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:12:47.192459 kernel: kvm [1]: HYP mode not available
Jan 13 21:12:47.192477 kernel: Initialise system trusted keyrings
Jan 13 21:12:47.192496 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:12:47.192515 kernel: Key type asymmetric registered
Jan 13 21:12:47.192534 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:12:47.192552 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:12:47.192570 kernel: io scheduler mq-deadline registered
Jan 13 21:12:47.192595 kernel: io scheduler kyber registered
Jan 13 21:12:47.192614 kernel: io scheduler bfq registered
Jan 13 21:12:47.192871 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 21:12:47.192901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:12:47.192921 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:12:47.192940 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 21:12:47.192959 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 21:12:47.192977 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:12:47.193003 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 21:12:47.195033 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 21:12:47.195079 kernel: printk: console [ttyS0] disabled
Jan 13 21:12:47.195100 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 21:12:47.195119 kernel: printk: console [ttyS0] enabled
Jan 13 21:12:47.195138 kernel: printk: bootconsole [uart0] disabled
Jan 13 21:12:47.195157 kernel: thunder_xcv, ver 1.0
Jan 13 21:12:47.195175 kernel: thunder_bgx, ver 1.0
Jan 13 21:12:47.195194 kernel: nicpf, ver 1.0
Jan 13 21:12:47.195222 kernel: nicvf, ver 1.0
Jan 13 21:12:47.196320 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:12:47.196522 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:12:46 UTC (1736802766)
Jan 13 21:12:47.196549 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:12:47.196569 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 21:12:47.196588 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:12:47.196607 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:12:47.196625 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:12:47.196650 kernel: Segment Routing with IPv6
Jan 13 21:12:47.196670 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:12:47.196688 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:12:47.196707 kernel: Key type dns_resolver registered
Jan 13 21:12:47.196725 kernel: registered taskstats version 1
Jan 13 21:12:47.196744 kernel: Loading compiled-in X.509 certificates
Jan 13 21:12:47.196762 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:12:47.196781 kernel: Key type .fscrypt registered
Jan 13 21:12:47.196799 kernel: Key type fscrypt-provisioning registered
Jan 13 21:12:47.196822 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:12:47.196841 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:12:47.196859 kernel: ima: No architecture policies found
Jan 13 21:12:47.196878 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:12:47.196897 kernel: clk: Disabling unused clocks
Jan 13 21:12:47.196915 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:12:47.196933 kernel: Run /init as init process
Jan 13 21:12:47.196951 kernel: with arguments:
Jan 13 21:12:47.196969 kernel: /init
Jan 13 21:12:47.196987 kernel: with environment:
Jan 13 21:12:47.197010 kernel: HOME=/
Jan 13 21:12:47.197029 kernel: TERM=linux
Jan 13 21:12:47.197046 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:12:47.197069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:12:47.197092 systemd[1]: Detected virtualization amazon.
Jan 13 21:12:47.197113 systemd[1]: Detected architecture arm64.
Jan 13 21:12:47.197132 systemd[1]: Running in initrd.
Jan 13 21:12:47.197157 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:12:47.197177 systemd[1]: Hostname set to .
Jan 13 21:12:47.197197 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:12:47.197217 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:12:47.198393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:12:47.198423 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:12:47.198445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:12:47.198467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:12:47.198495 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:12:47.198517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:12:47.198541 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:12:47.198562 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:12:47.198583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:12:47.198603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:12:47.198623 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:12:47.198648 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:12:47.198668 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:12:47.198688 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:12:47.198708 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:12:47.198728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:12:47.198749 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:12:47.198769 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:12:47.198789 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:12:47.198810 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:47.198835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:47.198855 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:12:47.198875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:12:47.198895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:12:47.198916 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:12:47.198935 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:12:47.198956 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:12:47.198976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:12:47.199001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:47.199021 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:12:47.199042 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:12:47.199101 systemd-journald[251]: Collecting audit messages is disabled.
Jan 13 21:12:47.199150 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:12:47.199172 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:12:47.199193 systemd-journald[251]: Journal started
Jan 13 21:12:47.201273 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d4383855bb36e930b16830d82ab0d) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:12:47.173807 systemd-modules-load[252]: Inserted module 'overlay'
Jan 13 21:12:47.204984 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:12:47.214094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:47.223425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:12:47.226306 kernel: Bridge firewalling registered
Jan 13 21:12:47.227332 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 13 21:12:47.229402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:12:47.246573 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:12:47.254316 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:12:47.258547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:12:47.262467 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:12:47.266583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:12:47.294085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:47.312027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:12:47.314992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:47.315805 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:47.337731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:12:47.344145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:12:47.379754 dracut-cmdline[288]: dracut-dracut-053
Jan 13 21:12:47.386043 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:12:47.424904 systemd-resolved[289]: Positive Trust Anchors:
Jan 13 21:12:47.424941 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:12:47.425004 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:12:47.515258 kernel: SCSI subsystem initialized
Jan 13 21:12:47.522270 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:12:47.535275 kernel: iscsi: registered transport (tcp)
Jan 13 21:12:47.557275 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:12:47.557344 kernel: QLogic iSCSI HBA Driver
Jan 13 21:12:47.649261 kernel: random: crng init done
Jan 13 21:12:47.649443 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 13 21:12:47.652918 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:12:47.657599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:12:47.681175 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:12:47.690555 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:12:47.736760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:12:47.736873 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:12:47.736904 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:12:47.804295 kernel: raid6: neonx8 gen() 6733 MB/s
Jan 13 21:12:47.821264 kernel: raid6: neonx4 gen() 6550 MB/s
Jan 13 21:12:47.838264 kernel: raid6: neonx2 gen() 5453 MB/s
Jan 13 21:12:47.855265 kernel: raid6: neonx1 gen() 3941 MB/s
Jan 13 21:12:47.872264 kernel: raid6: int64x8 gen() 3807 MB/s
Jan 13 21:12:47.889264 kernel: raid6: int64x4 gen() 3707 MB/s
Jan 13 21:12:47.906264 kernel: raid6: int64x2 gen() 3594 MB/s
Jan 13 21:12:47.924005 kernel: raid6: int64x1 gen() 2746 MB/s
Jan 13 21:12:47.924037 kernel: raid6: using algorithm neonx8 gen() 6733 MB/s
Jan 13 21:12:47.941978 kernel: raid6: .... xor() 4757 MB/s, rmw enabled
Jan 13 21:12:47.942023 kernel: raid6: using neon recovery algorithm
Jan 13 21:12:47.949268 kernel: xor: measuring software checksum speed
Jan 13 21:12:47.951391 kernel: 8regs : 9891 MB/sec
Jan 13 21:12:47.951423 kernel: 32regs : 11912 MB/sec
Jan 13 21:12:47.952542 kernel: arm64_neon : 9570 MB/sec
Jan 13 21:12:47.952574 kernel: xor: using function: 32regs (11912 MB/sec)
Jan 13 21:12:48.037298 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:12:48.056660 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:12:48.066531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:48.098885 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 13 21:12:48.107052 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:12:48.131637 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:12:48.159730 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Jan 13 21:12:48.216307 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:12:48.224641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:12:48.341939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:12:48.355911 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:12:48.390912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:12:48.393801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:12:48.410755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:12:48.411217 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:12:48.437735 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:12:48.470659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:12:48.544161 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:12:48.544227 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 21:12:48.576641 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 21:12:48.576904 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 21:12:48.577135 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:50:43:5f:df:9d Jan 13 21:12:48.553858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:12:48.554083 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:12:48.556748 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:12:48.558896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:12:48.559146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:12:48.561304 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:12:48.578820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:12:48.592218 (udev-worker)[543]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:48.629709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:12:48.637569 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 21:12:48.637628 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 21:12:48.643959 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:12:48.652291 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 21:12:48.665272 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:12:48.665342 kernel: GPT:9289727 != 16777215 Jan 13 21:12:48.665378 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:12:48.668086 kernel: GPT:9289727 != 16777215 Jan 13 21:12:48.668148 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:12:48.670001 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:48.688133 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:12:48.758278 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (522) Jan 13 21:12:48.791323 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (544) Jan 13 21:12:48.815017 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 21:12:48.884202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 21:12:48.911255 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 21:12:48.913837 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 21:12:48.930508 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:12:48.950614 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:12:48.963417 disk-uuid[664]: Primary Header is updated. Jan 13 21:12:48.963417 disk-uuid[664]: Secondary Entries is updated. Jan 13 21:12:48.963417 disk-uuid[664]: Secondary Header is updated. Jan 13 21:12:48.972279 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:48.984334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:49.993286 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:49.994207 disk-uuid[665]: The operation has completed successfully. Jan 13 21:12:50.172384 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:12:50.172585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:12:50.227474 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 13 21:12:50.236863 sh[923]: Success Jan 13 21:12:50.260299 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:12:50.384604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:12:50.391539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:12:50.400480 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:12:50.465595 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:12:50.465668 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:50.465696 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:12:50.467269 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:12:50.468431 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:12:50.515276 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:12:50.520961 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:12:50.524043 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:12:50.541631 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:12:50.547519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:12:50.580815 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:50.580880 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:50.582668 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:50.590289 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:50.607621 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 13 21:12:50.610305 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:50.619397 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:12:50.636672 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:12:50.742324 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:12:50.756518 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:12:50.819513 systemd-networkd[1116]: lo: Link UP Jan 13 21:12:50.819533 systemd-networkd[1116]: lo: Gained carrier Jan 13 21:12:50.824760 systemd-networkd[1116]: Enumeration completed Jan 13 21:12:50.825612 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:12:50.825619 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:12:50.828752 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:12:50.839950 systemd[1]: Reached target network.target - Network. Jan 13 21:12:50.843870 systemd-networkd[1116]: eth0: Link UP Jan 13 21:12:50.843888 systemd-networkd[1116]: eth0: Gained carrier Jan 13 21:12:50.843906 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 21:12:50.857361 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.29.222/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:12:50.872536 ignition[1034]: Ignition 2.19.0 Jan 13 21:12:50.872565 ignition[1034]: Stage: fetch-offline Jan 13 21:12:50.873609 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:50.873634 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:50.874070 ignition[1034]: Ignition finished successfully Jan 13 21:12:50.878769 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:12:50.893725 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:12:50.919526 ignition[1124]: Ignition 2.19.0 Jan 13 21:12:50.920006 ignition[1124]: Stage: fetch Jan 13 21:12:50.920652 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:50.920676 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:50.920850 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:50.942658 ignition[1124]: PUT result: OK Jan 13 21:12:50.945746 ignition[1124]: parsed url from cmdline: "" Jan 13 21:12:50.945763 ignition[1124]: no config URL provided Jan 13 21:12:50.945778 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:12:50.945803 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:12:50.945833 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:50.949687 ignition[1124]: PUT result: OK Jan 13 21:12:50.949764 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 21:12:50.951625 ignition[1124]: GET result: OK Jan 13 21:12:50.951732 ignition[1124]: parsing config with SHA512: 1afecf00e750b23c0ffca01abd3dfa489f7244bf348c3be2a1c60cbe134cff2ffda59b9d16afc82f5613b19d0acc5541c9ef47800135455670a1ae93a931a54e Jan 13 21:12:50.962330 unknown[1124]: fetched base config from "system"
Jan 13 21:12:50.962748 ignition[1124]: fetch: fetch complete Jan 13 21:12:50.962346 unknown[1124]: fetched base config from "system" Jan 13 21:12:50.962759 ignition[1124]: fetch: fetch passed Jan 13 21:12:50.962360 unknown[1124]: fetched user config from "aws" Jan 13 21:12:50.962830 ignition[1124]: Ignition finished successfully Jan 13 21:12:50.969696 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:12:50.988603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:12:51.012705 ignition[1131]: Ignition 2.19.0 Jan 13 21:12:51.012734 ignition[1131]: Stage: kargs Jan 13 21:12:51.014379 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:51.014404 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:51.014908 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:51.021408 ignition[1131]: PUT result: OK Jan 13 21:12:51.025618 ignition[1131]: kargs: kargs passed Jan 13 21:12:51.025714 ignition[1131]: Ignition finished successfully Jan 13 21:12:51.031093 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:12:51.039522 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:12:51.072432 ignition[1137]: Ignition 2.19.0 Jan 13 21:12:51.072959 ignition[1137]: Stage: disks Jan 13 21:12:51.073641 ignition[1137]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:51.073682 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:51.073836 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:51.078021 ignition[1137]: PUT result: OK Jan 13 21:12:51.085347 ignition[1137]: disks: disks passed Jan 13 21:12:51.085440 ignition[1137]: Ignition finished successfully Jan 13 21:12:51.088719 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:12:51.094455 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:12:51.096604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:12:51.098945 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:12:51.100758 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:12:51.102621 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:12:51.118155 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:12:51.166398 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:12:51.172609 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:12:51.189429 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:12:51.277270 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:12:51.277954 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:12:51.280144 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:12:51.297487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:12:51.307736 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:12:51.312149 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:12:51.312254 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:12:51.312306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:12:51.331544 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 13 21:12:51.339423 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1164) Jan 13 21:12:51.343268 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:51.343342 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:51.343380 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:51.344670 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:12:51.360298 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:51.362551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:12:51.439739 initrd-setup-root[1188]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:12:51.449088 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:12:51.457688 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:12:51.466349 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:12:51.609374 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:12:51.618500 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:12:51.623506 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:12:51.652790 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:12:51.658305 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:51.676792 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 13 21:12:51.699859 ignition[1279]: INFO : Ignition 2.19.0 Jan 13 21:12:51.699859 ignition[1279]: INFO : Stage: mount Jan 13 21:12:51.703157 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:51.703157 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:51.703157 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:51.709391 ignition[1279]: INFO : PUT result: OK Jan 13 21:12:51.713523 ignition[1279]: INFO : mount: mount passed Jan 13 21:12:51.715797 ignition[1279]: INFO : Ignition finished successfully Jan 13 21:12:51.719802 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:12:51.729438 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:12:51.763959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:12:51.788270 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1289) Jan 13 21:12:51.791592 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:51.791630 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:51.791657 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:51.797266 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:51.801689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:12:51.839123 ignition[1305]: INFO : Ignition 2.19.0 Jan 13 21:12:51.839123 ignition[1305]: INFO : Stage: files Jan 13 21:12:51.842349 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:51.842349 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:51.842349 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:51.849100 ignition[1305]: INFO : PUT result: OK Jan 13 21:12:51.854353 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:12:51.857252 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:12:51.857252 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:12:51.865801 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:12:51.868625 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:12:51.871437 unknown[1305]: wrote ssh authorized keys file for user: core Jan 13 21:12:51.873737 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:12:51.877012 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 21:12:51.880198 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 13 21:12:52.257126 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 21:12:52.552416 systemd-networkd[1116]: eth0: Gained IPv6LL Jan 13 21:12:52.595271 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 21:12:52.599349 ignition[1305]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:12:52.599349 ignition[1305]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:12:52.599349 ignition[1305]: INFO : files: files passed Jan 13 21:12:52.599349 ignition[1305]: INFO : Ignition finished successfully Jan 13 21:12:52.609942 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:12:52.620562 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:12:52.624906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:12:52.654085 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:12:52.656191 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:12:52.668669 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:52.668669 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:52.677599 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:52.684321 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:12:52.687673 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:12:52.702541 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:12:52.757466 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:12:52.757684 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:12:52.761897 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:12:52.764542 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:12:52.766456 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:12:52.786608 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:12:52.813890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:12:52.825548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:12:52.855848 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:12:52.860611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:12:52.863196 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 13 21:12:52.868674 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:12:52.868914 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:12:52.871759 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:12:52.877275 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:12:52.880528 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:12:52.886894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:12:52.898103 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:12:52.903922 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:12:52.918056 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:12:52.933800 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:12:52.934049 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:12:52.934771 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:12:52.935394 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:12:52.935623 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:12:52.936852 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:12:52.938782 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:12:52.961860 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:12:52.965337 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:12:52.970380 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:12:52.970786 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:12:52.976602 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 13 21:12:52.977307 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:12:52.983630 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:12:52.984028 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:12:53.000458 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:12:53.002969 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:12:53.003473 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:12:53.025732 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:12:53.030404 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:12:53.032448 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:12:53.038571 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:12:53.040634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:12:53.061954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:12:53.065378 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:12:53.079303 ignition[1358]: INFO : Ignition 2.19.0 Jan 13 21:12:53.081179 ignition[1358]: INFO : Stage: umount Jan 13 21:12:53.083470 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:53.086476 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:53.086476 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:53.093206 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 13 21:12:53.095327 ignition[1358]: INFO : PUT result: OK Jan 13 21:12:53.099151 ignition[1358]: INFO : umount: umount passed Jan 13 21:12:53.100963 ignition[1358]: INFO : Ignition finished successfully Jan 13 21:12:53.105163 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:12:53.105618 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:12:53.110485 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:12:53.110577 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:12:53.113517 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:12:53.113607 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:12:53.117140 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:12:53.117226 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:12:53.129679 systemd[1]: Stopped target network.target - Network. Jan 13 21:12:53.131360 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:12:53.131464 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:12:53.133697 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:12:53.135352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:12:53.145682 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:12:53.149904 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:12:53.153078 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:12:53.157083 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:12:53.157165 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:12:53.160724 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:12:53.160796 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 13 21:12:53.160961 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:12:53.161041 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:12:53.161285 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:12:53.161357 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:12:53.161801 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:12:53.162539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:12:53.183294 systemd-networkd[1116]: eth0: DHCPv6 lease lost Jan 13 21:12:53.194528 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:12:53.194790 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:12:53.212840 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:12:53.214018 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:12:53.238315 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:12:53.238394 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:12:53.253842 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:12:53.260292 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:12:53.260407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:12:53.262758 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:12:53.262840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:53.264820 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:12:53.264897 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:12:53.267186 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 13 21:12:53.267374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:53.278168 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:53.281097 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:12:53.283017 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:12:53.308296 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:12:53.310194 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:12:53.318936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:12:53.319048 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:53.324439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:12:53.324512 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:53.327698 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:12:53.327788 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:12:53.330016 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:12:53.330099 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:12:53.342631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:12:53.342728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:12:53.345451 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:12:53.345553 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:12:53.364407 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:12:53.372913 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:12:53.373041 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:53.378530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:12:53.378630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:53.390619 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:12:53.390955 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:12:53.396703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:12:53.396874 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:12:53.403827 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:12:53.419616 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:12:53.436636 systemd[1]: Switching root.
Jan 13 21:12:53.477933 systemd-journald[251]: Journal stopped
Jan 13 21:12:55.243130 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:12:55.245141 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:12:55.245202 kernel: SELinux: policy capability open_perms=1
Jan 13 21:12:55.245251 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:12:55.245422 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:12:55.245464 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:12:55.245517 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:12:55.245551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:12:55.245582 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:12:55.245625 kernel: audit: type=1403 audit(1736802773.676:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:12:55.245665 systemd[1]: Successfully loaded SELinux policy in 50.274ms.
Jan 13 21:12:55.245715 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.396ms.
Jan 13 21:12:55.245751 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:12:55.245782 systemd[1]: Detected virtualization amazon.
Jan 13 21:12:55.245815 systemd[1]: Detected architecture arm64.
Jan 13 21:12:55.245846 systemd[1]: Detected first boot.
Jan 13 21:12:55.245876 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:12:55.245912 zram_generator::config[1401]: No configuration found.
Jan 13 21:12:55.245950 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:12:55.245985 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:12:55.246021 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:12:55.246053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:12:55.246085 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:12:55.246117 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:12:55.246152 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:12:55.246185 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:12:55.246215 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:12:55.252925 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:12:55.252972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:12:55.253007 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:12:55.253041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:12:55.253075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:12:55.253107 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:12:55.253146 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:12:55.253189 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:12:55.253227 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:12:55.253290 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:12:55.253322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:12:55.253354 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:12:55.253386 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:12:55.253421 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:12:55.253458 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:12:55.253506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:12:55.253546 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:12:55.253580 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:12:55.253612 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:12:55.253644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:12:55.253697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:12:55.253730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:12:55.253760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:12:55.253796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:12:55.253827 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:12:55.253859 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:12:55.253889 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:12:55.253919 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:12:55.253951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:12:55.253983 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:12:55.254013 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:12:55.254047 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:12:55.254082 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:12:55.254112 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:12:55.254143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:12:55.254175 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:12:55.254207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:12:55.256104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:12:55.256154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:12:55.256187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:12:55.256223 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:12:55.256272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:12:55.256304 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:12:55.256334 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:12:55.256366 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:12:55.256396 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:12:55.256428 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:12:55.256459 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:12:55.256499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:12:55.256535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:12:55.256566 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:12:55.256596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:12:55.256628 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:12:55.256660 systemd[1]: Stopped verity-setup.service.
Jan 13 21:12:55.256691 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:12:55.256721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:12:55.256751 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:12:55.256782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:12:55.256817 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:12:55.256851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:12:55.256881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:12:55.256910 kernel: fuse: init (API version 7.39)
Jan 13 21:12:55.256942 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:12:55.256976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:12:55.257010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:12:55.257039 kernel: loop: module loaded
Jan 13 21:12:55.257071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:12:55.257101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:12:55.257131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:12:55.257163 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:12:55.257195 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:12:55.257250 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:12:55.257286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:12:55.257317 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:12:55.257354 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:12:55.257385 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:12:55.257415 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:12:55.257449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:12:55.257479 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:12:55.257529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:12:55.257568 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:12:55.257600 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:12:55.257629 kernel: ACPI: bus type drm_connector registered
Jan 13 21:12:55.257658 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:12:55.257738 systemd-journald[1483]: Collecting audit messages is disabled.
Jan 13 21:12:55.257793 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:12:55.257825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:12:55.257856 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:12:55.257887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:12:55.257917 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:12:55.257948 systemd-journald[1483]: Journal started
Jan 13 21:12:55.258000 systemd-journald[1483]: Runtime Journal (/run/log/journal/ec2d4383855bb36e930b16830d82ab0d) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:12:54.622540 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:12:54.652511 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 21:12:54.653298 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:12:55.266262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:12:55.278261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:12:55.287463 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:12:55.296206 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:12:55.301384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:12:55.301794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:12:55.307050 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:12:55.310790 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:12:55.313606 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:12:55.337293 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:12:55.408587 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:12:55.418544 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:12:55.421133 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:12:55.424761 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:12:55.428314 kernel: loop0: detected capacity change from 0 to 114328
Jan 13 21:12:55.441660 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:12:55.471931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:12:55.481952 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:12:55.495601 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:12:55.514133 systemd-journald[1483]: Time spent on flushing to /var/log/journal/ec2d4383855bb36e930b16830d82ab0d is 39.103ms for 896 entries.
Jan 13 21:12:55.514133 systemd-journald[1483]: System Journal (/var/log/journal/ec2d4383855bb36e930b16830d82ab0d) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:12:55.566155 systemd-journald[1483]: Received client request to flush runtime journal.
Jan 13 21:12:55.566303 kernel: loop1: detected capacity change from 0 to 194096
Jan 13 21:12:55.528107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:12:55.570863 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:12:55.597445 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:12:55.609484 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:12:55.626289 kernel: loop2: detected capacity change from 0 to 52536
Jan 13 21:12:55.633452 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:12:55.643541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:12:55.671673 udevadm[1549]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:12:55.736689 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Jan 13 21:12:55.736730 systemd-tmpfiles[1551]: ACLs are not supported, ignoring.
Jan 13 21:12:55.763478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:12:55.769278 kernel: loop3: detected capacity change from 0 to 114432
Jan 13 21:12:55.829350 kernel: loop4: detected capacity change from 0 to 114328
Jan 13 21:12:55.856270 kernel: loop5: detected capacity change from 0 to 194096
Jan 13 21:12:55.891271 kernel: loop6: detected capacity change from 0 to 52536
Jan 13 21:12:55.926282 kernel: loop7: detected capacity change from 0 to 114432
Jan 13 21:12:55.962220 (sd-merge)[1557]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 21:12:55.963193 (sd-merge)[1557]: Merged extensions into '/usr'.
Jan 13 21:12:55.978058 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:12:55.978092 systemd[1]: Reloading...
Jan 13 21:12:56.213485 zram_generator::config[1592]: No configuration found.
Jan 13 21:12:56.277051 ldconfig[1505]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:12:56.492940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:12:56.616152 systemd[1]: Reloading finished in 635 ms.
Jan 13 21:12:56.656297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:12:56.659030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:12:56.661856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:12:56.676602 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:12:56.685692 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:12:56.691614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:12:56.716486 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:12:56.716525 systemd[1]: Reloading...
Jan 13 21:12:56.760080 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:12:56.762993 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:12:56.767855 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:12:56.768798 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 13 21:12:56.769178 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 13 21:12:56.781086 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:12:56.783697 systemd-tmpfiles[1637]: Skipping /boot
Jan 13 21:12:56.795047 systemd-udevd[1638]: Using default interface naming scheme 'v255'.
Jan 13 21:12:56.821286 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:12:56.821313 systemd-tmpfiles[1637]: Skipping /boot
Jan 13 21:12:56.906327 zram_generator::config[1677]: No configuration found.
Jan 13 21:12:57.084438 (udev-worker)[1671]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:12:57.215553 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1701)
Jan 13 21:12:57.304664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:12:57.455047 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 21:12:57.455455 systemd[1]: Reloading finished in 738 ms.
Jan 13 21:12:57.493616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:12:57.498400 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:12:57.600916 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:12:57.610612 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:12:57.613109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:12:57.617683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:12:57.629779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:12:57.637857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:12:57.641139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:12:57.658843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:12:57.667804 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:12:57.678789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:12:57.685361 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:12:57.694348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:12:57.694715 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:12:57.697809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:12:57.698116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:12:57.735898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:12:57.738281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:12:57.750756 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:12:57.759186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:12:57.776887 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:12:57.787159 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:12:57.792635 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:12:57.803951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:12:57.807652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:12:57.814765 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:12:57.818175 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:12:57.831373 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:12:57.838150 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:12:57.844340 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:12:57.847814 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:12:57.867520 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:12:57.876030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:12:57.876953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:12:57.881049 augenrules[1869]: No rules
Jan 13 21:12:57.893582 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:12:57.895839 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:12:57.900769 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:12:57.907139 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:12:57.911372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:12:57.911791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:12:57.914960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:12:57.916304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:12:57.919000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:12:57.931940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:12:57.949743 lvm[1875]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:12:58.001897 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:12:58.014265 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:12:58.018083 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:12:58.029751 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:12:58.041618 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:12:58.057954 lvm[1885]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:12:58.078353 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:12:58.079210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:12:58.124683 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:12:58.152972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:12:58.186500 systemd-networkd[1844]: lo: Link UP
Jan 13 21:12:58.186516 systemd-networkd[1844]: lo: Gained carrier
Jan 13 21:12:58.189774 systemd-networkd[1844]: Enumeration completed
Jan 13 21:12:58.190084 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:12:58.192820 systemd-networkd[1844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:58.192955 systemd-networkd[1844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:12:58.195158 systemd-networkd[1844]: eth0: Link UP
Jan 13 21:12:58.195708 systemd-networkd[1844]: eth0: Gained carrier
Jan 13 21:12:58.195750 systemd-networkd[1844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:12:58.202664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:12:58.210371 systemd-networkd[1844]: eth0: DHCPv4 address 172.31.29.222/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:12:58.217408 systemd-resolved[1845]: Positive Trust Anchors:
Jan 13 21:12:58.217447 systemd-resolved[1845]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:12:58.217535 systemd-resolved[1845]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:12:58.225197 systemd-resolved[1845]: Defaulting to hostname 'linux'.
Jan 13 21:12:58.228319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:12:58.230635 systemd[1]: Reached target network.target - Network.
Jan 13 21:12:58.232397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:12:58.234633 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:12:58.236722 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:12:58.239002 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:12:58.241594 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:12:58.243832 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:12:58.246158 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:12:58.248447 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:12:58.248503 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:12:58.250187 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:12:58.253087 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:12:58.257768 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:12:58.268466 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:12:58.271596 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:12:58.273898 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:12:58.276053 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:12:58.278183 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:12:58.278363 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:12:58.285441 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:12:58.297161 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:12:58.304572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:12:58.325632 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:12:58.341627 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:12:58.341965 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:12:58.346048 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:12:58.354423 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:12:58.380401 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:12:58.389433 jq[1904]: false Jan 13 21:12:58.395856 dbus-daemon[1903]: [system] SELinux support is enabled Jan 13 21:12:58.398853 dbus-daemon[1903]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1844 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:12:58.399580 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:12:58.406694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:12:58.419968 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:12:58.423279 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:12:58.424158 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:12:58.431575 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:12:58.438807 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 21:12:58.443716 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:12:58.456946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:12:58.457308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:12:58.463001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:12:58.464417 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:12:58.490130 dbus-daemon[1903]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:12:58.483020 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:12:58.483114 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:12:58.486521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:12:58.486583 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 13 21:12:58.520812 update_engine[1915]: I20250113 21:12:58.514740 1915 main.cc:92] Flatcar Update Engine starting Jan 13 21:12:58.520812 update_engine[1915]: I20250113 21:12:58.519694 1915 update_check_scheduler.cc:74] Next update check in 11m4s Jan 13 21:12:58.550409 extend-filesystems[1906]: Found loop4 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found loop5 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found loop6 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found loop7 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p1 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p2 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p3 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found usr Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p4 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p6 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p7 Jan 13 21:12:58.550409 extend-filesystems[1906]: Found nvme0n1p9 Jan 13 21:12:58.550409 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9 Jan 13 21:12:58.527773 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:12:58.601214 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: ---------------------------------------------------- Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: corporation. 
Support and training for ntp-4 are Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: ---------------------------------------------------- Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: basedate set to 2025-01-01 Jan 13 21:12:58.625499 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 21:12:58.647415 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:12:58.647460 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9 Jan 13 21:12:58.653501 jq[1916]: true Jan 13 21:12:58.529774 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:12:58.601283 ntpd[1909]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listen normally on 3 eth0 172.31.29.222:123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: bind(21) AF_INET6 fe80::450:43ff:fe5f:df9d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: unable to create socket on eth0 (5) for fe80::450:43ff:fe5f:df9d%2#123 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: failed to init interface for address fe80::450:43ff:fe5f:df9d%2 Jan 13 21:12:58.657795 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: Listening on routing socket on fd #21 for interface updates 
Jan 13 21:12:58.670339 extend-filesystems[1942]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:12:58.537744 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:12:58.601307 ntpd[1909]: ---------------------------------------------------- Jan 13 21:12:58.627397 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:12:58.601326 ntpd[1909]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:12:58.627764 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:12:58.601346 ntpd[1909]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:12:58.681645 jq[1937]: true Jan 13 21:12:58.601365 ntpd[1909]: corporation. Support and training for ntp-4 are Jan 13 21:12:58.601383 ntpd[1909]: available at https://www.nwtime.org/support Jan 13 21:12:58.601401 ntpd[1909]: ---------------------------------------------------- Jan 13 21:12:58.611064 ntpd[1909]: proto: precision = 0.096 usec (-23) Jan 13 21:12:58.620315 ntpd[1909]: basedate set to 2025-01-01 Jan 13 21:12:58.620355 ntpd[1909]: gps base set to 2025-01-05 (week 2348) Jan 13 21:12:58.645393 ntpd[1909]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:12:58.645470 ntpd[1909]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:12:58.645752 ntpd[1909]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:12:58.645814 ntpd[1909]: Listen normally on 3 eth0 172.31.29.222:123 Jan 13 21:12:58.645880 ntpd[1909]: Listen normally on 4 lo [::1]:123 Jan 13 21:12:58.645950 ntpd[1909]: bind(21) AF_INET6 fe80::450:43ff:fe5f:df9d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:58.645988 ntpd[1909]: unable to create socket on eth0 (5) for fe80::450:43ff:fe5f:df9d%2#123 Jan 13 21:12:58.646019 ntpd[1909]: failed to init interface for address fe80::450:43ff:fe5f:df9d%2 Jan 13 21:12:58.646071 ntpd[1909]: Listening on routing socket on fd #21 for interface updates Jan 13 21:12:58.691842 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 
21:12:58.694503 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:58.694503 ntpd[1909]: 13 Jan 21:12:58 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:58.691901 ntpd[1909]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:58.732836 (ntainerd)[1944]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:12:58.759498 coreos-metadata[1902]: Jan 13 21:12:58.759 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:12:58.763361 coreos-metadata[1902]: Jan 13 21:12:58.762 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:12:58.763988 coreos-metadata[1902]: Jan 13 21:12:58.763 INFO Fetch successful Jan 13 21:12:58.763988 coreos-metadata[1902]: Jan 13 21:12:58.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:12:58.765527 coreos-metadata[1902]: Jan 13 21:12:58.764 INFO Fetch successful Jan 13 21:12:58.765527 coreos-metadata[1902]: Jan 13 21:12:58.765 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:12:58.766268 coreos-metadata[1902]: Jan 13 21:12:58.766 INFO Fetch successful Jan 13 21:12:58.766996 coreos-metadata[1902]: Jan 13 21:12:58.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:12:58.771394 coreos-metadata[1902]: Jan 13 21:12:58.768 INFO Fetch successful Jan 13 21:12:58.771394 coreos-metadata[1902]: Jan 13 21:12:58.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:12:58.771394 coreos-metadata[1902]: Jan 13 21:12:58.771 INFO Fetch failed with 404: resource not found Jan 13 21:12:58.771394 coreos-metadata[1902]: Jan 13 21:12:58.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:12:58.777876 
coreos-metadata[1902]: Jan 13 21:12:58.774 INFO Fetch successful Jan 13 21:12:58.777876 coreos-metadata[1902]: Jan 13 21:12:58.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:12:58.780365 coreos-metadata[1902]: Jan 13 21:12:58.779 INFO Fetch successful Jan 13 21:12:58.780365 coreos-metadata[1902]: Jan 13 21:12:58.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:12:58.783352 coreos-metadata[1902]: Jan 13 21:12:58.780 INFO Fetch successful Jan 13 21:12:58.783352 coreos-metadata[1902]: Jan 13 21:12:58.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:12:58.784990 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:12:58.787730 coreos-metadata[1902]: Jan 13 21:12:58.787 INFO Fetch successful Jan 13 21:12:58.788994 coreos-metadata[1902]: Jan 13 21:12:58.787 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:12:58.798899 coreos-metadata[1902]: Jan 13 21:12:58.795 INFO Fetch successful Jan 13 21:12:58.801290 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:12:58.825940 extend-filesystems[1942]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:12:58.825940 extend-filesystems[1942]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:12:58.825940 extend-filesystems[1942]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:12:58.841497 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:12:58.874322 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:12:58.874694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 13 21:12:58.898612 systemd-logind[1914]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:12:58.898658 systemd-logind[1914]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 21:12:58.905508 systemd-logind[1914]: New seat seat0. Jan 13 21:12:58.909395 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1719) Jan 13 21:12:58.914498 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:12:58.962647 bash[1981]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:12:58.973331 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:12:59.006119 systemd[1]: Starting sshkeys.service... Jan 13 21:12:59.010039 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:12:59.016525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:12:59.070778 dbus-daemon[1903]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:12:59.074976 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:12:59.075923 dbus-daemon[1903]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1925 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:12:59.103899 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:12:59.161167 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:12:59.170992 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 13 21:12:59.268514 polkitd[2029]: Started polkitd version 121 Jan 13 21:12:59.300449 polkitd[2029]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:12:59.300584 polkitd[2029]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:12:59.303367 coreos-metadata[2014]: Jan 13 21:12:59.303 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:12:59.305524 coreos-metadata[2014]: Jan 13 21:12:59.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:12:59.305386 polkitd[2029]: Finished loading, compiling and executing 2 rules Jan 13 21:12:59.305878 coreos-metadata[2014]: Jan 13 21:12:59.305 INFO Fetch successful Jan 13 21:12:59.306079 coreos-metadata[2014]: Jan 13 21:12:59.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:12:59.308357 coreos-metadata[2014]: Jan 13 21:12:59.308 INFO Fetch successful Jan 13 21:12:59.308781 dbus-daemon[1903]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:12:59.309080 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:12:59.312647 polkitd[2029]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:12:59.316088 unknown[2014]: wrote ssh authorized keys file for user: core Jan 13 21:12:59.382658 containerd[1944]: time="2025-01-13T21:12:59.382226863Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:12:59.388555 update-ssh-keys[2068]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:12:59.393194 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:12:59.398097 systemd-hostnamed[1925]: Hostname set to (transient) Jan 13 21:12:59.398810 systemd-resolved[1845]: System hostname changed to 'ip-172-31-29-222'. Jan 13 21:12:59.416410 systemd[1]: Finished sshkeys.service. 
Jan 13 21:12:59.453377 locksmithd[1930]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:12:59.554998 containerd[1944]: time="2025-01-13T21:12:59.554875604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.557813 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565142396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565214552Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565272164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565609988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565646744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565766624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.565798292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.566076908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.566109488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.566139248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567415 containerd[1944]: time="2025-01-13T21:12:59.566164220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.566354432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.566750144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.566937128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.566967476Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.567133292Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:12:59.567918 containerd[1944]: time="2025-01-13T21:12:59.567341300Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:12:59.574147 containerd[1944]: time="2025-01-13T21:12:59.573684164Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:12:59.574147 containerd[1944]: time="2025-01-13T21:12:59.573796304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:12:59.575326 containerd[1944]: time="2025-01-13T21:12:59.575280968Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:12:59.576250 containerd[1944]: time="2025-01-13T21:12:59.575371772Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:12:59.576250 containerd[1944]: time="2025-01-13T21:12:59.575438888Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:12:59.576250 containerd[1944]: time="2025-01-13T21:12:59.575810636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:12:59.576620 containerd[1944]: time="2025-01-13T21:12:59.576559136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:12:59.578578 containerd[1944]: time="2025-01-13T21:12:59.578503316Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:12:59.578668 containerd[1944]: time="2025-01-13T21:12:59.578584016Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 13 21:12:59.578668 containerd[1944]: time="2025-01-13T21:12:59.578641484Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:12:59.578753 containerd[1944]: time="2025-01-13T21:12:59.578675120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578753 containerd[1944]: time="2025-01-13T21:12:59.578738084Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578836 containerd[1944]: time="2025-01-13T21:12:59.578775992Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578836 containerd[1944]: time="2025-01-13T21:12:59.578809508Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578947 containerd[1944]: time="2025-01-13T21:12:59.578842088Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578947 containerd[1944]: time="2025-01-13T21:12:59.578899592Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.578947 containerd[1944]: time="2025-01-13T21:12:59.578931788Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.579067 containerd[1944]: time="2025-01-13T21:12:59.578961104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:12:59.579067 containerd[1944]: time="2025-01-13T21:12:59.579015704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 13 21:12:59.579067 containerd[1944]: time="2025-01-13T21:12:59.579048308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.579207 containerd[1944]: time="2025-01-13T21:12:59.579077444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.579207 containerd[1944]: time="2025-01-13T21:12:59.579138056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.579207 containerd[1944]: time="2025-01-13T21:12:59.579168284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.579207 containerd[1944]: time="2025-01-13T21:12:59.579198992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579227312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579296492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579327356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579360176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579388988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579423152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579455456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579519656Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579577076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579615944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579643448Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579766592Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.579803084Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:12:59.581515 containerd[1944]: time="2025-01-13T21:12:59.580252232Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:12:59.582097 containerd[1944]: time="2025-01-13T21:12:59.580288844Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:12:59.582097 containerd[1944]: time="2025-01-13T21:12:59.580313324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 21:12:59.582097 containerd[1944]: time="2025-01-13T21:12:59.580342532Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:12:59.582097 containerd[1944]: time="2025-01-13T21:12:59.580376516Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:12:59.582097 containerd[1944]: time="2025-01-13T21:12:59.580409264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 21:12:59.582754 containerd[1944]: time="2025-01-13T21:12:59.580907948Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:12:59.582754 containerd[1944]: time="2025-01-13T21:12:59.581028416Z" level=info msg="Connect containerd service" Jan 13 21:12:59.582754 containerd[1944]: time="2025-01-13T21:12:59.581085968Z" level=info msg="using legacy CRI server" Jan 13 21:12:59.582754 containerd[1944]: time="2025-01-13T21:12:59.581103632Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:12:59.582754 containerd[1944]: time="2025-01-13T21:12:59.581284856Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:12:59.583787 containerd[1944]: time="2025-01-13T21:12:59.583727504Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 13 21:12:59.584011 containerd[1944]: time="2025-01-13T21:12:59.583940264Z" level=info msg="Start subscribing containerd event" Jan 13 21:12:59.584067 containerd[1944]: time="2025-01-13T21:12:59.584032892Z" level=info msg="Start recovering state" Jan 13 21:12:59.584188 containerd[1944]: time="2025-01-13T21:12:59.584152136Z" level=info msg="Start event monitor" Jan 13 21:12:59.585321 containerd[1944]: time="2025-01-13T21:12:59.584185172Z" level=info msg="Start snapshots syncer" Jan 13 21:12:59.585321 containerd[1944]: time="2025-01-13T21:12:59.584210828Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:12:59.585321 containerd[1944]: time="2025-01-13T21:12:59.584258048Z" level=info msg="Start streaming server" Jan 13 21:12:59.585321 containerd[1944]: time="2025-01-13T21:12:59.585041372Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:12:59.585321 containerd[1944]: time="2025-01-13T21:12:59.585203756Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:12:59.586196 containerd[1944]: time="2025-01-13T21:12:59.585935456Z" level=info msg="containerd successfully booted in 0.214287s" Jan 13 21:12:59.586053 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 21:12:59.601954 ntpd[1909]: bind(24) AF_INET6 fe80::450:43ff:fe5f:df9d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:59.603639 ntpd[1909]: 13 Jan 21:12:59 ntpd[1909]: bind(24) AF_INET6 fe80::450:43ff:fe5f:df9d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:59.603639 ntpd[1909]: 13 Jan 21:12:59 ntpd[1909]: unable to create socket on eth0 (6) for fe80::450:43ff:fe5f:df9d%2#123 Jan 13 21:12:59.603639 ntpd[1909]: 13 Jan 21:12:59 ntpd[1909]: failed to init interface for address fe80::450:43ff:fe5f:df9d%2 Jan 13 21:12:59.602018 ntpd[1909]: unable to create socket on eth0 (6) for fe80::450:43ff:fe5f:df9d%2#123 Jan 13 21:12:59.602048 ntpd[1909]: failed to init interface for address fe80::450:43ff:fe5f:df9d%2 Jan 13 21:13:00.134691 sshd_keygen[1940]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:13:00.167482 systemd-networkd[1844]: eth0: Gained IPv6LL Jan 13 21:13:00.172049 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:13:00.177100 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:13:00.183166 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:13:00.190776 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:13:00.207384 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:13:00.217692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:00.227785 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:13:00.243379 systemd[1]: Started sshd@0-172.31.29.222:22-139.178.89.65:41578.service - OpenSSH per-connection server daemon (139.178.89.65:41578). Jan 13 21:13:00.249144 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:13:00.250726 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 13 21:13:00.272787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:13:00.318623 amazon-ssm-agent[2115]: Initializing new seelog logger Jan 13 21:13:00.319111 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete Jan 13 21:13:00.319111 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.319111 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.323765 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 processing appconfig overrides Jan 13 21:13:00.326535 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.326535 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.328171 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 processing appconfig overrides Jan 13 21:13:00.328171 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.328171 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.328171 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 processing appconfig overrides Jan 13 21:13:00.328171 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO Proxy environment variables: Jan 13 21:13:00.334490 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.334490 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:13:00.334490 amazon-ssm-agent[2115]: 2025/01/13 21:13:00 processing appconfig overrides Jan 13 21:13:00.339972 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:13:00.344916 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:13:00.360951 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 21:13:00.374761 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:13:00.378789 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:13:00.432365 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO no_proxy: Jan 13 21:13:00.506325 sshd[2121]: Accepted publickey for core from 139.178.89.65 port 41578 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:00.511648 sshd[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:00.531000 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO https_proxy: Jan 13 21:13:00.540889 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:13:00.552583 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:13:00.564510 systemd-logind[1914]: New session 1 of user core. Jan 13 21:13:00.591534 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:13:00.604839 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:13:00.622877 (systemd)[2144]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:13:00.630382 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO http_proxy: Jan 13 21:13:00.730350 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:13:00.827587 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:13:00.859162 systemd[2144]: Queued start job for default target default.target. Jan 13 21:13:00.866025 systemd[2144]: Created slice app.slice - User Application Slice. Jan 13 21:13:00.866422 systemd[2144]: Reached target paths.target - Paths. Jan 13 21:13:00.866462 systemd[2144]: Reached target timers.target - Timers. Jan 13 21:13:00.873297 systemd[2144]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 13 21:13:00.907763 systemd[2144]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:13:00.908031 systemd[2144]: Reached target sockets.target - Sockets. Jan 13 21:13:00.908066 systemd[2144]: Reached target basic.target - Basic System. Jan 13 21:13:00.908153 systemd[2144]: Reached target default.target - Main User Target. Jan 13 21:13:00.908220 systemd[2144]: Startup finished in 271ms. Jan 13 21:13:00.908484 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:13:00.920572 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:13:00.926625 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO Agent will take identity from EC2 Jan 13 21:13:01.025450 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:13:01.090824 systemd[1]: Started sshd@1-172.31.29.222:22-139.178.89.65:41584.service - OpenSSH per-connection server daemon (139.178.89.65:41584). Jan 13 21:13:01.126157 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:13:01.223740 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:13:01.317136 sshd[2157]: Accepted publickey for core from 139.178.89.65 port 41584 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:01.323492 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:13:01.323281 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:01.339152 systemd-logind[1914]: New session 2 of user core. Jan 13 21:13:01.345568 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 21:13:01.423387 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 21:13:01.483333 sshd[2157]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:01.496675 systemd[1]: sshd@1-172.31.29.222:22-139.178.89.65:41584.service: Deactivated successfully. Jan 13 21:13:01.503189 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:13:01.509627 systemd-logind[1914]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:13:01.525363 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:13:01.527722 systemd[1]: Started sshd@2-172.31.29.222:22-139.178.89.65:48116.service - OpenSSH per-connection server daemon (139.178.89.65:48116). Jan 13 21:13:01.533333 systemd-logind[1914]: Removed session 2. Jan 13 21:13:01.627563 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:13:01.727278 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [Registrar] Starting registrar module Jan 13 21:13:01.751298 sshd[2164]: Accepted publickey for core from 139.178.89.65 port 48116 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:01.752972 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:01.768330 systemd-logind[1914]: New session 3 of user core. Jan 13 21:13:01.774842 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:13:01.828028 amazon-ssm-agent[2115]: 2025-01-13 21:13:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:13:01.923724 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:01.931701 systemd[1]: sshd@2-172.31.29.222:22-139.178.89.65:48116.service: Deactivated successfully. Jan 13 21:13:01.937900 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 13 21:13:01.946506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:01.957149 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:13:01.960404 systemd-logind[1914]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:13:01.960884 systemd[1]: Startup finished in 1.140s (kernel) + 6.878s (initrd) + 8.332s (userspace) = 16.352s. Jan 13 21:13:01.964223 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:13:01.971308 systemd-logind[1914]: Removed session 3. Jan 13 21:13:02.268448 amazon-ssm-agent[2115]: 2025-01-13 21:13:02 INFO [EC2Identity] EC2 registration was successful. Jan 13 21:13:02.303144 amazon-ssm-agent[2115]: 2025-01-13 21:13:02 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:13:02.303144 amazon-ssm-agent[2115]: 2025-01-13 21:13:02 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:13:02.304388 amazon-ssm-agent[2115]: 2025-01-13 21:13:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:13:02.369925 amazon-ssm-agent[2115]: 2025-01-13 21:13:02 INFO [CredentialRefresher] Next credential rotation will be in 31.558325497633334 minutes Jan 13 21:13:02.601970 ntpd[1909]: Listen normally on 7 eth0 [fe80::450:43ff:fe5f:df9d%2]:123 Jan 13 21:13:02.602875 ntpd[1909]: 13 Jan 21:13:02 ntpd[1909]: Listen normally on 7 eth0 [fe80::450:43ff:fe5f:df9d%2]:123 Jan 13 21:13:02.962684 kubelet[2173]: E0113 21:13:02.962224 2173 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:13:02.967460 systemd[1]: kubelet.service: Main process exited, 
code=exited, status=1/FAILURE Jan 13 21:13:02.967839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:13:02.969482 systemd[1]: kubelet.service: Consumed 1.333s CPU time. Jan 13 21:13:03.337322 amazon-ssm-agent[2115]: 2025-01-13 21:13:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:13:03.437216 amazon-ssm-agent[2115]: 2025-01-13 21:13:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2188) started Jan 13 21:13:03.537801 amazon-ssm-agent[2115]: 2025-01-13 21:13:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:13:05.319160 systemd-resolved[1845]: Clock change detected. Flushing caches. Jan 13 21:13:11.683760 systemd[1]: Started sshd@3-172.31.29.222:22-139.178.89.65:54774.service - OpenSSH per-connection server daemon (139.178.89.65:54774). Jan 13 21:13:11.856719 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 54774 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:11.859668 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:11.868318 systemd-logind[1914]: New session 4 of user core. Jan 13 21:13:11.881573 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:13:12.013514 sshd[2199]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:12.021505 systemd[1]: sshd@3-172.31.29.222:22-139.178.89.65:54774.service: Deactivated successfully. Jan 13 21:13:12.025490 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:13:12.028329 systemd-logind[1914]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:13:12.030603 systemd-logind[1914]: Removed session 4. 
Jan 13 21:13:12.052822 systemd[1]: Started sshd@4-172.31.29.222:22-139.178.89.65:54780.service - OpenSSH per-connection server daemon (139.178.89.65:54780). Jan 13 21:13:12.225952 sshd[2206]: Accepted publickey for core from 139.178.89.65 port 54780 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:12.228986 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:12.237700 systemd-logind[1914]: New session 5 of user core. Jan 13 21:13:12.247573 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:13:12.369562 sshd[2206]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:12.377859 systemd[1]: sshd@4-172.31.29.222:22-139.178.89.65:54780.service: Deactivated successfully. Jan 13 21:13:12.382803 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:13:12.384887 systemd-logind[1914]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:13:12.387611 systemd-logind[1914]: Removed session 5. Jan 13 21:13:12.413828 systemd[1]: Started sshd@5-172.31.29.222:22-139.178.89.65:54788.service - OpenSSH per-connection server daemon (139.178.89.65:54788). Jan 13 21:13:12.583969 sshd[2213]: Accepted publickey for core from 139.178.89.65 port 54788 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:12.586966 sshd[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:12.595056 systemd-logind[1914]: New session 6 of user core. Jan 13 21:13:12.601562 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:13:12.709123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:13:12.717769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 21:13:12.734164 sshd[2213]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:12.744045 systemd[1]: sshd@5-172.31.29.222:22-139.178.89.65:54788.service: Deactivated successfully. Jan 13 21:13:12.753334 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:13:12.757946 systemd-logind[1914]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:13:12.788515 systemd[1]: Started sshd@6-172.31.29.222:22-139.178.89.65:54800.service - OpenSSH per-connection server daemon (139.178.89.65:54800). Jan 13 21:13:12.789674 systemd-logind[1914]: Removed session 6. Jan 13 21:13:12.970379 sshd[2223]: Accepted publickey for core from 139.178.89.65 port 54800 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:12.973622 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:12.983673 systemd-logind[1914]: New session 7 of user core. Jan 13 21:13:12.992604 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:13:13.065770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:13.081869 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:13:13.120421 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:13:13.121215 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:13:13.145100 sudo[2236]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:13.174783 sshd[2223]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:13.186853 systemd[1]: sshd@6-172.31.29.222:22-139.178.89.65:54800.service: Deactivated successfully. 
Jan 13 21:13:13.192100 kubelet[2231]: E0113 21:13:13.191884 2231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:13:13.192476 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:13:13.195465 systemd-logind[1914]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:13:13.209156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:13:13.209784 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:13:13.213920 systemd-logind[1914]: Removed session 7. Jan 13 21:13:13.220844 systemd[1]: Started sshd@7-172.31.29.222:22-139.178.89.65:54806.service - OpenSSH per-connection server daemon (139.178.89.65:54806). Jan 13 21:13:13.407811 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 54806 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:13.410794 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:13.420772 systemd-logind[1914]: New session 8 of user core. Jan 13 21:13:13.430609 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 21:13:13.539294 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:13:13.540013 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:13:13.547891 sudo[2248]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:13.559292 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:13:13.560028 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:13:13.589839 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:13:13.592662 auditctl[2251]: No rules Jan 13 21:13:13.594868 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:13:13.595529 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:13:13.604167 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:13:13.667612 augenrules[2269]: No rules Jan 13 21:13:13.669937 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:13:13.672205 sudo[2247]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:13.696067 sshd[2244]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:13.703049 systemd-logind[1914]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:13:13.704821 systemd[1]: sshd@7-172.31.29.222:22-139.178.89.65:54806.service: Deactivated successfully. Jan 13 21:13:13.708662 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:13:13.710378 systemd-logind[1914]: Removed session 8. Jan 13 21:13:13.736765 systemd[1]: Started sshd@8-172.31.29.222:22-139.178.89.65:54818.service - OpenSSH per-connection server daemon (139.178.89.65:54818). 
Jan 13 21:13:13.911833 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 54818 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:13.915005 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:13.923819 systemd-logind[1914]: New session 9 of user core. Jan 13 21:13:13.932596 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:13:14.041828 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:13:14.042823 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:13:15.331699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:15.343760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:15.394892 systemd[1]: Reloading requested from client PID 2318 ('systemctl') (unit session-9.scope)... Jan 13 21:13:15.395105 systemd[1]: Reloading... Jan 13 21:13:15.657287 zram_generator::config[2358]: No configuration found. Jan 13 21:13:15.931522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:13:16.117487 systemd[1]: Reloading finished in 721 ms. Jan 13 21:13:16.214106 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:13:16.214347 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:13:16.214898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:13:16.223987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:13:16.524737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:13:16.543912 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:13:16.631533 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:16.631533 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:13:16.631533 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:13:16.633412 kubelet[2420]: I0113 21:13:16.633293 2420 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:13:17.912280 kubelet[2420]: I0113 21:13:17.911511 2420 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:13:17.912280 kubelet[2420]: I0113 21:13:17.911563 2420 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:13:17.912280 kubelet[2420]: I0113 21:13:17.911925 2420 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:13:17.939528 kubelet[2420]: I0113 21:13:17.939467 2420 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:13:17.954432 kubelet[2420]: I0113 21:13:17.954365 2420 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:13:17.955278 kubelet[2420]: I0113 21:13:17.955096 2420 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:13:17.955470 kubelet[2420]: I0113 21:13:17.955157 2420 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.29.222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:13:17.955648 kubelet[2420]: I0113 21:13:17.955502 2420 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
21:13:17.955648 kubelet[2420]: I0113 21:13:17.955525 2420 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:13:17.955803 kubelet[2420]: I0113 21:13:17.955769 2420 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:17.958658 kubelet[2420]: I0113 21:13:17.957371 2420 kubelet.go:400] "Attempting to sync node with API server" Jan 13 21:13:17.958658 kubelet[2420]: I0113 21:13:17.957420 2420 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:13:17.958658 kubelet[2420]: I0113 21:13:17.957516 2420 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:13:17.958658 kubelet[2420]: I0113 21:13:17.957559 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:13:17.958658 kubelet[2420]: E0113 21:13:17.958030 2420 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:17.958658 kubelet[2420]: E0113 21:13:17.958131 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:17.959781 kubelet[2420]: I0113 21:13:17.959737 2420 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:13:17.960567 kubelet[2420]: I0113 21:13:17.960531 2420 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:13:17.960887 kubelet[2420]: W0113 21:13:17.960860 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:13:17.962691 kubelet[2420]: I0113 21:13:17.962651 2420 server.go:1264] "Started kubelet" Jan 13 21:13:17.968337 kubelet[2420]: I0113 21:13:17.968297 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:13:17.980383 kubelet[2420]: I0113 21:13:17.980292 2420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:13:17.981963 kubelet[2420]: E0113 21:13:17.981702 2420 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.29.222.181a5cf159774b16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.29.222,UID:172.31.29.222,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.29.222,},FirstTimestamp:2025-01-13 21:13:17.962611478 +0000 UTC m=+1.411193588,LastTimestamp:2025-01-13 21:13:17.962611478 +0000 UTC m=+1.411193588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.29.222,}" Jan 13 21:13:17.982219 kubelet[2420]: W0113 21:13:17.982023 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.29.222" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:13:17.982219 kubelet[2420]: E0113 21:13:17.982069 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.29.222" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:13:17.982219 kubelet[2420]: W0113 21:13:17.982492 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is 
forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:13:17.982219 kubelet[2420]: E0113 21:13:17.982587 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:13:17.995285 kubelet[2420]: I0113 21:13:17.994355 2420 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:13:17.996742 kubelet[2420]: I0113 21:13:17.996632 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:13:17.997142 kubelet[2420]: I0113 21:13:17.997086 2420 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:13:18.002671 kubelet[2420]: I0113 21:13:18.002608 2420 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:13:18.004267 kubelet[2420]: I0113 21:13:18.003709 2420 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:13:18.007287 kubelet[2420]: I0113 21:13:18.007194 2420 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:13:18.012281 kubelet[2420]: E0113 21:13:18.012163 2420 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:13:18.015790 kubelet[2420]: E0113 21:13:18.014452 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.29.222\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:13:18.019948 kubelet[2420]: I0113 21:13:18.019886 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:13:18.023293 kubelet[2420]: I0113 21:13:18.023218 2420 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:13:18.023293 kubelet[2420]: I0113 21:13:18.023277 2420 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:13:18.049882 kubelet[2420]: I0113 21:13:18.049737 2420 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:13:18.049882 kubelet[2420]: I0113 21:13:18.049771 2420 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:13:18.049882 kubelet[2420]: I0113 21:13:18.049799 2420 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:13:18.053507 kubelet[2420]: I0113 21:13:18.053095 2420 policy_none.go:49] "None policy: Start" Jan 13 21:13:18.056780 kubelet[2420]: I0113 21:13:18.056022 2420 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:13:18.056780 kubelet[2420]: I0113 21:13:18.056076 2420 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:13:18.076789 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:13:18.097371 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 21:13:18.105342 kubelet[2420]: I0113 21:13:18.105274 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:13:18.106628 kubelet[2420]: I0113 21:13:18.106547 2420 kubelet_node_status.go:73] "Attempting to register node" node="172.31.29.222" Jan 13 21:13:18.111490 kubelet[2420]: I0113 21:13:18.109443 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:13:18.111490 kubelet[2420]: I0113 21:13:18.109504 2420 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:13:18.111490 kubelet[2420]: I0113 21:13:18.109533 2420 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:13:18.111490 kubelet[2420]: E0113 21:13:18.109598 2420 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:13:18.109910 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 21:13:18.120702 kubelet[2420]: I0113 21:13:18.120179 2420 kubelet_node_status.go:76] "Successfully registered node" node="172.31.29.222" Jan 13 21:13:18.123274 kubelet[2420]: I0113 21:13:18.122548 2420 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:13:18.124941 kubelet[2420]: I0113 21:13:18.124839 2420 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:13:18.125138 kubelet[2420]: I0113 21:13:18.125099 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:13:18.133641 kubelet[2420]: E0113 21:13:18.133594 2420 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.29.222\" not found" Jan 13 21:13:18.206822 kubelet[2420]: E0113 21:13:18.206681 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.307477 kubelet[2420]: E0113 21:13:18.307395 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.408122 kubelet[2420]: E0113 21:13:18.408058 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.509249 kubelet[2420]: E0113 21:13:18.509074 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.570941 sudo[2280]: pam_unix(sudo:session): session closed for user root Jan 13 21:13:18.595336 sshd[2277]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:18.602417 systemd[1]: sshd@8-172.31.29.222:22-139.178.89.65:54818.service: Deactivated successfully. Jan 13 21:13:18.607370 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 13 21:13:18.609361 kubelet[2420]: E0113 21:13:18.609299 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.612429 systemd-logind[1914]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:13:18.615086 systemd-logind[1914]: Removed session 9. Jan 13 21:13:18.710042 kubelet[2420]: E0113 21:13:18.709952 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.811305 kubelet[2420]: E0113 21:13:18.810631 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.911191 kubelet[2420]: E0113 21:13:18.911111 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:18.916407 kubelet[2420]: I0113 21:13:18.916335 2420 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:13:18.917150 kubelet[2420]: W0113 21:13:18.916630 2420 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:13:18.917150 kubelet[2420]: W0113 21:13:18.916638 2420 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:13:18.958927 kubelet[2420]: E0113 21:13:18.958861 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:19.011581 kubelet[2420]: E0113 21:13:19.011516 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"172.31.29.222\" not found" Jan 13 21:13:19.112162 kubelet[2420]: E0113 21:13:19.111997 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:19.212657 kubelet[2420]: E0113 21:13:19.212582 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:19.313482 kubelet[2420]: E0113 21:13:19.313405 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:19.414074 kubelet[2420]: E0113 21:13:19.413930 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:19.515027 kubelet[2420]: E0113 21:13:19.514917 2420 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.29.222\" not found" Jan 13 21:13:19.616719 kubelet[2420]: I0113 21:13:19.616652 2420 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:13:19.617375 containerd[1944]: time="2025-01-13T21:13:19.617092815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:13:19.617965 kubelet[2420]: I0113 21:13:19.617439 2420 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:13:19.960025 kubelet[2420]: I0113 21:13:19.959620 2420 apiserver.go:52] "Watching apiserver" Jan 13 21:13:19.960025 kubelet[2420]: E0113 21:13:19.959972 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:19.965790 kubelet[2420]: I0113 21:13:19.965720 2420 topology_manager.go:215] "Topology Admit Handler" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" podNamespace="kube-system" podName="cilium-txs5x" Jan 13 21:13:19.966562 kubelet[2420]: I0113 21:13:19.966497 2420 topology_manager.go:215] "Topology Admit Handler" podUID="51ccdf30-db6e-46a6-90c6-f78defa172dc" podNamespace="kube-system" podName="kube-proxy-2xkxd" Jan 13 21:13:19.982404 systemd[1]: Created slice kubepods-besteffort-pod51ccdf30_db6e_46a6_90c6_f78defa172dc.slice - libcontainer container kubepods-besteffort-pod51ccdf30_db6e_46a6_90c6_f78defa172dc.slice. Jan 13 21:13:20.007836 systemd[1]: Created slice kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice - libcontainer container kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice. 
Jan 13 21:13:20.011273 kubelet[2420]: I0113 21:13:20.011193 2420 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.017208 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-run\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.018386 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-xtables-lock\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.018447 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-net\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.018492 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5m8z\" (UniqueName: \"kubernetes.io/projected/51ccdf30-db6e-46a6-90c6-f78defa172dc-kube-api-access-x5m8z\") pod \"kube-proxy-2xkxd\" (UID: \"51ccdf30-db6e-46a6-90c6-f78defa172dc\") " pod="kube-system/kube-proxy-2xkxd" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.018533 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-cgroup\") pod \"cilium-txs5x\" (UID: 
\"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019302 kubelet[2420]: I0113 21:13:20.018569 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-hubble-tls\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019732 kubelet[2420]: I0113 21:13:20.018607 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51ccdf30-db6e-46a6-90c6-f78defa172dc-xtables-lock\") pod \"kube-proxy-2xkxd\" (UID: \"51ccdf30-db6e-46a6-90c6-f78defa172dc\") " pod="kube-system/kube-proxy-2xkxd" Jan 13 21:13:20.019732 kubelet[2420]: I0113 21:13:20.018644 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mr5\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-kube-api-access-c4mr5\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019732 kubelet[2420]: I0113 21:13:20.018680 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51ccdf30-db6e-46a6-90c6-f78defa172dc-lib-modules\") pod \"kube-proxy-2xkxd\" (UID: \"51ccdf30-db6e-46a6-90c6-f78defa172dc\") " pod="kube-system/kube-proxy-2xkxd" Jan 13 21:13:20.019732 kubelet[2420]: I0113 21:13:20.018715 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-bpf-maps\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019732 kubelet[2420]: I0113 
21:13:20.018756 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-lib-modules\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.019732 kubelet[2420]: I0113 21:13:20.018794 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/514ec45b-bcfb-46a7-a921-65de721e8974-clustermesh-secrets\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.018830 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-config-path\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.018865 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-kernel\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.018900 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51ccdf30-db6e-46a6-90c6-f78defa172dc-kube-proxy\") pod \"kube-proxy-2xkxd\" (UID: \"51ccdf30-db6e-46a6-90c6-f78defa172dc\") " pod="kube-system/kube-proxy-2xkxd" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.018940 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-hostproc\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.018977 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cni-path\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.020035 kubelet[2420]: I0113 21:13:20.019013 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-etc-cni-netd\") pod \"cilium-txs5x\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") " pod="kube-system/cilium-txs5x" Jan 13 21:13:20.300544 containerd[1944]: time="2025-01-13T21:13:20.300381110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xkxd,Uid:51ccdf30-db6e-46a6-90c6-f78defa172dc,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:20.321607 containerd[1944]: time="2025-01-13T21:13:20.321400466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txs5x,Uid:514ec45b-bcfb-46a7-a921-65de721e8974,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:20.960625 kubelet[2420]: E0113 21:13:20.960538 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:21.002309 containerd[1944]: time="2025-01-13T21:13:21.001549537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:21.003389 containerd[1944]: time="2025-01-13T21:13:21.003311965Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:21.005793 containerd[1944]: time="2025-01-13T21:13:21.005731717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:13:21.008199 containerd[1944]: time="2025-01-13T21:13:21.008132473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 21:13:21.011249 containerd[1944]: time="2025-01-13T21:13:21.011155729Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:21.016927 containerd[1944]: time="2025-01-13T21:13:21.016852321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:13:21.019310 containerd[1944]: time="2025-01-13T21:13:21.018599629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 697.080267ms" Jan 13 21:13:21.022969 containerd[1944]: time="2025-01-13T21:13:21.022893506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 722.385112ms" Jan 13 
21:13:21.145777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183587456.mount: Deactivated successfully. Jan 13 21:13:21.189366 containerd[1944]: time="2025-01-13T21:13:21.188897402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:21.189366 containerd[1944]: time="2025-01-13T21:13:21.189095834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:21.189366 containerd[1944]: time="2025-01-13T21:13:21.189134522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:21.190910 containerd[1944]: time="2025-01-13T21:13:21.190766198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:21.190910 containerd[1944]: time="2025-01-13T21:13:21.190856570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:21.191553 containerd[1944]: time="2025-01-13T21:13:21.191386802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:21.191553 containerd[1944]: time="2025-01-13T21:13:21.190883666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:21.192298 containerd[1944]: time="2025-01-13T21:13:21.192168494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:21.341554 systemd[1]: Started cri-containerd-601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c.scope - libcontainer container 601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c. Jan 13 21:13:21.349566 systemd[1]: Started cri-containerd-68d2dcb5c20eb7cc3adb3652fce07c7303fe8201974eaf5e2bfd9694620fe8fa.scope - libcontainer container 68d2dcb5c20eb7cc3adb3652fce07c7303fe8201974eaf5e2bfd9694620fe8fa. Jan 13 21:13:21.403409 containerd[1944]: time="2025-01-13T21:13:21.403127547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txs5x,Uid:514ec45b-bcfb-46a7-a921-65de721e8974,Namespace:kube-system,Attempt:0,} returns sandbox id \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\"" Jan 13 21:13:21.408773 containerd[1944]: time="2025-01-13T21:13:21.408392307Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:13:21.417136 containerd[1944]: time="2025-01-13T21:13:21.417062343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2xkxd,Uid:51ccdf30-db6e-46a6-90c6-f78defa172dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"68d2dcb5c20eb7cc3adb3652fce07c7303fe8201974eaf5e2bfd9694620fe8fa\"" Jan 13 21:13:21.961709 kubelet[2420]: E0113 21:13:21.961648 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:22.962690 kubelet[2420]: E0113 21:13:22.962621 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:23.963510 kubelet[2420]: E0113 21:13:23.963464 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:24.964564 kubelet[2420]: E0113 21:13:24.964509 2420 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:25.965577 kubelet[2420]: E0113 21:13:25.965502 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:26.966087 kubelet[2420]: E0113 21:13:26.966026 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:27.966605 kubelet[2420]: E0113 21:13:27.966537 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:28.966949 kubelet[2420]: E0113 21:13:28.966884 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:29.136431 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:13:29.967400 kubelet[2420]: E0113 21:13:29.967346 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:30.967855 kubelet[2420]: E0113 21:13:30.967777 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:31.968240 kubelet[2420]: E0113 21:13:31.968117 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:32.968542 kubelet[2420]: E0113 21:13:32.968478 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:33.969143 kubelet[2420]: E0113 21:13:33.969091 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:34.178603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583299618.mount: Deactivated successfully. 
Jan 13 21:13:34.969740 kubelet[2420]: E0113 21:13:34.969651 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:35.970005 kubelet[2420]: E0113 21:13:35.969954 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:36.970508 kubelet[2420]: E0113 21:13:36.970441 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:37.958135 kubelet[2420]: E0113 21:13:37.958067 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:37.971488 kubelet[2420]: E0113 21:13:37.971417 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:38.972656 kubelet[2420]: E0113 21:13:38.972578 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:39.007303 containerd[1944]: time="2025-01-13T21:13:39.006372031Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:39.008459 containerd[1944]: time="2025-01-13T21:13:39.008372515Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650954" Jan 13 21:13:39.011129 containerd[1944]: time="2025-01-13T21:13:39.011035291Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:39.015167 containerd[1944]: time="2025-01-13T21:13:39.014881903Z" level=info msg="Pulled image 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 17.606421976s" Jan 13 21:13:39.015167 containerd[1944]: time="2025-01-13T21:13:39.014953423Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:13:39.018639 containerd[1944]: time="2025-01-13T21:13:39.018496027Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:13:39.020718 containerd[1944]: time="2025-01-13T21:13:39.020440855Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:13:39.051938 containerd[1944]: time="2025-01-13T21:13:39.051837943Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\"" Jan 13 21:13:39.054329 containerd[1944]: time="2025-01-13T21:13:39.053825599Z" level=info msg="StartContainer for \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\"" Jan 13 21:13:39.115570 systemd[1]: Started cri-containerd-b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2.scope - libcontainer container b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2. 
Jan 13 21:13:39.167506 containerd[1944]: time="2025-01-13T21:13:39.167371100Z" level=info msg="StartContainer for \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\" returns successfully" Jan 13 21:13:39.194437 systemd[1]: cri-containerd-b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2.scope: Deactivated successfully. Jan 13 21:13:39.241494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2-rootfs.mount: Deactivated successfully. Jan 13 21:13:39.304385 containerd[1944]: time="2025-01-13T21:13:39.304035524Z" level=info msg="shim disconnected" id=b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2 namespace=k8s.io Jan 13 21:13:39.304385 containerd[1944]: time="2025-01-13T21:13:39.304115996Z" level=warning msg="cleaning up after shim disconnected" id=b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2 namespace=k8s.io Jan 13 21:13:39.304385 containerd[1944]: time="2025-01-13T21:13:39.304137056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:39.973493 kubelet[2420]: E0113 21:13:39.973336 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:40.196563 containerd[1944]: time="2025-01-13T21:13:40.195430185Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:13:40.234032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106741288.mount: Deactivated successfully. 
Jan 13 21:13:40.240635 containerd[1944]: time="2025-01-13T21:13:40.240566385Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\"" Jan 13 21:13:40.241646 containerd[1944]: time="2025-01-13T21:13:40.241571949Z" level=info msg="StartContainer for \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\"" Jan 13 21:13:40.312583 systemd[1]: Started cri-containerd-0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267.scope - libcontainer container 0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267. Jan 13 21:13:40.384829 containerd[1944]: time="2025-01-13T21:13:40.384745030Z" level=info msg="StartContainer for \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\" returns successfully" Jan 13 21:13:40.406572 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:13:40.408170 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:13:40.409583 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:13:40.419577 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:13:40.420290 systemd[1]: cri-containerd-0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267.scope: Deactivated successfully. Jan 13 21:13:40.472546 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:13:40.577878 containerd[1944]: time="2025-01-13T21:13:40.577714967Z" level=info msg="shim disconnected" id=0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267 namespace=k8s.io Jan 13 21:13:40.579012 containerd[1944]: time="2025-01-13T21:13:40.578957147Z" level=warning msg="cleaning up after shim disconnected" id=0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267 namespace=k8s.io Jan 13 21:13:40.579441 containerd[1944]: time="2025-01-13T21:13:40.579147167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:40.974576 kubelet[2420]: E0113 21:13:40.974483 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:41.041826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267-rootfs.mount: Deactivated successfully. Jan 13 21:13:41.042540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907551494.mount: Deactivated successfully. 
Jan 13 21:13:41.171357 containerd[1944]: time="2025-01-13T21:13:41.170468734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:13:41.173083 containerd[1944]: time="2025-01-13T21:13:41.172994134Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011"
Jan 13 21:13:41.175705 containerd[1944]: time="2025-01-13T21:13:41.175609786Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:13:41.180791 containerd[1944]: time="2025-01-13T21:13:41.180659350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:13:41.182420 containerd[1944]: time="2025-01-13T21:13:41.182336134Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 2.163190451s"
Jan 13 21:13:41.182420 containerd[1944]: time="2025-01-13T21:13:41.182410678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Jan 13 21:13:41.187425 containerd[1944]: time="2025-01-13T21:13:41.187356694Z" level=info msg="CreateContainer within sandbox \"68d2dcb5c20eb7cc3adb3652fce07c7303fe8201974eaf5e2bfd9694620fe8fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:13:41.220251 containerd[1944]: time="2025-01-13T21:13:41.220079902Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:13:41.226538 containerd[1944]: time="2025-01-13T21:13:41.226366438Z" level=info msg="CreateContainer within sandbox \"68d2dcb5c20eb7cc3adb3652fce07c7303fe8201974eaf5e2bfd9694620fe8fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bef3d9c7f2e1ffaf3ecb6324463f557114c977a0a1abf8d5ca3e2bb40995b61e\""
Jan 13 21:13:41.228407 containerd[1944]: time="2025-01-13T21:13:41.228280618Z" level=info msg="StartContainer for \"bef3d9c7f2e1ffaf3ecb6324463f557114c977a0a1abf8d5ca3e2bb40995b61e\""
Jan 13 21:13:41.264841 containerd[1944]: time="2025-01-13T21:13:41.264619270Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\""
Jan 13 21:13:41.268636 containerd[1944]: time="2025-01-13T21:13:41.266990146Z" level=info msg="StartContainer for \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\""
Jan 13 21:13:41.301869 systemd[1]: Started cri-containerd-bef3d9c7f2e1ffaf3ecb6324463f557114c977a0a1abf8d5ca3e2bb40995b61e.scope - libcontainer container bef3d9c7f2e1ffaf3ecb6324463f557114c977a0a1abf8d5ca3e2bb40995b61e.
Jan 13 21:13:41.340512 systemd[1]: Started cri-containerd-f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41.scope - libcontainer container f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41.
Jan 13 21:13:41.391058 containerd[1944]: time="2025-01-13T21:13:41.390833627Z" level=info msg="StartContainer for \"bef3d9c7f2e1ffaf3ecb6324463f557114c977a0a1abf8d5ca3e2bb40995b61e\" returns successfully"
Jan 13 21:13:41.420310 containerd[1944]: time="2025-01-13T21:13:41.419675747Z" level=info msg="StartContainer for \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\" returns successfully"
Jan 13 21:13:41.428791 systemd[1]: cri-containerd-f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41.scope: Deactivated successfully.
Jan 13 21:13:41.572216 containerd[1944]: time="2025-01-13T21:13:41.571945620Z" level=info msg="shim disconnected" id=f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41 namespace=k8s.io
Jan 13 21:13:41.572216 containerd[1944]: time="2025-01-13T21:13:41.572022060Z" level=warning msg="cleaning up after shim disconnected" id=f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41 namespace=k8s.io
Jan 13 21:13:41.572216 containerd[1944]: time="2025-01-13T21:13:41.572044284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:41.975057 kubelet[2420]: E0113 21:13:41.974954 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:42.228372 containerd[1944]: time="2025-01-13T21:13:42.228198611Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:13:42.235591 kubelet[2420]: I0113 21:13:42.235487 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2xkxd" podStartSLOduration=4.470361448 podStartE2EDuration="24.235466363s" podCreationTimestamp="2025-01-13 21:13:18 +0000 UTC" firstStartedPulling="2025-01-13 21:13:21.419808387 +0000 UTC m=+4.868390497" lastFinishedPulling="2025-01-13 21:13:41.184913302 +0000 UTC m=+24.633495412" observedRunningTime="2025-01-13 21:13:42.233912123 +0000 UTC m=+25.682494257" watchObservedRunningTime="2025-01-13 21:13:42.235466363 +0000 UTC m=+25.684048473"
Jan 13 21:13:42.258824 containerd[1944]: time="2025-01-13T21:13:42.258743915Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\""
Jan 13 21:13:42.259633 containerd[1944]: time="2025-01-13T21:13:42.259568831Z" level=info msg="StartContainer for \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\""
Jan 13 21:13:42.306611 systemd[1]: Started cri-containerd-83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9.scope - libcontainer container 83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9.
Jan 13 21:13:42.348076 systemd[1]: cri-containerd-83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9.scope: Deactivated successfully.
Jan 13 21:13:42.350986 containerd[1944]: time="2025-01-13T21:13:42.350520011Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice/cri-containerd-83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9.scope/memory.events\": no such file or directory"
Jan 13 21:13:42.354965 containerd[1944]: time="2025-01-13T21:13:42.354852659Z" level=info msg="StartContainer for \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\" returns successfully"
Jan 13 21:13:42.389096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9-rootfs.mount: Deactivated successfully.
Jan 13 21:13:42.399536 containerd[1944]: time="2025-01-13T21:13:42.399435948Z" level=info msg="shim disconnected" id=83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9 namespace=k8s.io
Jan 13 21:13:42.399536 containerd[1944]: time="2025-01-13T21:13:42.399510372Z" level=warning msg="cleaning up after shim disconnected" id=83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9 namespace=k8s.io
Jan 13 21:13:42.399536 containerd[1944]: time="2025-01-13T21:13:42.399532812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:42.975994 kubelet[2420]: E0113 21:13:42.975899 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:43.237319 containerd[1944]: time="2025-01-13T21:13:43.235590060Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:13:43.270574 containerd[1944]: time="2025-01-13T21:13:43.270509340Z" level=info msg="CreateContainer within sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\""
Jan 13 21:13:43.272041 containerd[1944]: time="2025-01-13T21:13:43.271984596Z" level=info msg="StartContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\""
Jan 13 21:13:43.326122 systemd[1]: Started cri-containerd-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda.scope - libcontainer container f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda.
Jan 13 21:13:43.330372 update_engine[1915]: I20250113 21:13:43.326678 1915 update_attempter.cc:509] Updating boot flags...
Jan 13 21:13:43.416866 containerd[1944]: time="2025-01-13T21:13:43.415460761Z" level=info msg="StartContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" returns successfully"
Jan 13 21:13:43.475403 systemd[1]: run-containerd-runc-k8s.io-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda-runc.Y2IBpV.mount: Deactivated successfully.
Jan 13 21:13:43.533306 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3045)
Jan 13 21:13:43.873544 kubelet[2420]: I0113 21:13:43.872321 2420 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 21:13:43.976934 kubelet[2420]: E0113 21:13:43.976089 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:44.001851 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2874)
Jan 13 21:13:44.517357 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2874)
Jan 13 21:13:44.977023 kubelet[2420]: E0113 21:13:44.976933 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:44.981682 kernel: Initializing XFRM netlink socket
Jan 13 21:13:45.977565 kubelet[2420]: E0113 21:13:45.977495 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:46.812185 (udev-worker)[2873]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:13:46.813294 (udev-worker)[3048]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:13:46.814499 systemd-networkd[1844]: cilium_host: Link UP
Jan 13 21:13:46.817040 systemd-networkd[1844]: cilium_net: Link UP
Jan 13 21:13:46.817480 systemd-networkd[1844]: cilium_net: Gained carrier
Jan 13 21:13:46.817809 systemd-networkd[1844]: cilium_host: Gained carrier
Jan 13 21:13:46.979305 kubelet[2420]: E0113 21:13:46.978573 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:46.988826 systemd-networkd[1844]: cilium_vxlan: Link UP
Jan 13 21:13:46.988846 systemd-networkd[1844]: cilium_vxlan: Gained carrier
Jan 13 21:13:47.274203 kubelet[2420]: I0113 21:13:47.274038 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-txs5x" podStartSLOduration=11.664183128 podStartE2EDuration="29.274016464s" podCreationTimestamp="2025-01-13 21:13:18 +0000 UTC" firstStartedPulling="2025-01-13 21:13:21.407290167 +0000 UTC m=+4.855872265" lastFinishedPulling="2025-01-13 21:13:39.017123491 +0000 UTC m=+22.465705601" observedRunningTime="2025-01-13 21:13:44.316712293 +0000 UTC m=+27.765294451" watchObservedRunningTime="2025-01-13 21:13:47.274016464 +0000 UTC m=+30.722598562"
Jan 13 21:13:47.274557 kubelet[2420]: I0113 21:13:47.274442 2420 topology_manager.go:215] "Topology Admit Handler" podUID="bad43b6f-75fd-4158-95a3-c2785fc1ab62" podNamespace="default" podName="nginx-deployment-85f456d6dd-cfnbz"
Jan 13 21:13:47.292079 systemd[1]: Created slice kubepods-besteffort-podbad43b6f_75fd_4158_95a3_c2785fc1ab62.slice - libcontainer container kubepods-besteffort-podbad43b6f_75fd_4158_95a3_c2785fc1ab62.slice.
Jan 13 21:13:47.410938 kubelet[2420]: I0113 21:13:47.410846 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skfp6\" (UniqueName: \"kubernetes.io/projected/bad43b6f-75fd-4158-95a3-c2785fc1ab62-kube-api-access-skfp6\") pod \"nginx-deployment-85f456d6dd-cfnbz\" (UID: \"bad43b6f-75fd-4158-95a3-c2785fc1ab62\") " pod="default/nginx-deployment-85f456d6dd-cfnbz"
Jan 13 21:13:47.488412 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:13:47.564865 systemd-networkd[1844]: cilium_host: Gained IPv6LL
Jan 13 21:13:47.600005 containerd[1944]: time="2025-01-13T21:13:47.599930346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfnbz,Uid:bad43b6f-75fd-4158-95a3-c2785fc1ab62,Namespace:default,Attempt:0,}"
Jan 13 21:13:47.692834 systemd-networkd[1844]: cilium_net: Gained IPv6LL
Jan 13 21:13:47.979131 kubelet[2420]: E0113 21:13:47.979059 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:48.268640 systemd-networkd[1844]: cilium_vxlan: Gained IPv6LL
Jan 13 21:13:48.800170 systemd-networkd[1844]: lxc_health: Link UP
Jan 13 21:13:48.812973 systemd-networkd[1844]: lxc_health: Gained carrier
Jan 13 21:13:48.980358 kubelet[2420]: E0113 21:13:48.980284 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:49.180771 systemd-networkd[1844]: lxc7a7d459d1b09: Link UP
Jan 13 21:13:49.187289 kernel: eth0: renamed from tmp29459
Jan 13 21:13:49.195346 systemd-networkd[1844]: lxc7a7d459d1b09: Gained carrier
Jan 13 21:13:49.980881 kubelet[2420]: E0113 21:13:49.980791 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:50.252566 systemd-networkd[1844]: lxc_health: Gained IPv6LL
Jan 13 21:13:50.636478 systemd-networkd[1844]: lxc7a7d459d1b09: Gained IPv6LL
Jan 13 21:13:50.982333 kubelet[2420]: E0113 21:13:50.981012 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:51.981905 kubelet[2420]: E0113 21:13:51.981826 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:52.982631 kubelet[2420]: E0113 21:13:52.982547 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:53.319057 ntpd[1909]: Listen normally on 8 cilium_host 192.168.1.98:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 8 cilium_host 192.168.1.98:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 9 cilium_net [fe80::2c1f:f3ff:fe43:9862%3]:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 10 cilium_host [fe80::f80c:b6ff:fe55:9530%4]:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 11 cilium_vxlan [fe80::e098:d6ff:fe15:bbf0%5]:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 12 lxc_health [fe80::c0ef:1eff:fe26:169%7]:123
Jan 13 21:13:53.320695 ntpd[1909]: 13 Jan 21:13:53 ntpd[1909]: Listen normally on 13 lxc7a7d459d1b09 [fe80::e9:c2ff:fed4:b1d8%9]:123
Jan 13 21:13:53.319220 ntpd[1909]: Listen normally on 9 cilium_net [fe80::2c1f:f3ff:fe43:9862%3]:123
Jan 13 21:13:53.319371 ntpd[1909]: Listen normally on 10 cilium_host [fe80::f80c:b6ff:fe55:9530%4]:123
Jan 13 21:13:53.319445 ntpd[1909]: Listen normally on 11 cilium_vxlan [fe80::e098:d6ff:fe15:bbf0%5]:123
Jan 13 21:13:53.319520 ntpd[1909]: Listen normally on 12 lxc_health [fe80::c0ef:1eff:fe26:169%7]:123
Jan 13 21:13:53.319593 ntpd[1909]: Listen normally on 13 lxc7a7d459d1b09 [fe80::e9:c2ff:fed4:b1d8%9]:123
Jan 13 21:13:53.983734 kubelet[2420]: E0113 21:13:53.983655 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:54.984676 kubelet[2420]: E0113 21:13:54.984591 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:55.985812 kubelet[2420]: E0113 21:13:55.985734 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:56.986734 kubelet[2420]: E0113 21:13:56.986651 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:57.958032 kubelet[2420]: E0113 21:13:57.957956 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:57.987370 kubelet[2420]: E0113 21:13:57.987292 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:58.057341 containerd[1944]: time="2025-01-13T21:13:58.057098137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:13:58.057341 containerd[1944]: time="2025-01-13T21:13:58.057204853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:13:58.058756 containerd[1944]: time="2025-01-13T21:13:58.057278365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:58.059263 containerd[1944]: time="2025-01-13T21:13:58.059077825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:58.094093 systemd[1]: run-containerd-runc-k8s.io-29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01-runc.DVBxpK.mount: Deactivated successfully.
Jan 13 21:13:58.108544 systemd[1]: Started cri-containerd-29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01.scope - libcontainer container 29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01.
Jan 13 21:13:58.172995 containerd[1944]: time="2025-01-13T21:13:58.172912202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfnbz,Uid:bad43b6f-75fd-4158-95a3-c2785fc1ab62,Namespace:default,Attempt:0,} returns sandbox id \"29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01\""
Jan 13 21:13:58.176996 containerd[1944]: time="2025-01-13T21:13:58.176924834Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:13:58.988143 kubelet[2420]: E0113 21:13:58.988075 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:13:59.989160 kubelet[2420]: E0113 21:13:59.989097 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:00.989447 kubelet[2420]: E0113 21:14:00.989402 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:01.336162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2756085251.mount: Deactivated successfully.
Jan 13 21:14:01.991148 kubelet[2420]: E0113 21:14:01.990955 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:02.824245 containerd[1944]: time="2025-01-13T21:14:02.824161701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:02.826644 containerd[1944]: time="2025-01-13T21:14:02.826268685Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045"
Jan 13 21:14:02.828844 containerd[1944]: time="2025-01-13T21:14:02.828779793Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:02.835001 containerd[1944]: time="2025-01-13T21:14:02.834941109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:02.837048 containerd[1944]: time="2025-01-13T21:14:02.836865537Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 4.659868235s"
Jan 13 21:14:02.837048 containerd[1944]: time="2025-01-13T21:14:02.836917749Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 21:14:02.841457 containerd[1944]: time="2025-01-13T21:14:02.841291569Z" level=info msg="CreateContainer within sandbox \"29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 21:14:02.866186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount235167917.mount: Deactivated successfully.
Jan 13 21:14:02.870306 containerd[1944]: time="2025-01-13T21:14:02.870206637Z" level=info msg="CreateContainer within sandbox \"29459d4b161a213d95f858044ca5cc27d726f26e8d673323698184f5e9e51d01\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"56b117976f892bbb4e65c6a672896ea7420f7fe78b355478d8918217adb446e2\""
Jan 13 21:14:02.871870 containerd[1944]: time="2025-01-13T21:14:02.871522557Z" level=info msg="StartContainer for \"56b117976f892bbb4e65c6a672896ea7420f7fe78b355478d8918217adb446e2\""
Jan 13 21:14:02.924568 systemd[1]: Started cri-containerd-56b117976f892bbb4e65c6a672896ea7420f7fe78b355478d8918217adb446e2.scope - libcontainer container 56b117976f892bbb4e65c6a672896ea7420f7fe78b355478d8918217adb446e2.
Jan 13 21:14:02.972045 containerd[1944]: time="2025-01-13T21:14:02.971985214Z" level=info msg="StartContainer for \"56b117976f892bbb4e65c6a672896ea7420f7fe78b355478d8918217adb446e2\" returns successfully"
Jan 13 21:14:02.991900 kubelet[2420]: E0113 21:14:02.991831 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:03.350392 kubelet[2420]: I0113 21:14:03.350280 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-cfnbz" podStartSLOduration=11.687899813 podStartE2EDuration="16.35025692s" podCreationTimestamp="2025-01-13 21:13:47 +0000 UTC" firstStartedPulling="2025-01-13 21:13:58.176267558 +0000 UTC m=+41.624849656" lastFinishedPulling="2025-01-13 21:14:02.838624653 +0000 UTC m=+46.287206763" observedRunningTime="2025-01-13 21:14:03.348850376 +0000 UTC m=+46.797432606" watchObservedRunningTime="2025-01-13 21:14:03.35025692 +0000 UTC m=+46.798839066"
Jan 13 21:14:03.993025 kubelet[2420]: E0113 21:14:03.992948 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:04.994253 kubelet[2420]: E0113 21:14:04.994148 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:05.995259 kubelet[2420]: E0113 21:14:05.995182 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:06.996402 kubelet[2420]: E0113 21:14:06.996316 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:07.627373 kubelet[2420]: I0113 21:14:07.627269 2420 topology_manager.go:215] "Topology Admit Handler" podUID="7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 21:14:07.640219 systemd[1]: Created slice kubepods-besteffort-pod7e275c1c_b3a5_4ea2_9d71_7b88ffaaca90.slice - libcontainer container kubepods-besteffort-pod7e275c1c_b3a5_4ea2_9d71_7b88ffaaca90.slice.
Jan 13 21:14:07.753938 kubelet[2420]: I0113 21:14:07.753726 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90-data\") pod \"nfs-server-provisioner-0\" (UID: \"7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:14:07.753938 kubelet[2420]: I0113 21:14:07.753804 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffp8n\" (UniqueName: \"kubernetes.io/projected/7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90-kube-api-access-ffp8n\") pod \"nfs-server-provisioner-0\" (UID: \"7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90\") " pod="default/nfs-server-provisioner-0"
Jan 13 21:14:07.946609 containerd[1944]: time="2025-01-13T21:14:07.946483899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90,Namespace:default,Attempt:0,}"
Jan 13 21:14:07.997039 kubelet[2420]: E0113 21:14:07.996952 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:08.011029 (udev-worker)[3874]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:14:08.019371 kernel: eth0: renamed from tmp7c6df
Jan 13 21:14:08.021649 (udev-worker)[3875]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:14:08.023309 systemd-networkd[1844]: lxcdd72d4bfa7cc: Link UP
Jan 13 21:14:08.026791 systemd-networkd[1844]: lxcdd72d4bfa7cc: Gained carrier
Jan 13 21:14:08.396651 containerd[1944]: time="2025-01-13T21:14:08.396504337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:14:08.396819 containerd[1944]: time="2025-01-13T21:14:08.396709609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:14:08.396893 containerd[1944]: time="2025-01-13T21:14:08.396819913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:14:08.397296 containerd[1944]: time="2025-01-13T21:14:08.397173433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:14:08.437600 systemd[1]: Started cri-containerd-7c6df7f8d995a142448d5e7cf9f2bc03a73ba4e0803b9c68e3f588906613298c.scope - libcontainer container 7c6df7f8d995a142448d5e7cf9f2bc03a73ba4e0803b9c68e3f588906613298c.
Jan 13 21:14:08.500609 containerd[1944]: time="2025-01-13T21:14:08.500547421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e275c1c-b3a5-4ea2-9d71-7b88ffaaca90,Namespace:default,Attempt:0,} returns sandbox id \"7c6df7f8d995a142448d5e7cf9f2bc03a73ba4e0803b9c68e3f588906613298c\""
Jan 13 21:14:08.504029 containerd[1944]: time="2025-01-13T21:14:08.503726053Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 21:14:08.997162 kubelet[2420]: E0113 21:14:08.997088 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:09.644952 systemd-networkd[1844]: lxcdd72d4bfa7cc: Gained IPv6LL
Jan 13 21:14:09.997751 kubelet[2420]: E0113 21:14:09.997582 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:10.998574 kubelet[2420]: E0113 21:14:10.998506 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:11.191780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937916051.mount: Deactivated successfully.
Jan 13 21:14:11.999412 kubelet[2420]: E0113 21:14:11.999340 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:12.319059 ntpd[1909]: Listen normally on 14 lxcdd72d4bfa7cc [fe80::6409:99ff:fe1f:6b9d%11]:123
Jan 13 21:14:12.320746 ntpd[1909]: 13 Jan 21:14:12 ntpd[1909]: Listen normally on 14 lxcdd72d4bfa7cc [fe80::6409:99ff:fe1f:6b9d%11]:123
Jan 13 21:14:12.999734 kubelet[2420]: E0113 21:14:12.999613 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:14.000568 kubelet[2420]: E0113 21:14:14.000500 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:14.531626 containerd[1944]: time="2025-01-13T21:14:14.531535135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:14.533926 containerd[1944]: time="2025-01-13T21:14:14.533681587Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623"
Jan 13 21:14:14.536276 containerd[1944]: time="2025-01-13T21:14:14.536121667Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:14.544371 containerd[1944]: time="2025-01-13T21:14:14.544272763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:14.546719 containerd[1944]: time="2025-01-13T21:14:14.546517783Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.042718098s"
Jan 13 21:14:14.546719 containerd[1944]: time="2025-01-13T21:14:14.546582403Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 13 21:14:14.551575 containerd[1944]: time="2025-01-13T21:14:14.551510875Z" level=info msg="CreateContainer within sandbox \"7c6df7f8d995a142448d5e7cf9f2bc03a73ba4e0803b9c68e3f588906613298c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 21:14:14.586183 containerd[1944]: time="2025-01-13T21:14:14.585982076Z" level=info msg="CreateContainer within sandbox \"7c6df7f8d995a142448d5e7cf9f2bc03a73ba4e0803b9c68e3f588906613298c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a6a9afe3844eaa01d47ecfc2dbf5a4ee597acfaaf7ba4d1e240825fc34deaa54\""
Jan 13 21:14:14.587494 containerd[1944]: time="2025-01-13T21:14:14.586981256Z" level=info msg="StartContainer for \"a6a9afe3844eaa01d47ecfc2dbf5a4ee597acfaaf7ba4d1e240825fc34deaa54\""
Jan 13 21:14:14.641560 systemd[1]: Started cri-containerd-a6a9afe3844eaa01d47ecfc2dbf5a4ee597acfaaf7ba4d1e240825fc34deaa54.scope - libcontainer container a6a9afe3844eaa01d47ecfc2dbf5a4ee597acfaaf7ba4d1e240825fc34deaa54.
Jan 13 21:14:14.691708 containerd[1944]: time="2025-01-13T21:14:14.691025192Z" level=info msg="StartContainer for \"a6a9afe3844eaa01d47ecfc2dbf5a4ee597acfaaf7ba4d1e240825fc34deaa54\" returns successfully"
Jan 13 21:14:15.001681 kubelet[2420]: E0113 21:14:15.001584 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:15.425709 kubelet[2420]: I0113 21:14:15.425514 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.38035793 podStartE2EDuration="8.4254953s" podCreationTimestamp="2025-01-13 21:14:07 +0000 UTC" firstStartedPulling="2025-01-13 21:14:08.503036389 +0000 UTC m=+51.951618487" lastFinishedPulling="2025-01-13 21:14:14.548173747 +0000 UTC m=+57.996755857" observedRunningTime="2025-01-13 21:14:15.424697696 +0000 UTC m=+58.873279842" watchObservedRunningTime="2025-01-13 21:14:15.4254953 +0000 UTC m=+58.874077398"
Jan 13 21:14:16.002091 kubelet[2420]: E0113 21:14:16.002017 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:17.003176 kubelet[2420]: E0113 21:14:17.003111 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:17.958257 kubelet[2420]: E0113 21:14:17.958155 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:18.003702 kubelet[2420]: E0113 21:14:18.003207 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:19.003883 kubelet[2420]: E0113 21:14:19.003805 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:20.004938 kubelet[2420]: E0113 21:14:20.004870 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:21.006019 kubelet[2420]: E0113 21:14:21.005938 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:22.006739 kubelet[2420]: E0113 21:14:22.006663 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:23.007395 kubelet[2420]: E0113 21:14:23.007314 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:24.007837 kubelet[2420]: E0113 21:14:24.007763 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:24.491775 kubelet[2420]: I0113 21:14:24.491721 2420 topology_manager.go:215] "Topology Admit Handler" podUID="e4514939-1a38-49c3-b5b7-244659be257a" podNamespace="default" podName="test-pod-1"
Jan 13 21:14:24.504387 systemd[1]: Created slice kubepods-besteffort-pode4514939_1a38_49c3_b5b7_244659be257a.slice - libcontainer container kubepods-besteffort-pode4514939_1a38_49c3_b5b7_244659be257a.slice.
Jan 13 21:14:24.666114 kubelet[2420]: I0113 21:14:24.665749 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2rj\" (UniqueName: \"kubernetes.io/projected/e4514939-1a38-49c3-b5b7-244659be257a-kube-api-access-7l2rj\") pod \"test-pod-1\" (UID: \"e4514939-1a38-49c3-b5b7-244659be257a\") " pod="default/test-pod-1"
Jan 13 21:14:24.666114 kubelet[2420]: I0113 21:14:24.665816 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40d3784a-df3b-4a23-80e1-b24010d5bd24\" (UniqueName: \"kubernetes.io/nfs/e4514939-1a38-49c3-b5b7-244659be257a-pvc-40d3784a-df3b-4a23-80e1-b24010d5bd24\") pod \"test-pod-1\" (UID: \"e4514939-1a38-49c3-b5b7-244659be257a\") " pod="default/test-pod-1"
Jan 13 21:14:24.802393 kernel: FS-Cache: Loaded
Jan 13 21:14:24.846655 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 21:14:24.846796 kernel: RPC: Registered udp transport module.
Jan 13 21:14:24.846844 kernel: RPC: Registered tcp transport module.
Jan 13 21:14:24.848852 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 21:14:24.848953 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 21:14:25.008120 kubelet[2420]: E0113 21:14:25.007974 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:25.175042 kernel: NFS: Registering the id_resolver key type
Jan 13 21:14:25.175221 kernel: Key type id_resolver registered
Jan 13 21:14:25.175602 kernel: Key type id_legacy registered
Jan 13 21:14:25.214014 nfsidmap[4061]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 21:14:25.220493 nfsidmap[4062]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 13 21:14:25.410269 containerd[1944]: time="2025-01-13T21:14:25.410165165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4514939-1a38-49c3-b5b7-244659be257a,Namespace:default,Attempt:0,}"
Jan 13 21:14:25.471351 systemd-networkd[1844]: lxc72b2629bda49: Link UP
Jan 13 21:14:25.473347 (udev-worker)[4054]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:14:25.475296 kernel: eth0: renamed from tmp25c93
Jan 13 21:14:25.482457 systemd-networkd[1844]: lxc72b2629bda49: Gained carrier
Jan 13 21:14:25.807053 containerd[1944]: time="2025-01-13T21:14:25.806410819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:14:25.807053 containerd[1944]: time="2025-01-13T21:14:25.806526571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:14:25.807053 containerd[1944]: time="2025-01-13T21:14:25.806577727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:14:25.807053 containerd[1944]: time="2025-01-13T21:14:25.806731819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:14:25.850563 systemd[1]: Started cri-containerd-25c93b7c481194d05b55ccce0c08e0d6df8fb1cf0effae5ca53caab670130ab8.scope - libcontainer container 25c93b7c481194d05b55ccce0c08e0d6df8fb1cf0effae5ca53caab670130ab8.
Jan 13 21:14:25.905617 containerd[1944]: time="2025-01-13T21:14:25.905548760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4514939-1a38-49c3-b5b7-244659be257a,Namespace:default,Attempt:0,} returns sandbox id \"25c93b7c481194d05b55ccce0c08e0d6df8fb1cf0effae5ca53caab670130ab8\""
Jan 13 21:14:25.909141 containerd[1944]: time="2025-01-13T21:14:25.908703044Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 21:14:26.008917 kubelet[2420]: E0113 21:14:26.008850 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:26.242588 containerd[1944]: time="2025-01-13T21:14:26.242512049Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:14:26.244569 containerd[1944]: time="2025-01-13T21:14:26.244487165Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 21:14:26.250381 containerd[1944]: time="2025-01-13T21:14:26.250299029Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 341.532805ms"
Jan 13 21:14:26.250381 containerd[1944]: time="2025-01-13T21:14:26.250357997Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 21:14:26.253785 containerd[1944]: time="2025-01-13T21:14:26.253712226Z" level=info msg="CreateContainer within sandbox \"25c93b7c481194d05b55ccce0c08e0d6df8fb1cf0effae5ca53caab670130ab8\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 21:14:26.285003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099949410.mount: Deactivated successfully.
Jan 13 21:14:26.293430 containerd[1944]: time="2025-01-13T21:14:26.293274558Z" level=info msg="CreateContainer within sandbox \"25c93b7c481194d05b55ccce0c08e0d6df8fb1cf0effae5ca53caab670130ab8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"319a0a36dfc53150354ee451ab30cfda2c35465c8d477692db5dfa96c177b37d\""
Jan 13 21:14:26.294256 containerd[1944]: time="2025-01-13T21:14:26.293941338Z" level=info msg="StartContainer for \"319a0a36dfc53150354ee451ab30cfda2c35465c8d477692db5dfa96c177b37d\""
Jan 13 21:14:26.343564 systemd[1]: Started cri-containerd-319a0a36dfc53150354ee451ab30cfda2c35465c8d477692db5dfa96c177b37d.scope - libcontainer container 319a0a36dfc53150354ee451ab30cfda2c35465c8d477692db5dfa96c177b37d.
Jan 13 21:14:26.389119 containerd[1944]: time="2025-01-13T21:14:26.388486314Z" level=info msg="StartContainer for \"319a0a36dfc53150354ee451ab30cfda2c35465c8d477692db5dfa96c177b37d\" returns successfully"
Jan 13 21:14:26.458257 kubelet[2420]: I0113 21:14:26.458133 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.114666834 podStartE2EDuration="18.458112391s" podCreationTimestamp="2025-01-13 21:14:08 +0000 UTC" firstStartedPulling="2025-01-13 21:14:25.908112668 +0000 UTC m=+69.356694766" lastFinishedPulling="2025-01-13 21:14:26.251558225 +0000 UTC m=+69.700140323" observedRunningTime="2025-01-13 21:14:26.458011303 +0000 UTC m=+69.906593413" watchObservedRunningTime="2025-01-13 21:14:26.458112391 +0000 UTC m=+69.906694537"
Jan 13 21:14:26.668516 systemd-networkd[1844]: lxc72b2629bda49: Gained IPv6LL
Jan 13 21:14:27.009340 kubelet[2420]: E0113 21:14:27.009158 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:28.009879 kubelet[2420]: E0113 21:14:28.009815 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:29.010946 kubelet[2420]: E0113 21:14:29.010883 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:29.319076 ntpd[1909]: Listen normally on 15 lxc72b2629bda49 [fe80::4dd:d1ff:fea1:286b%13]:123
Jan 13 21:14:29.319818 ntpd[1909]: 13 Jan 21:14:29 ntpd[1909]: Listen normally on 15 lxc72b2629bda49 [fe80::4dd:d1ff:fea1:286b%13]:123
Jan 13 21:14:30.011957 kubelet[2420]: E0113 21:14:30.011898 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:31.013005 kubelet[2420]: E0113 21:14:31.012928 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:32.013321 kubelet[2420]: E0113 21:14:32.013257 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:33.013660 kubelet[2420]: E0113 21:14:33.013575 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:34.014103 kubelet[2420]: E0113 21:14:34.014018 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:34.149080 systemd[1]: run-containerd-runc-k8s.io-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda-runc.PHWRj1.mount: Deactivated successfully.
Jan 13 21:14:34.167900 containerd[1944]: time="2025-01-13T21:14:34.167633641Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:14:34.179066 containerd[1944]: time="2025-01-13T21:14:34.178840357Z" level=info msg="StopContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" with timeout 2 (s)"
Jan 13 21:14:34.179732 containerd[1944]: time="2025-01-13T21:14:34.179674117Z" level=info msg="Stop container \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" with signal terminated"
Jan 13 21:14:34.191736 systemd-networkd[1844]: lxc_health: Link DOWN
Jan 13 21:14:34.191756 systemd-networkd[1844]: lxc_health: Lost carrier
Jan 13 21:14:34.207317 systemd[1]: cri-containerd-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda.scope: Deactivated successfully.
Jan 13 21:14:34.208085 systemd[1]: cri-containerd-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda.scope: Consumed 15.453s CPU time.
Jan 13 21:14:34.243743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda-rootfs.mount: Deactivated successfully.
Jan 13 21:14:34.506496 containerd[1944]: time="2025-01-13T21:14:34.506410862Z" level=info msg="shim disconnected" id=f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda namespace=k8s.io
Jan 13 21:14:34.506496 containerd[1944]: time="2025-01-13T21:14:34.506486270Z" level=warning msg="cleaning up after shim disconnected" id=f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda namespace=k8s.io
Jan 13 21:14:34.506496 containerd[1944]: time="2025-01-13T21:14:34.506509538Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:34.525969 containerd[1944]: time="2025-01-13T21:14:34.525869115Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:14:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:14:34.530643 containerd[1944]: time="2025-01-13T21:14:34.530490123Z" level=info msg="StopContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" returns successfully"
Jan 13 21:14:34.531631 containerd[1944]: time="2025-01-13T21:14:34.531574299Z" level=info msg="StopPodSandbox for \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\""
Jan 13 21:14:34.531760 containerd[1944]: time="2025-01-13T21:14:34.531663087Z" level=info msg="Container to stop \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:14:34.531760 containerd[1944]: time="2025-01-13T21:14:34.531692043Z" level=info msg="Container to stop \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:14:34.531760 containerd[1944]: time="2025-01-13T21:14:34.531738891Z" level=info msg="Container to stop \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:14:34.531933 containerd[1944]: time="2025-01-13T21:14:34.531768819Z" level=info msg="Container to stop \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:14:34.531933 containerd[1944]: time="2025-01-13T21:14:34.531794151Z" level=info msg="Container to stop \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:14:34.535497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c-shm.mount: Deactivated successfully.
Jan 13 21:14:34.545467 systemd[1]: cri-containerd-601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c.scope: Deactivated successfully.
Jan 13 21:14:34.579934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c-rootfs.mount: Deactivated successfully.
Jan 13 21:14:34.585133 containerd[1944]: time="2025-01-13T21:14:34.584820543Z" level=info msg="shim disconnected" id=601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c namespace=k8s.io
Jan 13 21:14:34.585133 containerd[1944]: time="2025-01-13T21:14:34.584873343Z" level=warning msg="cleaning up after shim disconnected" id=601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c namespace=k8s.io
Jan 13 21:14:34.585133 containerd[1944]: time="2025-01-13T21:14:34.584892831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:34.607101 containerd[1944]: time="2025-01-13T21:14:34.606979743Z" level=info msg="TearDown network for sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" successfully"
Jan 13 21:14:34.607101 containerd[1944]: time="2025-01-13T21:14:34.607046583Z" level=info msg="StopPodSandbox for \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" returns successfully"
Jan 13 21:14:34.724057 kubelet[2420]: I0113 21:14:34.723940 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-etc-cni-netd\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.724057 kubelet[2420]: I0113 21:14:34.723995 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724035 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-hubble-tls\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724447 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-kernel\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724491 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-net\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724536 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4mr5\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-kube-api-access-c4mr5\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724573 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-bpf-maps\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726253 kubelet[2420]: I0113 21:14:34.724610 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-lib-modules\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724642 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-hostproc\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724674 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cni-path\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724709 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-run\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724741 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-xtables-lock\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724776 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-cgroup\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.726659 kubelet[2420]: I0113 21:14:34.724817 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/514ec45b-bcfb-46a7-a921-65de721e8974-clustermesh-secrets\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.727018 kubelet[2420]: I0113 21:14:34.724858 2420 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-config-path\") pod \"514ec45b-bcfb-46a7-a921-65de721e8974\" (UID: \"514ec45b-bcfb-46a7-a921-65de721e8974\") "
Jan 13 21:14:34.727018 kubelet[2420]: I0113 21:14:34.724909 2420 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-etc-cni-netd\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.727018 kubelet[2420]: I0113 21:14:34.725032 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-hostproc" (OuterVolumeSpecName: "hostproc") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.727018 kubelet[2420]: I0113 21:14:34.725201 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.727018 kubelet[2420]: I0113 21:14:34.725274 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.727351 kubelet[2420]: I0113 21:14:34.725636 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.729742 kubelet[2420]: I0113 21:14:34.729675 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.729896 kubelet[2420]: I0113 21:14:34.729797 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.729896 kubelet[2420]: I0113 21:14:34.729883 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cni-path" (OuterVolumeSpecName: "cni-path") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.730016 kubelet[2420]: I0113 21:14:34.729951 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.730182 kubelet[2420]: I0113 21:14:34.730021 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 21:14:34.732124 kubelet[2420]: I0113 21:14:34.732055 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 21:14:34.735610 kubelet[2420]: I0113 21:14:34.735538 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:14:34.736867 kubelet[2420]: I0113 21:14:34.736798 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-kube-api-access-c4mr5" (OuterVolumeSpecName: "kube-api-access-c4mr5") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "kube-api-access-c4mr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 21:14:34.737589 kubelet[2420]: I0113 21:14:34.737529 2420 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/514ec45b-bcfb-46a7-a921-65de721e8974-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "514ec45b-bcfb-46a7-a921-65de721e8974" (UID: "514ec45b-bcfb-46a7-a921-65de721e8974"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.825909 2420 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-run\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.825961 2420 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-xtables-lock\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.825985 2420 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-cgroup\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.826005 2420 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/514ec45b-bcfb-46a7-a921-65de721e8974-clustermesh-secrets\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.826044 2420 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/514ec45b-bcfb-46a7-a921-65de721e8974-cilium-config-path\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.826069 2420 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-hubble-tls\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.826089 2420 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-kernel\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827293 kubelet[2420]: I0113 21:14:34.826108 2420 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-host-proc-sys-net\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827765 kubelet[2420]: I0113 21:14:34.826137 2420 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c4mr5\" (UniqueName: \"kubernetes.io/projected/514ec45b-bcfb-46a7-a921-65de721e8974-kube-api-access-c4mr5\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827765 kubelet[2420]: I0113 21:14:34.826169 2420 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-bpf-maps\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827765 kubelet[2420]: I0113 21:14:34.826192 2420 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-lib-modules\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827765 kubelet[2420]: I0113 21:14:34.826212 2420 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-hostproc\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:34.827765 kubelet[2420]: I0113 21:14:34.826269 2420 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/514ec45b-bcfb-46a7-a921-65de721e8974-cni-path\") on node \"172.31.29.222\" DevicePath \"\""
Jan 13 21:14:35.015076 kubelet[2420]: E0113 21:14:35.015010 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:14:35.139270 systemd[1]: var-lib-kubelet-pods-514ec45b\x2dbcfb\x2d46a7\x2da921\x2d65de721e8974-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4mr5.mount: Deactivated successfully.
Jan 13 21:14:35.139445 systemd[1]: var-lib-kubelet-pods-514ec45b\x2dbcfb\x2d46a7\x2da921\x2d65de721e8974-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 21:14:35.139585 systemd[1]: var-lib-kubelet-pods-514ec45b\x2dbcfb\x2d46a7\x2da921\x2d65de721e8974-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 21:14:35.473356 kubelet[2420]: I0113 21:14:35.471161 2420 scope.go:117] "RemoveContainer" containerID="f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda"
Jan 13 21:14:35.473998 containerd[1944]: time="2025-01-13T21:14:35.473937783Z" level=info msg="RemoveContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\""
Jan 13 21:14:35.482170 containerd[1944]: time="2025-01-13T21:14:35.481992399Z" level=info msg="RemoveContainer for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" returns successfully"
Jan 13 21:14:35.482588 kubelet[2420]: I0113 21:14:35.482526 2420 scope.go:117] "RemoveContainer" containerID="83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9"
Jan 13 21:14:35.482899 systemd[1]: Removed slice kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice - libcontainer container kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice.
Jan 13 21:14:35.483545 systemd[1]: kubepods-burstable-pod514ec45b_bcfb_46a7_a921_65de721e8974.slice: Consumed 15.605s CPU time.
Jan 13 21:14:35.486897 containerd[1944]: time="2025-01-13T21:14:35.485651751Z" level=info msg="RemoveContainer for \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\"" Jan 13 21:14:35.490978 containerd[1944]: time="2025-01-13T21:14:35.490475163Z" level=info msg="RemoveContainer for \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\" returns successfully" Jan 13 21:14:35.491174 kubelet[2420]: I0113 21:14:35.490812 2420 scope.go:117] "RemoveContainer" containerID="f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41" Jan 13 21:14:35.493132 containerd[1944]: time="2025-01-13T21:14:35.493072635Z" level=info msg="RemoveContainer for \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\"" Jan 13 21:14:35.498735 containerd[1944]: time="2025-01-13T21:14:35.498649995Z" level=info msg="RemoveContainer for \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\" returns successfully" Jan 13 21:14:35.499912 kubelet[2420]: I0113 21:14:35.499311 2420 scope.go:117] "RemoveContainer" containerID="0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267" Jan 13 21:14:35.502053 containerd[1944]: time="2025-01-13T21:14:35.501957327Z" level=info msg="RemoveContainer for \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\"" Jan 13 21:14:35.505981 containerd[1944]: time="2025-01-13T21:14:35.505902351Z" level=info msg="RemoveContainer for \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\" returns successfully" Jan 13 21:14:35.506643 kubelet[2420]: I0113 21:14:35.506472 2420 scope.go:117] "RemoveContainer" containerID="b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2" Jan 13 21:14:35.509503 containerd[1944]: time="2025-01-13T21:14:35.509092071Z" level=info msg="RemoveContainer for \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\"" Jan 13 21:14:35.513059 containerd[1944]: time="2025-01-13T21:14:35.512993031Z" level=info msg="RemoveContainer 
for \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\" returns successfully" Jan 13 21:14:35.513506 kubelet[2420]: I0113 21:14:35.513435 2420 scope.go:117] "RemoveContainer" containerID="f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda" Jan 13 21:14:35.514432 containerd[1944]: time="2025-01-13T21:14:35.514204528Z" level=error msg="ContainerStatus for \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\": not found" Jan 13 21:14:35.514599 kubelet[2420]: E0113 21:14:35.514556 2420 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\": not found" containerID="f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda" Jan 13 21:14:35.514746 kubelet[2420]: I0113 21:14:35.514610 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda"} err="failed to get container status \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\": rpc error: code = NotFound desc = an error occurred when try to find container \"f047f3650284523b11325bce7e23605aa1711ba665295c5d231ede4a648f3eda\": not found" Jan 13 21:14:35.514746 kubelet[2420]: I0113 21:14:35.514740 2420 scope.go:117] "RemoveContainer" containerID="83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9" Jan 13 21:14:35.515387 containerd[1944]: time="2025-01-13T21:14:35.515105644Z" level=error msg="ContainerStatus for \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\": not found" Jan 13 21:14:35.515508 kubelet[2420]: E0113 21:14:35.515452 2420 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\": not found" containerID="83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9" Jan 13 21:14:35.515585 kubelet[2420]: I0113 21:14:35.515508 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9"} err="failed to get container status \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"83ec54a0ba43db7eed0349303e4829f3ad9a7f7e393310b98a465d238a1515a9\": not found" Jan 13 21:14:35.515585 kubelet[2420]: I0113 21:14:35.515547 2420 scope.go:117] "RemoveContainer" containerID="f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41" Jan 13 21:14:35.515970 containerd[1944]: time="2025-01-13T21:14:35.515906332Z" level=error msg="ContainerStatus for \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\": not found" Jan 13 21:14:35.516360 kubelet[2420]: E0113 21:14:35.516162 2420 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\": not found" containerID="f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41" Jan 13 21:14:35.516360 kubelet[2420]: I0113 21:14:35.516204 2420 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41"} err="failed to get container status \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\": rpc error: code = NotFound desc = an error occurred when try to find container \"f093f6c32de66b321db6b3c1be4ee4ff890263d58696c0596a62b72c62fa7d41\": not found" Jan 13 21:14:35.516360 kubelet[2420]: I0113 21:14:35.516272 2420 scope.go:117] "RemoveContainer" containerID="0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267" Jan 13 21:14:35.517271 containerd[1944]: time="2025-01-13T21:14:35.516877096Z" level=error msg="ContainerStatus for \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\": not found" Jan 13 21:14:35.517473 kubelet[2420]: E0113 21:14:35.517132 2420 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\": not found" containerID="0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267" Jan 13 21:14:35.517473 kubelet[2420]: I0113 21:14:35.517177 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267"} err="failed to get container status \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b5045205e72ea3e9e4f19593548cac0845d427e0f4da04bb068420e2af21267\": not found" Jan 13 21:14:35.517473 kubelet[2420]: I0113 21:14:35.517214 2420 scope.go:117] "RemoveContainer" containerID="b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2" Jan 13 21:14:35.518172 
containerd[1944]: time="2025-01-13T21:14:35.518012992Z" level=error msg="ContainerStatus for \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\": not found" Jan 13 21:14:35.518446 kubelet[2420]: E0113 21:14:35.518314 2420 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\": not found" containerID="b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2" Jan 13 21:14:35.518446 kubelet[2420]: I0113 21:14:35.518361 2420 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2"} err="failed to get container status \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5108a2ac0f3786724614b54ad170b199ff99f9a580743bc562cb0d9f95002c2\": not found" Jan 13 21:14:36.015408 kubelet[2420]: E0113 21:14:36.015334 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:36.114683 kubelet[2420]: I0113 21:14:36.114620 2420 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" path="/var/lib/kubelet/pods/514ec45b-bcfb-46a7-a921-65de721e8974/volumes" Jan 13 21:14:36.319160 ntpd[1909]: Deleting interface #12 lxc_health, fe80::c0ef:1eff:fe26:169%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 13 21:14:36.319827 ntpd[1909]: 13 Jan 21:14:36 ntpd[1909]: Deleting interface #12 lxc_health, fe80::c0ef:1eff:fe26:169%7#123, interface stats: received=0, sent=0, dropped=0, 
active_time=43 secs Jan 13 21:14:37.015770 kubelet[2420]: E0113 21:14:37.015701 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:37.957843 kubelet[2420]: E0113 21:14:37.957779 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:38.016351 kubelet[2420]: E0113 21:14:38.016282 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:38.017908 kubelet[2420]: I0113 21:14:38.017860 2420 topology_manager.go:215] "Topology Admit Handler" podUID="3bebd356-0b6a-498a-ad6a-b8711a417526" podNamespace="kube-system" podName="cilium-operator-599987898-bbkmw" Jan 13 21:14:38.018010 kubelet[2420]: E0113 21:14:38.017939 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="mount-cgroup" Jan 13 21:14:38.018010 kubelet[2420]: E0113 21:14:38.017960 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="apply-sysctl-overwrites" Jan 13 21:14:38.018010 kubelet[2420]: E0113 21:14:38.017975 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="mount-bpf-fs" Jan 13 21:14:38.018010 kubelet[2420]: E0113 21:14:38.017991 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="cilium-agent" Jan 13 21:14:38.018010 kubelet[2420]: E0113 21:14:38.018005 2420 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="clean-cilium-state" Jan 13 21:14:38.018363 kubelet[2420]: I0113 21:14:38.018062 2420 memory_manager.go:354] "RemoveStaleState removing state" podUID="514ec45b-bcfb-46a7-a921-65de721e8974" containerName="cilium-agent" 
Jan 13 21:14:38.028254 systemd[1]: Created slice kubepods-besteffort-pod3bebd356_0b6a_498a_ad6a_b8711a417526.slice - libcontainer container kubepods-besteffort-pod3bebd356_0b6a_498a_ad6a_b8711a417526.slice. Jan 13 21:14:38.043104 kubelet[2420]: W0113 21:14:38.043065 2420 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.29.222" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.043377 kubelet[2420]: E0113 21:14:38.043351 2420 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.29.222" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.080820 kubelet[2420]: I0113 21:14:38.080773 2420 topology_manager.go:215] "Topology Admit Handler" podUID="7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7" podNamespace="kube-system" podName="cilium-mnl65" Jan 13 21:14:38.091790 systemd[1]: Created slice kubepods-burstable-pod7e01fde7_bd72_44c6_ae5b_99ccd8fb2ff7.slice - libcontainer container kubepods-burstable-pod7e01fde7_bd72_44c6_ae5b_99ccd8fb2ff7.slice. 
Jan 13 21:14:38.097298 kubelet[2420]: W0113 21:14:38.097147 2420 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.097467 kubelet[2420]: E0113 21:14:38.097309 2420 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.097467 kubelet[2420]: W0113 21:14:38.097198 2420 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.097467 kubelet[2420]: E0113 21:14:38.097368 2420 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.097656 kubelet[2420]: W0113 21:14:38.097512 2420 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.097656 kubelet[2420]: E0113 21:14:38.097539 2420 
reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.29.222" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.29.222' and this object Jan 13 21:14:38.146358 kubelet[2420]: I0113 21:14:38.145625 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bebd356-0b6a-498a-ad6a-b8711a417526-cilium-config-path\") pod \"cilium-operator-599987898-bbkmw\" (UID: \"3bebd356-0b6a-498a-ad6a-b8711a417526\") " pod="kube-system/cilium-operator-599987898-bbkmw" Jan 13 21:14:38.146358 kubelet[2420]: I0113 21:14:38.145690 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8tx\" (UniqueName: \"kubernetes.io/projected/3bebd356-0b6a-498a-ad6a-b8711a417526-kube-api-access-bv8tx\") pod \"cilium-operator-599987898-bbkmw\" (UID: \"3bebd356-0b6a-498a-ad6a-b8711a417526\") " pod="kube-system/cilium-operator-599987898-bbkmw" Jan 13 21:14:38.151667 kubelet[2420]: E0113 21:14:38.151541 2420 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246546 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-ipsec-secrets\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246613 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-hubble-tls\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246649 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-run\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246683 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cni-path\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246715 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-etc-cni-netd\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248300 kubelet[2420]: I0113 21:14:38.246748 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-lib-modules\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246782 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwbp6\" (UniqueName: \"kubernetes.io/projected/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-kube-api-access-xwbp6\") pod \"cilium-mnl65\" (UID: 
\"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246852 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-cgroup\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246889 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-host-proc-sys-net\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246927 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-xtables-lock\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246962 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-clustermesh-secrets\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.248693 kubelet[2420]: I0113 21:14:38.246997 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-bpf-maps\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.249057 kubelet[2420]: I0113 
21:14:38.247047 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-hostproc\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.249057 kubelet[2420]: I0113 21:14:38.247087 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-config-path\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:38.249057 kubelet[2420]: I0113 21:14:38.247121 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-host-proc-sys-kernel\") pod \"cilium-mnl65\" (UID: \"7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7\") " pod="kube-system/cilium-mnl65" Jan 13 21:14:39.017369 kubelet[2420]: E0113 21:14:39.017299 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:39.248455 kubelet[2420]: E0113 21:14:39.248393 2420 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:14:39.248587 kubelet[2420]: E0113 21:14:39.248514 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bebd356-0b6a-498a-ad6a-b8711a417526-cilium-config-path podName:3bebd356-0b6a-498a-ad6a-b8711a417526 nodeName:}" failed. No retries permitted until 2025-01-13 21:14:39.748483082 +0000 UTC m=+83.197065192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3bebd356-0b6a-498a-ad6a-b8711a417526-cilium-config-path") pod "cilium-operator-599987898-bbkmw" (UID: "3bebd356-0b6a-498a-ad6a-b8711a417526") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:14:39.349001 kubelet[2420]: E0113 21:14:39.348569 2420 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349001 kubelet[2420]: E0113 21:14:39.348601 2420 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349001 kubelet[2420]: E0113 21:14:39.348667 2420 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:14:39.349001 kubelet[2420]: E0113 21:14:39.348685 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-ipsec-secrets podName:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7 nodeName:}" failed. No retries permitted until 2025-01-13 21:14:39.848658315 +0000 UTC m=+83.297240425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-ipsec-secrets") pod "cilium-mnl65" (UID: "7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349001 kubelet[2420]: E0113 21:14:39.348715 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-config-path podName:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7 nodeName:}" failed. No retries permitted until 2025-01-13 21:14:39.848698239 +0000 UTC m=+83.297280337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-cilium-config-path") pod "cilium-mnl65" (UID: "7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:14:39.349787 kubelet[2420]: E0113 21:14:39.348751 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-clustermesh-secrets podName:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7 nodeName:}" failed. No retries permitted until 2025-01-13 21:14:39.848733615 +0000 UTC m=+83.297315725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-clustermesh-secrets") pod "cilium-mnl65" (UID: "7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349787 kubelet[2420]: E0113 21:14:39.348580 2420 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349787 kubelet[2420]: E0113 21:14:39.348775 2420 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-mnl65: failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.349787 kubelet[2420]: E0113 21:14:39.348826 2420 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-hubble-tls podName:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7 nodeName:}" failed. No retries permitted until 2025-01-13 21:14:39.848811303 +0000 UTC m=+83.297393413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7-hubble-tls") pod "cilium-mnl65" (UID: "7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7") : failed to sync secret cache: timed out waiting for the condition Jan 13 21:14:39.817315 kubelet[2420]: I0113 21:14:39.817214 2420 setters.go:580] "Node became not ready" node="172.31.29.222" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:14:39Z","lastTransitionTime":"2025-01-13T21:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 21:14:39.833566 containerd[1944]: time="2025-01-13T21:14:39.833505729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bbkmw,Uid:3bebd356-0b6a-498a-ad6a-b8711a417526,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:39.910109 containerd[1944]: time="2025-01-13T21:14:39.909910209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:39.910109 containerd[1944]: time="2025-01-13T21:14:39.910025097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:39.910640 containerd[1944]: time="2025-01-13T21:14:39.910078953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:39.910640 containerd[1944]: time="2025-01-13T21:14:39.910347201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:39.947571 systemd[1]: Started cri-containerd-f0bc0436d0258467c370ecd4e382c80869f89f6f5c646ce1bcbf47412355257f.scope - libcontainer container f0bc0436d0258467c370ecd4e382c80869f89f6f5c646ce1bcbf47412355257f. Jan 13 21:14:40.005756 containerd[1944]: time="2025-01-13T21:14:40.005655834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bbkmw,Uid:3bebd356-0b6a-498a-ad6a-b8711a417526,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0bc0436d0258467c370ecd4e382c80869f89f6f5c646ce1bcbf47412355257f\"" Jan 13 21:14:40.009176 containerd[1944]: time="2025-01-13T21:14:40.009119154Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:14:40.018172 kubelet[2420]: E0113 21:14:40.018082 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:40.203676 containerd[1944]: time="2025-01-13T21:14:40.203471431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnl65,Uid:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7,Namespace:kube-system,Attempt:0,}" Jan 13 21:14:40.234125 containerd[1944]: time="2025-01-13T21:14:40.233482663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:14:40.234125 containerd[1944]: time="2025-01-13T21:14:40.233593219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:14:40.234125 containerd[1944]: time="2025-01-13T21:14:40.233629111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.234125 containerd[1944]: time="2025-01-13T21:14:40.233775439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:14:40.262544 systemd[1]: Started cri-containerd-6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226.scope - libcontainer container 6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226. Jan 13 21:14:40.304373 containerd[1944]: time="2025-01-13T21:14:40.304305763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mnl65,Uid:7e01fde7-bd72-44c6-ae5b-99ccd8fb2ff7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\"" Jan 13 21:14:40.310959 containerd[1944]: time="2025-01-13T21:14:40.310848559Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:14:40.324142 containerd[1944]: time="2025-01-13T21:14:40.324055231Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5\"" Jan 13 21:14:40.325124 containerd[1944]: time="2025-01-13T21:14:40.325055851Z" level=info msg="StartContainer for \"0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5\"" Jan 13 21:14:40.370570 systemd[1]: Started cri-containerd-0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5.scope - libcontainer container 0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5. 
Jan 13 21:14:40.415752 containerd[1944]: time="2025-01-13T21:14:40.415681208Z" level=info msg="StartContainer for \"0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5\" returns successfully" Jan 13 21:14:40.430799 systemd[1]: cri-containerd-0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5.scope: Deactivated successfully. Jan 13 21:14:40.474371 containerd[1944]: time="2025-01-13T21:14:40.474062156Z" level=info msg="shim disconnected" id=0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5 namespace=k8s.io Jan 13 21:14:40.474371 containerd[1944]: time="2025-01-13T21:14:40.474150740Z" level=warning msg="cleaning up after shim disconnected" id=0520b7e4f1b04e2243d4c724fb451bfd203678dfb57ecab07b8cdcca7e5a5db5 namespace=k8s.io Jan 13 21:14:40.474371 containerd[1944]: time="2025-01-13T21:14:40.474173540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:41.019277 kubelet[2420]: E0113 21:14:41.019172 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:41.499346 containerd[1944]: time="2025-01-13T21:14:41.499202325Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:14:41.517948 containerd[1944]: time="2025-01-13T21:14:41.517880949Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e\"" Jan 13 21:14:41.519591 containerd[1944]: time="2025-01-13T21:14:41.519431601Z" level=info msg="StartContainer for \"326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e\"" Jan 13 21:14:41.571608 systemd[1]: Started 
cri-containerd-326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e.scope - libcontainer container 326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e. Jan 13 21:14:41.618569 containerd[1944]: time="2025-01-13T21:14:41.618485278Z" level=info msg="StartContainer for \"326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e\" returns successfully" Jan 13 21:14:41.632327 systemd[1]: cri-containerd-326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e.scope: Deactivated successfully. Jan 13 21:14:41.670067 containerd[1944]: time="2025-01-13T21:14:41.669922438Z" level=info msg="shim disconnected" id=326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e namespace=k8s.io Jan 13 21:14:41.670067 containerd[1944]: time="2025-01-13T21:14:41.669993574Z" level=warning msg="cleaning up after shim disconnected" id=326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e namespace=k8s.io Jan 13 21:14:41.670067 containerd[1944]: time="2025-01-13T21:14:41.670033822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:41.846812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-326270fe28e8e8d167570d60a697c6f00ef7952bce29d7a00ca9c6e15fdcb44e-rootfs.mount: Deactivated successfully. 
Jan 13 21:14:42.020064 kubelet[2420]: E0113 21:14:42.019995 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:42.504664 containerd[1944]: time="2025-01-13T21:14:42.504409822Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:14:42.535879 containerd[1944]: time="2025-01-13T21:14:42.535755526Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131\"" Jan 13 21:14:42.537131 containerd[1944]: time="2025-01-13T21:14:42.536750842Z" level=info msg="StartContainer for \"3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131\"" Jan 13 21:14:42.593580 systemd[1]: Started cri-containerd-3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131.scope - libcontainer container 3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131. Jan 13 21:14:42.647162 systemd[1]: cri-containerd-3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131.scope: Deactivated successfully. 
Jan 13 21:14:42.649705 containerd[1944]: time="2025-01-13T21:14:42.649512467Z" level=info msg="StartContainer for \"3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131\" returns successfully" Jan 13 21:14:42.694938 containerd[1944]: time="2025-01-13T21:14:42.694857059Z" level=info msg="shim disconnected" id=3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131 namespace=k8s.io Jan 13 21:14:42.695371 containerd[1944]: time="2025-01-13T21:14:42.694937015Z" level=warning msg="cleaning up after shim disconnected" id=3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131 namespace=k8s.io Jan 13 21:14:42.695371 containerd[1944]: time="2025-01-13T21:14:42.694959803Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:42.846899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ecd42ec4c58989cf13521bfcd6bb36d536a951dcf92878981da6f59be512131-rootfs.mount: Deactivated successfully. Jan 13 21:14:43.020955 kubelet[2420]: E0113 21:14:43.020887 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:43.153473 kubelet[2420]: E0113 21:14:43.153413 2420 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:14:43.519456 containerd[1944]: time="2025-01-13T21:14:43.518995847Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:14:43.545802 containerd[1944]: time="2025-01-13T21:14:43.545709311Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172\"" Jan 13 
21:14:43.547282 containerd[1944]: time="2025-01-13T21:14:43.547085747Z" level=info msg="StartContainer for \"377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172\"" Jan 13 21:14:43.598571 systemd[1]: Started cri-containerd-377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172.scope - libcontainer container 377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172. Jan 13 21:14:43.642592 systemd[1]: cri-containerd-377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172.scope: Deactivated successfully. Jan 13 21:14:43.651790 containerd[1944]: time="2025-01-13T21:14:43.651647856Z" level=info msg="StartContainer for \"377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172\" returns successfully" Jan 13 21:14:43.680892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172-rootfs.mount: Deactivated successfully. Jan 13 21:14:43.692775 containerd[1944]: time="2025-01-13T21:14:43.692690208Z" level=info msg="shim disconnected" id=377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172 namespace=k8s.io Jan 13 21:14:43.692775 containerd[1944]: time="2025-01-13T21:14:43.692766660Z" level=warning msg="cleaning up after shim disconnected" id=377bfa30b4ce3d56136cf5c7cc23496804c98cd7923d0b08c14dd4ac5706a172 namespace=k8s.io Jan 13 21:14:43.693115 containerd[1944]: time="2025-01-13T21:14:43.692791680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:14:44.021534 kubelet[2420]: E0113 21:14:44.021467 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:44.524846 containerd[1944]: time="2025-01-13T21:14:44.524532912Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:14:44.562396 containerd[1944]: 
time="2025-01-13T21:14:44.562210764Z" level=info msg="CreateContainer within sandbox \"6be551b773adfa16169b2baa5957b336a1d1ea185c3ef50d6ad94c9b379ff226\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517\"" Jan 13 21:14:44.563268 containerd[1944]: time="2025-01-13T21:14:44.563111556Z" level=info msg="StartContainer for \"3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517\"" Jan 13 21:14:44.621539 systemd[1]: Started cri-containerd-3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517.scope - libcontainer container 3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517. Jan 13 21:14:44.677570 containerd[1944]: time="2025-01-13T21:14:44.677461801Z" level=info msg="StartContainer for \"3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517\" returns successfully" Jan 13 21:14:45.022183 kubelet[2420]: E0113 21:14:45.021895 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:45.427481 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 21:14:45.574738 kubelet[2420]: I0113 21:14:45.573909 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mnl65" podStartSLOduration=8.573889141 podStartE2EDuration="8.573889141s" podCreationTimestamp="2025-01-13 21:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:14:45.571914229 +0000 UTC m=+89.020496363" watchObservedRunningTime="2025-01-13 21:14:45.573889141 +0000 UTC m=+89.022471263" Jan 13 21:14:46.022867 kubelet[2420]: E0113 21:14:46.022818 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:46.093297 containerd[1944]: time="2025-01-13T21:14:46.093016740Z" 
level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:46.094980 containerd[1944]: time="2025-01-13T21:14:46.094904232Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137102" Jan 13 21:14:46.097412 containerd[1944]: time="2025-01-13T21:14:46.097336596Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:14:46.100325 containerd[1944]: time="2025-01-13T21:14:46.100252476Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.09103809s" Jan 13 21:14:46.100482 containerd[1944]: time="2025-01-13T21:14:46.100322376Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:14:46.104962 containerd[1944]: time="2025-01-13T21:14:46.104775588Z" level=info msg="CreateContainer within sandbox \"f0bc0436d0258467c370ecd4e382c80869f89f6f5c646ce1bcbf47412355257f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:14:46.140385 containerd[1944]: time="2025-01-13T21:14:46.140203932Z" level=info msg="CreateContainer within sandbox \"f0bc0436d0258467c370ecd4e382c80869f89f6f5c646ce1bcbf47412355257f\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"29b1f37285b0ff71ac1592adfe0a1835626ddd91c7371b23ca875c3f5af105ec\"" Jan 13 21:14:46.141656 containerd[1944]: time="2025-01-13T21:14:46.141272364Z" level=info msg="StartContainer for \"29b1f37285b0ff71ac1592adfe0a1835626ddd91c7371b23ca875c3f5af105ec\"" Jan 13 21:14:46.196529 systemd[1]: Started cri-containerd-29b1f37285b0ff71ac1592adfe0a1835626ddd91c7371b23ca875c3f5af105ec.scope - libcontainer container 29b1f37285b0ff71ac1592adfe0a1835626ddd91c7371b23ca875c3f5af105ec. Jan 13 21:14:46.244072 containerd[1944]: time="2025-01-13T21:14:46.243983509Z" level=info msg="StartContainer for \"29b1f37285b0ff71ac1592adfe0a1835626ddd91c7371b23ca875c3f5af105ec\" returns successfully" Jan 13 21:14:47.024394 kubelet[2420]: E0113 21:14:47.024328 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:47.474776 systemd[1]: run-containerd-runc-k8s.io-3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517-runc.IJAHkh.mount: Deactivated successfully. Jan 13 21:14:48.025387 kubelet[2420]: E0113 21:14:48.025305 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:49.025864 kubelet[2420]: E0113 21:14:49.025785 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:49.625823 (udev-worker)[5176]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:14:49.628055 (udev-worker)[5178]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:14:49.632948 systemd-networkd[1844]: lxc_health: Link UP Jan 13 21:14:49.649562 systemd-networkd[1844]: lxc_health: Gained carrier Jan 13 21:14:50.026314 kubelet[2420]: E0113 21:14:50.026178 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:50.236028 kubelet[2420]: I0113 21:14:50.235014 2420 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bbkmw" podStartSLOduration=7.140904599 podStartE2EDuration="13.234992429s" podCreationTimestamp="2025-01-13 21:14:37 +0000 UTC" firstStartedPulling="2025-01-13 21:14:40.00844101 +0000 UTC m=+83.457023132" lastFinishedPulling="2025-01-13 21:14:46.102528864 +0000 UTC m=+89.551110962" observedRunningTime="2025-01-13 21:14:46.56550803 +0000 UTC m=+90.014090164" watchObservedRunningTime="2025-01-13 21:14:50.234992429 +0000 UTC m=+93.683574551" Jan 13 21:14:50.924520 systemd-networkd[1844]: lxc_health: Gained IPv6LL Jan 13 21:14:51.026719 kubelet[2420]: E0113 21:14:51.026639 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:52.027409 kubelet[2420]: E0113 21:14:52.027339 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:53.028636 kubelet[2420]: E0113 21:14:53.028510 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:53.319154 ntpd[1909]: Listen normally on 16 lxc_health [fe80::9c44:46ff:fe96:856f%15]:123 Jan 13 21:14:53.319800 ntpd[1909]: 13 Jan 21:14:53 ntpd[1909]: Listen normally on 16 lxc_health [fe80::9c44:46ff:fe96:856f%15]:123 Jan 13 21:14:54.029005 kubelet[2420]: E0113 21:14:54.028930 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:54.531608 
systemd[1]: run-containerd-runc-k8s.io-3a6b59880d73483b4d05c8bf1e362157e0d4260ed0db4526d7de8bf1bc50d517-runc.BDprJU.mount: Deactivated successfully. Jan 13 21:14:55.029482 kubelet[2420]: E0113 21:14:55.029400 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:56.029961 kubelet[2420]: E0113 21:14:56.029872 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:56.876560 kubelet[2420]: E0113 21:14:56.876498 2420 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40612->127.0.0.1:45223: write tcp 127.0.0.1:40612->127.0.0.1:45223: write: connection reset by peer Jan 13 21:14:57.030619 kubelet[2420]: E0113 21:14:57.030520 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:57.957982 kubelet[2420]: E0113 21:14:57.957859 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:58.030834 kubelet[2420]: E0113 21:14:58.030758 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:59.031411 kubelet[2420]: E0113 21:14:59.031343 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:00.031797 kubelet[2420]: E0113 21:15:00.031726 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:01.032160 kubelet[2420]: E0113 21:15:01.032098 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:02.033237 kubelet[2420]: E0113 21:15:02.033096 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 21:15:03.033396 kubelet[2420]: E0113 21:15:03.033332 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:04.034146 kubelet[2420]: E0113 21:15:04.034078 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:05.034791 kubelet[2420]: E0113 21:15:05.034722 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:06.035070 kubelet[2420]: E0113 21:15:06.035003 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:07.035343 kubelet[2420]: E0113 21:15:07.035282 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:08.036319 kubelet[2420]: E0113 21:15:08.036218 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:09.037292 kubelet[2420]: E0113 21:15:09.037208 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:10.037746 kubelet[2420]: E0113 21:15:10.037683 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:11.038305 kubelet[2420]: E0113 21:15:11.038197 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:12.038449 kubelet[2420]: E0113 21:15:12.038368 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:13.039346 kubelet[2420]: E0113 21:15:13.039269 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 21:15:14.039511 kubelet[2420]: E0113 21:15:14.039419 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:15.040017 kubelet[2420]: E0113 21:15:15.039944 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:16.040551 kubelet[2420]: E0113 21:15:16.040476 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:17.041380 kubelet[2420]: E0113 21:15:17.041316 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:17.958399 kubelet[2420]: E0113 21:15:17.958333 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:18.019899 containerd[1944]: time="2025-01-13T21:15:18.019842475Z" level=info msg="StopPodSandbox for \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\"" Jan 13 21:15:18.020691 containerd[1944]: time="2025-01-13T21:15:18.019992031Z" level=info msg="TearDown network for sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" successfully" Jan 13 21:15:18.020691 containerd[1944]: time="2025-01-13T21:15:18.020017291Z" level=info msg="StopPodSandbox for \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" returns successfully" Jan 13 21:15:18.021146 containerd[1944]: time="2025-01-13T21:15:18.021057379Z" level=info msg="RemovePodSandbox for \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\"" Jan 13 21:15:18.021146 containerd[1944]: time="2025-01-13T21:15:18.021103891Z" level=info msg="Forcibly stopping sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\"" Jan 13 21:15:18.021331 containerd[1944]: time="2025-01-13T21:15:18.021198367Z" level=info 
msg="TearDown network for sandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" successfully" Jan 13 21:15:18.029376 containerd[1944]: time="2025-01-13T21:15:18.029305663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 21:15:18.029520 containerd[1944]: time="2025-01-13T21:15:18.029403187Z" level=info msg="RemovePodSandbox \"601eea169274165ebf50ecbcb216ee62b16fa82a1c607d1b18215abf0d25352c\" returns successfully" Jan 13 21:15:18.041698 kubelet[2420]: E0113 21:15:18.041632 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:19.042517 kubelet[2420]: E0113 21:15:19.042455 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:20.043494 kubelet[2420]: E0113 21:15:20.043432 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:20.373701 kubelet[2420]: E0113 21:15:20.373503 2420 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 21:15:21.044353 kubelet[2420]: E0113 21:15:21.044303 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:22.046083 kubelet[2420]: E0113 21:15:22.046014 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:23.046614 kubelet[2420]: E0113 21:15:23.046540 2420 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:24.047603 kubelet[2420]: E0113 21:15:24.047538 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:25.048615 kubelet[2420]: E0113 21:15:25.048534 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:26.049112 kubelet[2420]: E0113 21:15:26.049036 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:27.050143 kubelet[2420]: E0113 21:15:27.050077 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:28.050920 kubelet[2420]: E0113 21:15:28.050853 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:29.051437 kubelet[2420]: E0113 21:15:29.051364 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:30.051740 kubelet[2420]: E0113 21:15:30.051673 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:30.374525 kubelet[2420]: E0113 21:15:30.374298 2420 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 21:15:31.052297 kubelet[2420]: E0113 21:15:31.052192 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:32.052727 kubelet[2420]: E0113 21:15:32.052661 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 21:15:33.052900 kubelet[2420]: E0113 21:15:33.052837 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:34.053434 kubelet[2420]: E0113 21:15:34.053372 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:35.054180 kubelet[2420]: E0113 21:15:35.054106 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:36.055016 kubelet[2420]: E0113 21:15:36.054947 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:37.055169 kubelet[2420]: E0113 21:15:37.055104 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:37.958420 kubelet[2420]: E0113 21:15:37.958353 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:38.056200 kubelet[2420]: E0113 21:15:38.056135 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:39.057049 kubelet[2420]: E0113 21:15:39.056981 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:40.057937 kubelet[2420]: E0113 21:15:40.057858 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:40.375357 kubelet[2420]: E0113 21:15:40.375160 2420 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 21:15:40.722592 
kubelet[2420]: E0113 21:15:40.722491 2420 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": unexpected EOF"
Jan 13 21:15:40.732314 kubelet[2420]: E0113 21:15:40.732018 2420 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": read tcp 172.31.29.222:46802->172.31.17.247:6443: read: connection reset by peer"
Jan 13 21:15:40.732314 kubelet[2420]: I0113 21:15:40.732087 2420 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 13 21:15:40.732314 kubelet[2420]: E0113 21:15:40.733097 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused" interval="200ms"
Jan 13 21:15:40.934808 kubelet[2420]: E0113 21:15:40.934746 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused" interval="400ms"
Jan 13 21:15:41.058755 kubelet[2420]: E0113 21:15:41.058633 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:41.336600 kubelet[2420]: E0113 21:15:41.336443 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused" interval="800ms"
Jan 13 21:15:41.724494 kubelet[2420]: E0113 21:15:41.724423 2420 desired_state_of_world_populator.go:318] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.17.247:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.17.247:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config"
Jan 13 21:15:41.737275 kubelet[2420]: E0113 21:15:41.734674 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?resourceVersion=0&timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.29.222:46810->172.31.17.247:6443: read: connection reset by peer"
Jan 13 21:15:41.737275 kubelet[2420]: E0113 21:15:41.735156 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused"
Jan 13 21:15:41.737275 kubelet[2420]: E0113 21:15:41.735861 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused"
Jan 13 21:15:41.738032 kubelet[2420]: E0113 21:15:41.737981 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused"
Jan 13 21:15:41.738611 kubelet[2420]: E0113 21:15:41.738514 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?timeout=10s\": dial tcp 172.31.17.247:6443: connect: connection refused"
Jan 13 21:15:41.738611 kubelet[2420]: E0113 21:15:41.738551 2420 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Jan 13 21:15:42.059887 kubelet[2420]: E0113 21:15:42.059728 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:43.060586 kubelet[2420]: E0113 21:15:43.060510 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:44.061619 kubelet[2420]: E0113 21:15:44.061554 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:45.062442 kubelet[2420]: E0113 21:15:45.062374 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:46.063249 kubelet[2420]: E0113 21:15:46.063184 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:47.064127 kubelet[2420]: E0113 21:15:47.064070 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:48.065027 kubelet[2420]: E0113 21:15:48.064963 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:49.065304 kubelet[2420]: E0113 21:15:49.065214 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:50.066478 kubelet[2420]: E0113 21:15:50.066410 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:51.066609 kubelet[2420]: E0113 21:15:51.066536 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:52.067559 kubelet[2420]: E0113 21:15:52.067491 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:52.137865 kubelet[2420]: E0113 21:15:52.137791 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Jan 13 21:15:53.067973 kubelet[2420]: E0113 21:15:53.067884 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:54.068558 kubelet[2420]: E0113 21:15:54.068491 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:55.069189 kubelet[2420]: E0113 21:15:55.069042 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:56.069861 kubelet[2420]: E0113 21:15:56.069796 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:57.070346 kubelet[2420]: E0113 21:15:57.070268 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:57.958393 kubelet[2420]: E0113 21:15:57.958319 2420 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:58.071112 kubelet[2420]: E0113 21:15:58.071041 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:15:59.072161 kubelet[2420]: E0113 21:15:59.072080 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:16:00.073389 kubelet[2420]: E0113 21:16:00.073308 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:16:01.073749 kubelet[2420]: E0113 21:16:01.073684 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:16:02.074743 kubelet[2420]: E0113 21:16:02.074680 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:16:02.138541 kubelet[2420]: E0113 21:16:02.138480 2420 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.29.222\": Get \"https://172.31.17.247:6443/api/v1/nodes/172.31.29.222?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 13 21:16:03.075352 kubelet[2420]: E0113 21:16:03.075276 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:16:03.738605 kubelet[2420]: E0113 21:16:03.738536 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.29.222?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 13 21:16:04.075864 kubelet[2420]: E0113 21:16:04.075718 2420 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"