Jan 13 21:11:58.175653 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 13 21:11:58.175700 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025 Jan 13 21:11:58.175724 kernel: KASLR disabled due to lack of seed Jan 13 21:11:58.175741 kernel: efi: EFI v2.7 by EDK II Jan 13 21:11:58.175757 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Jan 13 21:11:58.175772 kernel: ACPI: Early table checksum verification disabled Jan 13 21:11:58.175790 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 13 21:11:58.175805 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 13 21:11:58.175821 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 13 21:11:58.175836 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 13 21:11:58.175857 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 13 21:11:58.175873 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 13 21:11:58.175888 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 13 21:11:58.175904 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 13 21:11:58.175922 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 13 21:11:58.175944 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 13 21:11:58.175961 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 13 21:11:58.175977 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 13 21:11:58.175993 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 13 21:11:58.176009 kernel: printk: bootconsole [uart0] enabled Jan 13 21:11:58.176025 kernel: NUMA: Failed to initialise from firmware Jan 13 21:11:58.176042 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 21:11:58.176077 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 13 21:11:58.176096 kernel: Zone ranges: Jan 13 21:11:58.176113 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 21:11:58.176129 kernel: DMA32 empty Jan 13 21:11:58.176151 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 13 21:11:58.176168 kernel: Movable zone start for each node Jan 13 21:11:58.176184 kernel: Early memory node ranges Jan 13 21:11:58.176200 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 13 21:11:58.176216 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 13 21:11:58.176232 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 13 21:11:58.176248 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 13 21:11:58.176265 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 13 21:11:58.176337 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 13 21:11:58.176354 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 13 21:11:58.176370 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 13 21:11:58.176387 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 13 21:11:58.176409 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jan 13 21:11:58.176427 kernel: psci: probing for conduit method from ACPI. Jan 13 21:11:58.176451 kernel: psci: PSCIv1.0 detected in firmware. Jan 13 21:11:58.176468 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 21:11:58.176486 kernel: psci: Trusted OS migration not required Jan 13 21:11:58.176507 kernel: psci: SMC Calling Convention v1.1 Jan 13 21:11:58.176525 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 21:11:58.176542 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 21:11:58.176559 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 21:11:58.176576 kernel: Detected PIPT I-cache on CPU0 Jan 13 21:11:58.176594 kernel: CPU features: detected: GIC system register CPU interface Jan 13 21:11:58.176611 kernel: CPU features: detected: Spectre-v2 Jan 13 21:11:58.176628 kernel: CPU features: detected: Spectre-v3a Jan 13 21:11:58.176645 kernel: CPU features: detected: Spectre-BHB Jan 13 21:11:58.176662 kernel: CPU features: detected: ARM erratum 1742098 Jan 13 21:11:58.176680 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 13 21:11:58.176701 kernel: alternatives: applying boot alternatives Jan 13 21:11:58.176721 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:11:58.176740 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 21:11:58.176757 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:11:58.176775 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 21:11:58.176792 kernel: Fallback order for Node 0: 0 Jan 13 21:11:58.176809 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 13 21:11:58.176826 kernel: Policy zone: Normal Jan 13 21:11:58.176844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:11:58.176861 kernel: software IO TLB: area num 2. Jan 13 21:11:58.176878 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 13 21:11:58.176901 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Jan 13 21:11:58.176919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:11:58.176936 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:11:58.176954 kernel: rcu: RCU event tracing is enabled. Jan 13 21:11:58.176972 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:11:58.176989 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:11:58.177007 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:11:58.177024 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 21:11:58.177041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:11:58.177058 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 21:11:58.177075 kernel: GICv3: 96 SPIs implemented Jan 13 21:11:58.177097 kernel: GICv3: 0 Extended SPIs implemented Jan 13 21:11:58.177114 kernel: Root IRQ handler: gic_handle_irq Jan 13 21:11:58.177131 kernel: GICv3: GICv3 features: 16 PPIs Jan 13 21:11:58.177148 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 13 21:11:58.177165 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 13 21:11:58.177183 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 21:11:58.177200 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 13 21:11:58.177217 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 13 21:11:58.177234 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 13 21:11:58.177251 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 13 21:11:58.179325 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 21:11:58.179374 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 13 21:11:58.179403 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 13 21:11:58.179422 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 13 21:11:58.179440 kernel: Console: colour dummy device 80x25 Jan 13 21:11:58.179458 kernel: printk: console [tty1] enabled Jan 13 21:11:58.179476 kernel: ACPI: Core revision 20230628 Jan 13 21:11:58.179494 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 13 21:11:58.179512 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:11:58.179530 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:11:58.179548 kernel: landlock: Up and running. Jan 13 21:11:58.179571 kernel: SELinux: Initializing. Jan 13 21:11:58.179589 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:11:58.179607 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 21:11:58.179624 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:11:58.179642 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:11:58.179660 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:11:58.179679 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:11:58.179696 kernel: Platform MSI: ITS@0x10080000 domain created Jan 13 21:11:58.179714 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 13 21:11:58.179736 kernel: Remapping and enabling EFI services. Jan 13 21:11:58.179754 kernel: smp: Bringing up secondary CPUs ... Jan 13 21:11:58.179772 kernel: Detected PIPT I-cache on CPU1 Jan 13 21:11:58.179789 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 13 21:11:58.179807 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 13 21:11:58.179825 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 13 21:11:58.179842 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:11:58.179860 kernel: SMP: Total of 2 processors activated. 
Jan 13 21:11:58.179877 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 21:11:58.179899 kernel: CPU features: detected: 32-bit EL1 Support Jan 13 21:11:58.179917 kernel: CPU features: detected: CRC32 instructions Jan 13 21:11:58.179935 kernel: CPU: All CPU(s) started at EL1 Jan 13 21:11:58.179965 kernel: alternatives: applying system-wide alternatives Jan 13 21:11:58.179988 kernel: devtmpfs: initialized Jan 13 21:11:58.180007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:11:58.180025 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:11:58.180043 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:11:58.180082 kernel: SMBIOS 3.0.0 present. Jan 13 21:11:58.180104 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 13 21:11:58.180128 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:11:58.180147 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 21:11:58.180166 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 21:11:58.180184 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 21:11:58.180203 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:11:58.180221 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1 Jan 13 21:11:58.180240 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:11:58.180263 kernel: cpuidle: using governor menu Jan 13 21:11:58.181385 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 13 21:11:58.181415 kernel: ASID allocator initialised with 65536 entries Jan 13 21:11:58.181434 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:11:58.181464 kernel: Serial: AMBA PL011 UART driver Jan 13 21:11:58.181489 kernel: Modules: 17520 pages in range for non-PLT usage Jan 13 21:11:58.181508 kernel: Modules: 509040 pages in range for PLT usage Jan 13 21:11:58.181528 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:11:58.181547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:11:58.181588 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 21:11:58.181608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 21:11:58.181627 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:11:58.181646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:11:58.181664 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 21:11:58.181683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 21:11:58.181701 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:11:58.181720 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:11:58.181738 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:11:58.181762 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:11:58.181781 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 21:11:58.181800 kernel: ACPI: Interpreter enabled Jan 13 21:11:58.181818 kernel: ACPI: Using GIC for interrupt routing Jan 13 21:11:58.181837 kernel: ACPI: MCFG table detected, 1 entries Jan 13 21:11:58.181856 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 13 21:11:58.182184 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:11:58.183586 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 21:11:58.183820 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 21:11:58.184019 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 13 21:11:58.184253 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 13 21:11:58.184311 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 13 21:11:58.184334 kernel: acpiphp: Slot [1] registered Jan 13 21:11:58.184353 kernel: acpiphp: Slot [2] registered Jan 13 21:11:58.184372 kernel: acpiphp: Slot [3] registered Jan 13 21:11:58.184390 kernel: acpiphp: Slot [4] registered Jan 13 21:11:58.184415 kernel: acpiphp: Slot [5] registered Jan 13 21:11:58.184434 kernel: acpiphp: Slot [6] registered Jan 13 21:11:58.184452 kernel: acpiphp: Slot [7] registered Jan 13 21:11:58.184471 kernel: acpiphp: Slot [8] registered Jan 13 21:11:58.184489 kernel: acpiphp: Slot [9] registered Jan 13 21:11:58.184507 kernel: acpiphp: Slot [10] registered Jan 13 21:11:58.184525 kernel: acpiphp: Slot [11] registered Jan 13 21:11:58.184543 kernel: acpiphp: Slot [12] registered Jan 13 21:11:58.184561 kernel: acpiphp: Slot [13] registered Jan 13 21:11:58.184580 kernel: acpiphp: Slot [14] registered Jan 13 21:11:58.184604 kernel: acpiphp: Slot [15] registered Jan 13 21:11:58.184622 kernel: acpiphp: Slot [16] registered Jan 13 21:11:58.184640 kernel: acpiphp: Slot [17] registered Jan 13 21:11:58.184658 kernel: acpiphp: Slot [18] registered Jan 13 21:11:58.184676 kernel: acpiphp: Slot [19] registered Jan 13 21:11:58.184695 kernel: acpiphp: Slot [20] registered Jan 13 21:11:58.184713 kernel: acpiphp: Slot [21] registered Jan 13 21:11:58.184731 kernel: acpiphp: Slot [22] registered Jan 13 21:11:58.184749 kernel: acpiphp: Slot [23] registered Jan 13 21:11:58.184772 kernel: acpiphp: Slot [24] registered Jan 13 21:11:58.184791 kernel: acpiphp: Slot [25] registered Jan 13 21:11:58.184809 kernel: acpiphp: Slot [26] registered Jan 13 21:11:58.184827 kernel: acpiphp: Slot [27] registered Jan 13 21:11:58.184845 kernel: acpiphp: Slot [28] registered Jan 13 21:11:58.184863 kernel: acpiphp: Slot [29] registered Jan 13 21:11:58.184882 kernel: acpiphp: Slot [30] registered Jan 13 21:11:58.184900 kernel: acpiphp: Slot [31] registered Jan 13 21:11:58.184918 kernel: PCI host bridge to bus 0000:00 Jan 13 21:11:58.185126 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 13 21:11:58.185989 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 21:11:58.186196 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 13 21:11:58.186417 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 13 21:11:58.186658 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 13 21:11:58.186903 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 13 21:11:58.187133 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 13 21:11:58.187387 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 13 21:11:58.188151 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 13 21:11:58.188453 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:11:58.188673 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 13 21:11:58.188877 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 13 21:11:58.189098 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 13 21:11:58.190440 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 13 21:11:58.190700 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 13 21:11:58.190902 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 13 21:11:58.191105 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 13 21:11:58.194377 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 13 21:11:58.194710 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 13 21:11:58.194933 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 13 21:11:58.195145 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 13 21:11:58.195410 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 21:11:58.195596 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 13 21:11:58.195622 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 21:11:58.195643 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 21:11:58.195662 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 21:11:58.195681 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 21:11:58.195709 kernel: iommu: Default domain type: Translated Jan 13 21:11:58.195733 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 21:11:58.195760 kernel: efivars: Registered efivars operations Jan 13 21:11:58.195779 kernel: vgaarb: loaded Jan 13 21:11:58.195797 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 21:11:58.195816 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:11:58.195834 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:11:58.195852 kernel: pnp: PnP ACPI init Jan 13 21:11:58.196095 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 13 21:11:58.196125 kernel: pnp: PnP ACPI: found 1 devices Jan 13 21:11:58.196150 kernel: NET: Registered PF_INET protocol family Jan 13 21:11:58.196170 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:11:58.196188 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 21:11:58.196207 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:11:58.196225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 21:11:58.196244 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 21:11:58.196262 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 21:11:58.198363 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:11:58.198386 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 21:11:58.198415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:11:58.198435 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:11:58.198453 kernel: kvm [1]: HYP mode not available Jan 13 21:11:58.198472 kernel: Initialise system trusted keyrings Jan 13 21:11:58.198491 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 21:11:58.198509 kernel: Key type asymmetric registered Jan 13 21:11:58.198527 kernel: Asymmetric key parser 'x509' registered Jan 13 21:11:58.198546 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 21:11:58.198564 kernel: io scheduler mq-deadline registered Jan 13 
21:11:58.198588 kernel: io scheduler kyber registered Jan 13 21:11:58.198606 kernel: io scheduler bfq registered Jan 13 21:11:58.198857 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 13 21:11:58.198887 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 21:11:58.198906 kernel: ACPI: button: Power Button [PWRB] Jan 13 21:11:58.198925 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 13 21:11:58.198943 kernel: ACPI: button: Sleep Button [SLPB] Jan 13 21:11:58.198962 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:11:58.198987 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 21:11:58.199206 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 13 21:11:58.199233 kernel: printk: console [ttyS0] disabled Jan 13 21:11:58.199252 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 13 21:11:58.201308 kernel: printk: console [ttyS0] enabled Jan 13 21:11:58.201345 kernel: printk: bootconsole [uart0] disabled Jan 13 21:11:58.201364 kernel: thunder_xcv, ver 1.0 Jan 13 21:11:58.201382 kernel: thunder_bgx, ver 1.0 Jan 13 21:11:58.201401 kernel: nicpf, ver 1.0 Jan 13 21:11:58.201429 kernel: nicvf, ver 1.0 Jan 13 21:11:58.201707 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 21:11:58.201905 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:11:57 UTC (1736802717) Jan 13 21:11:58.201932 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 21:11:58.201951 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 13 21:11:58.201970 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 21:11:58.201989 kernel: watchdog: Hard watchdog permanently disabled Jan 13 21:11:58.202007 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:11:58.202032 kernel: Segment Routing with IPv6 Jan 13 21:11:58.202051 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:11:58.202069 kernel: NET: Registered PF_PACKET protocol family Jan 13 21:11:58.202087 kernel: Key type dns_resolver registered Jan 13 21:11:58.202106 kernel: registered taskstats version 1 Jan 13 21:11:58.202124 kernel: Loading compiled-in X.509 certificates Jan 13 21:11:58.202143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638' Jan 13 21:11:58.202161 kernel: Key type .fscrypt registered Jan 13 21:11:58.202179 kernel: Key type fscrypt-provisioning registered Jan 13 21:11:58.202202 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 21:11:58.202221 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:11:58.202239 kernel: ima: No architecture policies found Jan 13 21:11:58.202257 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 21:11:58.202295 kernel: clk: Disabling unused clocks Jan 13 21:11:58.202316 kernel: Freeing unused kernel memory: 39360K Jan 13 21:11:58.202335 kernel: Run /init as init process Jan 13 21:11:58.202353 kernel: with arguments: Jan 13 21:11:58.202371 kernel: /init Jan 13 21:11:58.202389 kernel: with environment: Jan 13 21:11:58.202423 kernel: HOME=/ Jan 13 21:11:58.202446 kernel: TERM=linux Jan 13 21:11:58.202465 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:11:58.202500 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:11:58.202525 systemd[1]: Detected virtualization amazon. Jan 13 21:11:58.202546 systemd[1]: Detected architecture arm64. Jan 13 21:11:58.202565 systemd[1]: Running in initrd. Jan 13 21:11:58.202591 systemd[1]: No hostname configured, using default hostname. Jan 13 21:11:58.202611 systemd[1]: Hostname set to . Jan 13 21:11:58.202632 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:11:58.202651 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:11:58.202685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:11:58.202707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:11:58.202728 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:11:58.202749 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:11:58.202775 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:11:58.202797 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:11:58.202820 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:11:58.202840 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:11:58.202861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:11:58.202881 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:11:58.202901 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:11:58.202927 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:11:58.202947 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:11:58.202967 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:11:58.202987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:11:58.203007 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:11:58.203028 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:11:58.203048 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:11:58.203068 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 21:11:58.203088 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:11:58.203113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:11:58.203133 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:11:58.203153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:11:58.203174 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:11:58.203194 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:11:58.203214 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:11:58.204195 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:11:58.204218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:11:58.204245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:11:58.204278 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:11:58.204338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:11:58.204360 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:11:58.204422 systemd-journald[251]: Collecting audit messages is disabled. Jan 13 21:11:58.204473 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:11:58.204494 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:11:58.204514 systemd-journald[251]: Journal started Jan 13 21:11:58.204556 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c1a694241c2da9d83bd9c588914b8) is 8.0M, max 75.3M, 67.3M free. Jan 13 21:11:58.206551 kernel: Bridge firewalling registered Jan 13 21:11:58.169594 systemd-modules-load[252]: Inserted module 'overlay' Jan 13 21:11:58.210418 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:11:58.206980 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 13 21:11:58.214940 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:11:58.217871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:11:58.236116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:11:58.242558 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:11:58.249613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:11:58.252338 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:11:58.264442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:11:58.293130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:11:58.314738 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:11:58.321189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:11:58.326294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:11:58.341690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:11:58.353564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 21:11:58.364988 dracut-cmdline[286]: dracut-dracut-053 Jan 13 21:11:58.371859 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0 Jan 13 21:11:58.442109 systemd-resolved[290]: Positive Trust Anchors: Jan 13 21:11:58.442143 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:11:58.442206 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:11:58.530314 kernel: SCSI subsystem initialized Jan 13 21:11:58.537417 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:11:58.550402 kernel: iscsi: registered transport (tcp) Jan 13 21:11:58.572406 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:11:58.572479 kernel: QLogic iSCSI HBA Driver Jan 13 21:11:58.680718 kernel: random: crng init done Jan 13 21:11:58.678943 systemd-resolved[290]: Defaulting to hostname 'linux'. Jan 13 21:11:58.680789 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:11:58.704366 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:11:58.711397 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:11:58.720539 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:11:58.766322 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:11:58.766395 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:11:58.766422 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:11:58.833330 kernel: raid6: neonx8 gen() 6669 MB/s Jan 13 21:11:58.850302 kernel: raid6: neonx4 gen() 6505 MB/s Jan 13 21:11:58.867301 kernel: raid6: neonx2 gen() 5428 MB/s Jan 13 21:11:58.884301 kernel: raid6: neonx1 gen() 3938 MB/s Jan 13 21:11:58.901301 kernel: raid6: int64x8 gen() 3786 MB/s Jan 13 21:11:58.918301 kernel: raid6: int64x4 gen() 3710 MB/s Jan 13 21:11:58.935301 kernel: raid6: int64x2 gen() 3593 MB/s Jan 13 21:11:58.953026 kernel: raid6: int64x1 gen() 2758 MB/s Jan 13 21:11:58.953058 kernel: raid6: using algorithm neonx8 gen() 6669 MB/s Jan 13 21:11:58.971007 kernel: raid6: .... 
xor() 4900 MB/s, rmw enabled Jan 13 21:11:58.971047 kernel: raid6: using neon recovery algorithm Jan 13 21:11:58.978306 kernel: xor: measuring software checksum speed Jan 13 21:11:58.980417 kernel: 8regs : 10225 MB/sec Jan 13 21:11:58.980450 kernel: 32regs : 11914 MB/sec Jan 13 21:11:58.981570 kernel: arm64_neon : 9515 MB/sec Jan 13 21:11:58.981602 kernel: xor: using function: 32regs (11914 MB/sec) Jan 13 21:11:59.066324 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:11:59.085204 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:11:59.093534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:11:59.137832 systemd-udevd[469]: Using default interface naming scheme 'v255'. Jan 13 21:11:59.147359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:11:59.159823 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:11:59.194386 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Jan 13 21:11:59.249881 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:11:59.257620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:11:59.375575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:11:59.389643 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:11:59.420921 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:11:59.423534 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:11:59.429558 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:11:59.444418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:11:59.460646 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:11:59.491325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:11:59.576307 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:11:59.576381 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 21:11:59.603484 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 21:11:59.603749 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 21:11:59.622650 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:92:68:b8:36:4d Jan 13 21:11:59.595962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:11:59.596230 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:11:59.629484 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 21:11:59.600947 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:11:59.633412 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 21:11:59.603353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:11:59.603716 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:11:59.605956 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:11:59.622232 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:11:59.632842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:11:59.659339 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 21:11:59.671363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:11:59.678651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:11:59.678688 kernel: GPT:9289727 != 16777215 Jan 13 21:11:59.678714 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:11:59.678739 kernel: GPT:9289727 != 16777215 Jan 13 21:11:59.678763 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:11:59.678788 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:11:59.684680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:11:59.726732 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:11:59.813014 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (518) Jan 13 21:11:59.860186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 21:11:59.864318 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (529) Jan 13 21:11:59.940085 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:11:59.957594 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 21:11:59.972351 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 21:11:59.975555 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 21:11:59.998635 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:12:00.012959 disk-uuid[660]: Primary Header is updated. Jan 13 21:12:00.012959 disk-uuid[660]: Secondary Entries is updated. Jan 13 21:12:00.012959 disk-uuid[660]: Secondary Header is updated. Jan 13 21:12:00.022308 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:00.031332 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:00.041319 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:01.038364 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:12:01.040554 disk-uuid[661]: The operation has completed successfully. Jan 13 21:12:01.221482 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:12:01.223513 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:12:01.283579 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:12:01.303066 sh[1009]: Success Jan 13 21:12:01.331550 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:12:01.446736 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:12:01.452461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:12:01.459306 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:12:01.491225 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:12:01.491319 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:01.493017 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:12:01.494284 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:12:01.494320 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:12:01.602304 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:12:01.620915 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:12:01.624756 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:12:01.635580 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:12:01.642557 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:12:01.669001 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:01.669084 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:01.670325 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:01.677449 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:01.692920 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:12:01.695901 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:01.720076 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:12:01.733666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:12:01.821648 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:12:01.833628 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:12:01.890163 systemd-networkd[1201]: lo: Link UP Jan 13 21:12:01.890186 systemd-networkd[1201]: lo: Gained carrier Jan 13 21:12:01.893975 systemd-networkd[1201]: Enumeration completed Jan 13 21:12:01.896109 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:12:01.896127 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:12:01.897072 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:12:01.907492 systemd[1]: Reached target network.target - Network. Jan 13 21:12:01.911813 systemd-networkd[1201]: eth0: Link UP Jan 13 21:12:01.911826 systemd-networkd[1201]: eth0: Gained carrier Jan 13 21:12:01.911844 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 21:12:01.937362 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.31.152/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:12:02.303248 ignition[1124]: Ignition 2.19.0 Jan 13 21:12:02.303302 ignition[1124]: Stage: fetch-offline Jan 13 21:12:02.303876 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:02.304903 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:02.310173 ignition[1124]: Ignition finished successfully Jan 13 21:12:02.313819 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:12:02.325645 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:12:02.349888 ignition[1211]: Ignition 2.19.0 Jan 13 21:12:02.349916 ignition[1211]: Stage: fetch Jan 13 21:12:02.351527 ignition[1211]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:02.351554 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:02.351857 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:02.362135 ignition[1211]: PUT result: OK Jan 13 21:12:02.365038 ignition[1211]: parsed url from cmdline: "" Jan 13 21:12:02.365054 ignition[1211]: no config URL provided Jan 13 21:12:02.365071 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:12:02.365095 ignition[1211]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:12:02.365128 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:02.366663 ignition[1211]: PUT result: OK Jan 13 21:12:02.366745 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 21:12:02.377465 unknown[1211]: fetched base config from "system" Jan 13 21:12:02.370909 ignition[1211]: GET result: OK Jan 13 21:12:02.377481 unknown[1211]: fetched base config from "system" Jan 13 21:12:02.371201 ignition[1211]: parsing config with SHA512: 3c49bdf6ca2255f2c9b778e2ce001f5db78446868b95cff84b644aad452a31b2137d9bd1ffcbe4e0bb6345ed9ff09363939b3e017f7d9a1ea74748f49589ad40 Jan 13 21:12:02.377495 unknown[1211]: fetched user config from "aws" Jan 13 21:12:02.377910 ignition[1211]: fetch: fetch complete Jan 13 21:12:02.377921 ignition[1211]: fetch: fetch passed Jan 13 21:12:02.377999 ignition[1211]: Ignition finished successfully Jan 13 21:12:02.392670 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:12:02.411693 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:12:02.434861 ignition[1217]: Ignition 2.19.0 Jan 13 21:12:02.434893 ignition[1217]: Stage: kargs Jan 13 21:12:02.435589 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:02.435615 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:02.435764 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:02.438190 ignition[1217]: PUT result: OK Jan 13 21:12:02.446734 ignition[1217]: kargs: kargs passed Jan 13 21:12:02.446840 ignition[1217]: Ignition finished successfully Jan 13 21:12:02.451830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:12:02.468736 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 13 21:12:02.492352 ignition[1223]: Ignition 2.19.0 Jan 13 21:12:02.492374 ignition[1223]: Stage: disks Jan 13 21:12:02.493003 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:02.493028 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:02.493198 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:02.496057 ignition[1223]: PUT result: OK Jan 13 21:12:02.506538 ignition[1223]: disks: disks passed Jan 13 21:12:02.506634 ignition[1223]: Ignition finished successfully Jan 13 21:12:02.509424 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:12:02.512416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:12:02.514669 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:12:02.516789 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:12:02.517070 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:12:02.517653 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:12:02.535573 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:12:02.594183 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:12:02.603775 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:12:02.613520 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:12:02.695306 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:12:02.696702 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:12:02.700488 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:12:02.720441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:12:02.726480 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:12:02.730178 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:12:02.730310 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:12:02.730365 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:12:02.752326 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250) Jan 13 21:12:02.756543 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:02.756616 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:02.756644 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:02.758664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:12:02.767309 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:02.780617 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:12:02.787393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:12:03.281020 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:12:03.290550 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:12:03.299600 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:12:03.319808 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:12:03.683950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:12:03.693494 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:12:03.711588 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:12:03.727777 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:12:03.730062 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:03.759968 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:12:03.777235 ignition[1363]: INFO : Ignition 2.19.0 Jan 13 21:12:03.779091 ignition[1363]: INFO : Stage: mount Jan 13 21:12:03.780842 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:03.780842 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:03.785161 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:03.788101 ignition[1363]: INFO : PUT result: OK Jan 13 21:12:03.792018 ignition[1363]: INFO : mount: mount passed Jan 13 21:12:03.792018 ignition[1363]: INFO : Ignition finished successfully Jan 13 21:12:03.794619 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:12:03.812549 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:12:03.830699 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:12:03.851304 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374) Jan 13 21:12:03.855530 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:12:03.855577 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:12:03.855604 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:12:03.861313 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:12:03.864711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:12:03.868984 systemd-networkd[1201]: eth0: Gained IPv6LL Jan 13 21:12:03.905844 ignition[1392]: INFO : Ignition 2.19.0 Jan 13 21:12:03.908483 ignition[1392]: INFO : Stage: files Jan 13 21:12:03.908483 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:03.908483 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:03.908483 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:03.918415 ignition[1392]: INFO : PUT result: OK Jan 13 21:12:03.919865 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:12:03.937752 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:12:03.937752 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:12:03.972876 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:12:03.976124 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:12:03.976124 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:12:03.973652 unknown[1392]: wrote ssh authorized keys file for user: core Jan 13 21:12:03.992463 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:12:03.997101 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 21:12:04.359324 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 21:12:04.741358 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:12:04.745743 ignition[1392]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:12:04.745743 ignition[1392]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:12:04.745743 ignition[1392]: INFO : files: files passed Jan 13 21:12:04.745743 ignition[1392]: INFO : Ignition finished successfully Jan 13 21:12:04.757443 systemd[1]: Finished 
ignition-files.service - Ignition (files). Jan 13 21:12:04.766572 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:12:04.779602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:12:04.792494 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:12:04.794510 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:12:04.804215 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:04.804215 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:04.812417 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:12:04.817492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:12:04.820567 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:12:04.833658 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:12:04.887793 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:12:04.888190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:12:04.895254 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:12:04.898952 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:12:04.912389 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:12:04.921593 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:12:04.958365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:12:04.969565 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:12:04.999922 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:12:05.004915 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:12:05.007496 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:12:05.012956 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:12:05.013388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:12:05.020619 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:12:05.023061 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:12:05.027193 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:12:05.032818 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:12:05.035844 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:12:05.038237 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:12:05.046040 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:12:05.048921 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:12:05.055072 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:12:05.057134 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:12:05.059649 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 13 21:12:05.059875 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:12:05.068129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:12:05.068455 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:12:05.074278 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:12:05.078345 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:12:05.080753 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:12:05.080993 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:12:05.083387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:12:05.083604 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:12:05.086375 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:12:05.086599 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:12:05.104068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:12:05.113415 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:12:05.113702 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:12:05.121771 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:12:05.123625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:12:05.125253 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:12:05.130839 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:12:05.131248 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:12:05.151664 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:12:05.152263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:12:05.157840 ignition[1444]: INFO : Ignition 2.19.0 Jan 13 21:12:05.164941 ignition[1444]: INFO : Stage: umount Jan 13 21:12:05.164941 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:12:05.164941 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:12:05.164941 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:12:05.164941 ignition[1444]: INFO : PUT result: OK Jan 13 21:12:05.178515 ignition[1444]: INFO : umount: umount passed Jan 13 21:12:05.178515 ignition[1444]: INFO : Ignition finished successfully Jan 13 21:12:05.182761 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:12:05.185125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:12:05.189098 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:12:05.189201 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:12:05.197151 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:12:05.197301 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:12:05.203474 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:12:05.203585 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:12:05.205683 systemd[1]: Stopped target network.target - Network. Jan 13 21:12:05.208857 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 21:12:05.209091 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:12:05.212644 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:12:05.214495 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:12:05.215134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:12:05.217682 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:12:05.217959 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:12:05.218317 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:12:05.218397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:12:05.218596 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:12:05.218661 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:12:05.218855 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:12:05.218938 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:12:05.219161 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:12:05.219232 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:12:05.222033 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:12:05.222791 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:12:05.224977 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:12:05.262358 systemd-networkd[1201]: eth0: DHCPv6 lease lost Jan 13 21:12:05.264723 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:12:05.264977 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:12:05.274535 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:12:05.274790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:12:05.279107 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:12:05.279214 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:12:05.291443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:12:05.296168 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:12:05.296358 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:12:05.299597 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:12:05.299697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:05.300256 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:12:05.300353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:12:05.300957 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:12:05.301029 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:12:05.309693 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:12:05.357746 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:12:05.358103 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:12:05.361698 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:12:05.361803 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 13 21:12:05.364098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:12:05.364172 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:12:05.367010 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:12:05.367107 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:12:05.380727 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:12:05.386157 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:12:05.390222 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:12:05.390384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:12:05.413652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:12:05.416018 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:12:05.416147 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:12:05.418533 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:12:05.418621 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:12:05.424061 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:12:05.424258 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:12:05.433979 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:12:05.434168 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:12:05.685093 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:12:05.685896 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:12:05.689563 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:12:05.698251 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:12:05.698386 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:12:05.710576 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:12:05.728412 systemd[1]: Switching root. Jan 13 21:12:05.763162 systemd-journald[251]: Journal stopped Jan 13 21:12:08.632548 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 13 21:12:08.632692 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:12:08.632737 kernel: SELinux: policy capability open_perms=1 Jan 13 21:12:08.632768 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:12:08.632805 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:12:08.632836 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:12:08.632867 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:12:08.632898 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:12:08.632929 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:12:08.632960 kernel: audit: type=1403 audit(1736802726.707:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:12:08.633002 systemd[1]: Successfully loaded SELinux policy in 91.774ms. Jan 13 21:12:08.633042 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.035ms. 
Jan 13 21:12:08.633077 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:12:08.633114 systemd[1]: Detected virtualization amazon. Jan 13 21:12:08.633146 systemd[1]: Detected architecture arm64. Jan 13 21:12:08.633176 systemd[1]: Detected first boot. Jan 13 21:12:08.633213 systemd[1]: Initializing machine ID from VM UUID. Jan 13 21:12:08.633247 zram_generator::config[1486]: No configuration found. Jan 13 21:12:08.633311 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:12:08.633347 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:12:08.633385 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:12:08.633419 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:12:08.633454 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:12:08.633489 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:12:08.633520 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:12:08.633550 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:12:08.633588 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:12:08.633620 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:12:08.633653 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:12:08.633684 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:12:08.633717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:12:08.633747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:12:08.633779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:12:08.633809 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:12:08.633842 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:12:08.633877 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:12:08.633909 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:12:08.633941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:12:08.633973 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:12:08.634003 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:12:08.634036 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:12:08.634065 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:12:08.634095 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:12:08.634131 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:12:08.634167 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:12:08.634199 systemd[1]: Reached target swap.target - Swaps. 
Jan 13 21:12:08.634229 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:12:08.634261 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:12:08.638119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:12:08.638162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:12:08.638193 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:12:08.638224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 21:12:08.638263 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:12:08.638324 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:12:08.638356 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:12:08.638389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:12:08.638475 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:12:08.638513 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:12:08.638547 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:12:08.638578 systemd[1]: Reached target machines.target - Containers. Jan 13 21:12:08.638608 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:12:08.638643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:12:08.638676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:12:08.638706 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:12:08.638736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:12:08.638766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:12:08.638796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:12:08.638825 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:12:08.638855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:12:08.638890 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:12:08.638922 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:12:08.638953 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:12:08.638983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:12:08.639014 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:12:08.639056 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:12:08.639310 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:12:08.639345 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:12:08.639375 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:12:08.639411 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:12:08.639443 systemd[1]: verity-setup.service: Deactivated successfully. 
Jan 13 21:12:08.639473 systemd[1]: Stopped verity-setup.service. Jan 13 21:12:08.639505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:12:08.639535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:12:08.639569 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:12:08.639602 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:12:08.639633 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 21:12:08.639662 kernel: loop: module loaded Jan 13 21:12:08.639696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:12:08.639729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:12:08.639758 kernel: fuse: init (API version 7.39) Jan 13 21:12:08.639787 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:12:08.639819 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:12:08.639853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:12:08.639895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:12:08.639928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:12:08.639958 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:12:08.639991 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:12:08.640021 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:12:08.640075 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:12:08.640107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:12:08.640143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:12:08.640180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:12:08.640211 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:12:08.640245 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:12:08.644646 systemd-journald[1570]: Collecting audit messages is disabled. Jan 13 21:12:08.644737 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:12:08.644773 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:12:08.644803 systemd-journald[1570]: Journal started Jan 13 21:12:08.644850 systemd-journald[1570]: Runtime Journal (/run/log/journal/ec2c1a694241c2da9d83bd9c588914b8) is 8.0M, max 75.3M, 67.3M free. Jan 13 21:12:08.013396 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:12:08.082754 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:12:08.083551 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:12:08.659316 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:12:08.659414 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:12:08.667305 kernel: ACPI: bus type drm_connector registered Jan 13 21:12:08.667399 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:12:08.684481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 13 21:12:08.694337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:12:08.699315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:12:08.709743 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:12:08.715491 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:12:08.727409 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:12:08.727498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:12:08.739338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:08.758302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:12:08.769330 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:12:08.772685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:12:08.775995 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:12:08.776358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:12:08.778724 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:12:08.781188 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:12:08.784382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:12:08.845209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:12:08.853662 kernel: loop0: detected capacity change from 0 to 114432 Jan 13 21:12:08.854911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:12:08.867613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:12:08.876559 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:12:08.891635 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:12:08.896428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:12:08.909840 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:12:08.918430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:08.937217 systemd-journald[1570]: Time spent on flushing to /var/log/journal/ec2c1a694241c2da9d83bd9c588914b8 is 113.152ms for 897 entries. Jan 13 21:12:08.937217 systemd-journald[1570]: System Journal (/var/log/journal/ec2c1a694241c2da9d83bd9c588914b8) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:12:09.087611 systemd-journald[1570]: Received client request to flush runtime journal. Jan 13 21:12:09.087704 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:12:09.087755 kernel: loop1: detected capacity change from 0 to 194512 Jan 13 21:12:08.980097 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:12:08.996416 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:12:08.999780 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Jan 13 21:12:09.032638 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:12:09.047684 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:12:09.092205 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:12:09.118033 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jan 13 21:12:09.118065 systemd-tmpfiles[1633]: ACLs are not supported, ignoring. Jan 13 21:12:09.125611 kernel: loop2: detected capacity change from 0 to 114328 Jan 13 21:12:09.132998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:12:09.257323 kernel: loop3: detected capacity change from 0 to 52536 Jan 13 21:12:09.385391 kernel: loop4: detected capacity change from 0 to 114432 Jan 13 21:12:09.401312 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 21:12:09.422321 kernel: loop6: detected capacity change from 0 to 114328 Jan 13 21:12:09.433324 kernel: loop7: detected capacity change from 0 to 52536 Jan 13 21:12:09.447571 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:12:09.448548 (sd-merge)[1641]: Merged extensions into '/usr'. Jan 13 21:12:09.457403 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:12:09.457434 systemd[1]: Reloading... Jan 13 21:12:09.621799 zram_generator::config[1667]: No configuration found. Jan 13 21:12:09.908750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:12:10.022100 systemd[1]: Reloading finished in 563 ms. Jan 13 21:12:10.079942 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:12:10.082953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:12:10.104612 systemd[1]: Starting ensure-sysext.service... Jan 13 21:12:10.114639 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:12:10.121633 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:12:10.159150 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:12:10.159429 systemd[1]: Reloading... Jan 13 21:12:10.166414 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:12:10.167093 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:12:10.171937 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:12:10.174526 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Jan 13 21:12:10.174713 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Jan 13 21:12:10.187923 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:12:10.188393 systemd-tmpfiles[1720]: Skipping /boot Jan 13 21:12:10.222690 systemd-udevd[1721]: Using default interface naming scheme 'v255'. Jan 13 21:12:10.227647 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 13 21:12:10.227674 systemd-tmpfiles[1720]: Skipping /boot Jan 13 21:12:10.354314 zram_generator::config[1752]: No configuration found. Jan 13 21:12:10.552875 (udev-worker)[1772]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:10.736291 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:12:10.801383 ldconfig[1593]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:12:10.836527 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1759) Jan 13 21:12:10.893319 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:12:10.894391 systemd[1]: Reloading finished in 734 ms. Jan 13 21:12:10.933557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:12:10.938347 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:12:10.941241 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:12:11.036488 systemd[1]: Finished ensure-sysext.service. Jan 13 21:12:11.060928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:12:11.075575 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:12:11.078712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:12:11.087616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:12:11.102869 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:12:11.108743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:12:11.118897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:12:11.121171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:12:11.125788 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:12:11.137733 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:12:11.144669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:12:11.148497 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:12:11.160559 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:12:11.172704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:12:11.177335 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:12:11.180462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:12:11.180755 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:12:11.183527 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:12:11.183831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:12:11.186451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:12:11.186747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 13 21:12:11.241077 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:12:11.241413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:12:11.251436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:12:11.267597 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:12:11.277560 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:12:11.279744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:12:11.279867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:12:11.286580 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:12:11.302420 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:12:11.310735 augenrules[1950]: No rules Jan 13 21:12:11.318157 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:12:11.321065 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:12:11.338579 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:12:11.354422 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:12:11.357164 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:12:11.360730 lvm[1947]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:12:11.397226 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:12:11.404607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:12:11.426425 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:12:11.427400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:12:11.435697 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:12:11.451262 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:12:11.468310 lvm[1968]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:12:11.534996 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:12:11.540854 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:12:11.592505 systemd-networkd[1933]: lo: Link UP Jan 13 21:12:11.592525 systemd-networkd[1933]: lo: Gained carrier Jan 13 21:12:11.595213 systemd-networkd[1933]: Enumeration completed Jan 13 21:12:11.595432 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:12:11.598409 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:12:11.598429 systemd-networkd[1933]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 21:12:11.601765 systemd-networkd[1933]: eth0: Link UP Jan 13 21:12:11.602119 systemd-networkd[1933]: eth0: Gained carrier Jan 13 21:12:11.602157 systemd-networkd[1933]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:12:11.606585 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:12:11.618400 systemd-networkd[1933]: eth0: DHCPv4 address 172.31.31.152/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:12:11.625833 systemd-resolved[1935]: Positive Trust Anchors: Jan 13 21:12:11.625874 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:12:11.625937 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:12:11.649776 systemd-resolved[1935]: Defaulting to hostname 'linux'. Jan 13 21:12:11.652924 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:12:11.655233 systemd[1]: Reached target network.target - Network. Jan 13 21:12:11.657056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:12:11.659167 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:12:11.661364 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:12:11.663692 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:12:11.666375 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:12:11.668674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:12:11.671072 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:12:11.673371 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:12:11.673420 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:12:11.675048 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:12:11.678039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:12:11.683712 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:12:11.715427 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:12:11.718448 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:12:11.720635 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:12:11.722501 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:12:11.724401 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:12:11.724464 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 21:12:11.732479 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:12:11.746099 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:12:11.751723 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:12:11.765561 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:12:11.771575 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:12:11.773537 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:12:11.781906 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:12:11.787952 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:12:11.794535 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:12:11.805131 jq[1986]: false Jan 13 21:12:11.800931 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:12:11.812718 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:12:11.823652 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:12:11.826501 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:12:11.828441 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:12:11.831670 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:12:11.843664 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:12:11.852109 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:12:11.853584 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:12:11.863572 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:12:11.866357 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:12:11.916228 jq[1995]: true Jan 13 21:12:11.919006 dbus-daemon[1985]: [system] SELinux support is enabled Jan 13 21:12:11.919366 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:12:11.929987 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:12:11.930105 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:12:11.934550 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:12:11.934609 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 13 21:12:11.947172 dbus-daemon[1985]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1933 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:12:11.955100 extend-filesystems[1987]: Found loop4 Jan 13 21:12:11.955100 extend-filesystems[1987]: Found loop5 Jan 13 21:12:11.955100 extend-filesystems[1987]: Found loop6 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found loop7 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p1 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p2 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p3 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found usr Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p4 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p6 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p7 Jan 13 21:12:11.967493 extend-filesystems[1987]: Found nvme0n1p9 Jan 13 21:12:11.967493 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Jan 13 21:12:11.982612 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:12:11.970026 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:12:12.029208 jq[2012]: true Jan 13 21:12:12.038847 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:12:12.051746 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:12:12.052129 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:12:12.086615 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Jan 13 21:12:12.125335 update_engine[1994]: I20250113 21:12:12.119465 1994 main.cc:92] Flatcar Update Engine starting Jan 13 21:12:12.125759 extend-filesystems[2033]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:12:12.134545 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:12:12.148305 update_engine[1994]: I20250113 21:12:12.140046 1994 update_check_scheduler.cc:74] Next update check in 11m21s Jan 13 21:12:12.157064 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:12:12.154613 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:12:12.157994 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: ---------------------------------------------------- Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: corporation. 
Support and training for ntp-4 are Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: available at https://www.nwtime.org/support Jan 13 21:12:12.159803 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: ---------------------------------------------------- Jan 13 21:12:12.158208 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:12:12.158229 ntpd[1989]: ---------------------------------------------------- Jan 13 21:12:12.158249 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:12:12.158384 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:12:12.158412 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 13 21:12:12.158431 ntpd[1989]: available at https://www.nwtime.org/support Jan 13 21:12:12.177759 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: proto: precision = 0.108 usec (-23) Jan 13 21:12:12.170954 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:12:12.158449 ntpd[1989]: ---------------------------------------------------- Jan 13 21:12:12.174110 ntpd[1989]: proto: precision = 0.108 usec (-23) Jan 13 21:12:12.178730 ntpd[1989]: basedate set to 2025-01-01 Jan 13 21:12:12.179810 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: basedate set to 2025-01-01 Jan 13 21:12:12.179810 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: gps base set to 2025-01-05 (week 2348) Jan 13 21:12:12.178768 ntpd[1989]: gps base set to 2025-01-05 (week 2348) Jan 13 21:12:12.195690 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listen normally on 3 eth0 172.31.31.152:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 13 21:12:12.200486 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: bind(21) AF_INET6 fe80::492:68ff:feb8:364d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:12.195775 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:12:12.196018 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:12:12.196100 ntpd[1989]: Listen normally on 3 eth0 172.31.31.152:123 Jan 13 21:12:12.196165 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 13 21:12:12.196234 ntpd[1989]: bind(21) AF_INET6 fe80::492:68ff:feb8:364d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:12:12.202407 ntpd[1989]: unable to create socket on eth0 (5) for fe80::492:68ff:feb8:364d%2#123 Jan 13 21:12:12.207490 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: unable to create socket on eth0 (5) for fe80::492:68ff:feb8:364d%2#123 Jan 13 21:12:12.207490 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: failed to init interface for address fe80::492:68ff:feb8:364d%2 Jan 13 21:12:12.207490 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 13 21:12:12.202476 ntpd[1989]: failed to init interface for address fe80::492:68ff:feb8:364d%2 Jan 13 21:12:12.202564 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 13 21:12:12.227475 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:12.237875 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:12.238634 ntpd[1989]: kernel reports 
TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:12.242430 ntpd[1989]: 13 Jan 21:12:12 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:12:12.256058 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:12:12.275028 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:12:12.299898 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:12:12.300429 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:12:12.307821 coreos-metadata[1984]: Jan 13 21:12:12.305 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:12:12.306397 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2016 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:12:12.312868 coreos-metadata[1984]: Jan 13 21:12:12.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:12:12.315037 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.315 INFO Fetch successful Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.316 INFO Fetch successful Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.322 INFO Fetch successful Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.322 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.324 INFO Fetch successful Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.327 INFO Fetch failed with 404: resource not found Jan 13 21:12:12.331135 coreos-metadata[1984]: Jan 13 21:12:12.327 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:12:12.334892 extend-filesystems[2033]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:12:12.334892 extend-filesystems[2033]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:12:12.334892 extend-filesystems[2033]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Jan 13 21:12:12.315090 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 21:12:12.366567 bash[2057]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:12:12.366742 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:12:12.378529 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1768) Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.338 INFO Fetch successful Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.338 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.339 INFO Fetch successful Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.341 INFO Fetch successful Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.343 INFO Fetch successful Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.343 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:12:12.378666 coreos-metadata[1984]: Jan 13 21:12:12.350 INFO Fetch successful Jan 13 21:12:12.315678 systemd-logind[1993]: New seat seat0. Jan 13 21:12:12.325656 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:12:12.327759 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:12:12.330796 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:12:12.333369 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:12:12.373582 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:12:12.395424 systemd[1]: Starting sshkeys.service... Jan 13 21:12:12.443599 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:12:12.444699 polkitd[2059]: Started polkitd version 121 Jan 13 21:12:12.464755 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:12:12.484008 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:12:12.487219 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:12:12.502507 polkitd[2059]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:12:12.503629 polkitd[2059]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:12:12.510019 polkitd[2059]: Finished loading, compiling and executing 2 rules Jan 13 21:12:12.511625 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:12:12.511887 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 21:12:12.527648 polkitd[2059]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:12:12.608520 systemd-hostnamed[2016]: Hostname set to (transient) Jan 13 21:12:12.609090 systemd-resolved[1935]: System hostname changed to 'ip-172-31-31-152'. Jan 13 21:12:12.698485 systemd-networkd[1933]: eth0: Gained IPv6LL Jan 13 21:12:12.714842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 21:12:12.719477 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:12:12.725392 coreos-metadata[2078]: Jan 13 21:12:12.725 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:12:12.731783 coreos-metadata[2078]: Jan 13 21:12:12.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:12:12.733176 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:12:12.745826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:12:12.752387 coreos-metadata[2078]: Jan 13 21:12:12.745 INFO Fetch successful Jan 13 21:12:12.752387 coreos-metadata[2078]: Jan 13 21:12:12.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:12:12.753892 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:12:12.763323 coreos-metadata[2078]: Jan 13 21:12:12.761 INFO Fetch successful Jan 13 21:12:12.768438 unknown[2078]: wrote ssh authorized keys file for user: core Jan 13 21:12:12.824795 locksmithd[2034]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:12:12.866059 update-ssh-keys[2162]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:12:12.870355 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:12:12.879993 systemd[1]: Finished sshkeys.service. Jan 13 21:12:12.904419 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 21:12:12.994195 amazon-ssm-agent[2151]: Initializing new seelog logger Jan 13 21:12:12.994698 amazon-ssm-agent[2151]: New Seelog Logger Creation Complete Jan 13 21:12:12.994698 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:12.994698 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:12.997339 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 processing appconfig overrides Jan 13 21:12:12.997339 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:12.997339 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:12.997339 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 processing appconfig overrides Jan 13 21:12:13.000847 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:13.000847 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:13.000847 amazon-ssm-agent[2151]: 2025/01/13 21:12:12 processing appconfig overrides Jan 13 21:12:13.000847 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO Proxy environment variables: Jan 13 21:12:13.005263 amazon-ssm-agent[2151]: 2025/01/13 21:12:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:12:13.005263 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 21:12:13.005263 amazon-ssm-agent[2151]: 2025/01/13 21:12:13 processing appconfig overrides Jan 13 21:12:13.070846 containerd[2018]: time="2025-01-13T21:12:13.069875613Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:12:13.103314 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO https_proxy: Jan 13 21:12:13.169226 containerd[2018]: time="2025-01-13T21:12:13.167349933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.171851 containerd[2018]: time="2025-01-13T21:12:13.171767793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:13.171962 containerd[2018]: time="2025-01-13T21:12:13.171863925Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:12:13.171962 containerd[2018]: time="2025-01-13T21:12:13.171901269Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:12:13.172433 containerd[2018]: time="2025-01-13T21:12:13.172388205Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:12:13.172498 containerd[2018]: time="2025-01-13T21:12:13.172448553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.174534 containerd[2018]: time="2025-01-13T21:12:13.174474777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:13.174534 containerd[2018]: time="2025-01-13T21:12:13.174528525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.174874449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.174921429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.174954693Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.174979473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.175160313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:12:13.175724 containerd[2018]: time="2025-01-13T21:12:13.175606497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:12:13.176250 containerd[2018]: time="2025-01-13T21:12:13.176083533Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:12:13.177480 containerd[2018]: time="2025-01-13T21:12:13.176122221Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:12:13.178585 containerd[2018]: time="2025-01-13T21:12:13.178547001Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:12:13.179007 containerd[2018]: time="2025-01-13T21:12:13.178766229Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191253645Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191389257Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191426997Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191461497Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191508969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:12:13.193365 containerd[2018]: time="2025-01-13T21:12:13.191774781Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.195985065Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196263573Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196360353Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196404177Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196451661Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196488993Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196520565Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196551717Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196880481Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.196975 containerd[2018]: time="2025-01-13T21:12:13.196926585Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199352313Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199483941Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199547085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199592709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199640121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199684737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199725633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199764369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199804101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199846725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199888797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199936173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.199976397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201295 containerd[2018]: time="2025-01-13T21:12:13.200032941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200086869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200136969Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200205045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200238249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200298225Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200438205Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200490057Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200527065Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200567385Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200594373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200633121Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200665701Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:12:13.201981 containerd[2018]: time="2025-01-13T21:12:13.200693145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:12:13.202531 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO http_proxy: Jan 13 21:12:13.206870 containerd[2018]: time="2025-01-13T21:12:13.203956522Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:12:13.206870 containerd[2018]: time="2025-01-13T21:12:13.204138862Z" level=info msg="Connect containerd service" Jan 13 21:12:13.206870 containerd[2018]: time="2025-01-13T21:12:13.204220090Z" level=info msg="using legacy CRI server" Jan 13 21:12:13.206870 containerd[2018]: time="2025-01-13T21:12:13.204239830Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:12:13.206870 containerd[2018]: time="2025-01-13T21:12:13.204442414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:12:13.209933 containerd[2018]: time="2025-01-13T21:12:13.209865658Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in 
/etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:12:13.210178 containerd[2018]: time="2025-01-13T21:12:13.210088174Z" level=info msg="Start subscribing containerd event" Jan 13 21:12:13.210264 containerd[2018]: time="2025-01-13T21:12:13.210196546Z" level=info msg="Start recovering state" Jan 13 21:12:13.210358 containerd[2018]: time="2025-01-13T21:12:13.210337606Z" level=info msg="Start event monitor" Jan 13 21:12:13.210407 containerd[2018]: time="2025-01-13T21:12:13.210363646Z" level=info msg="Start snapshots syncer" Jan 13 21:12:13.210407 containerd[2018]: time="2025-01-13T21:12:13.210385546Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:12:13.210507 containerd[2018]: time="2025-01-13T21:12:13.210404878Z" level=info msg="Start streaming server" Jan 13 21:12:13.213666 containerd[2018]: time="2025-01-13T21:12:13.213610606Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:12:13.213798 containerd[2018]: time="2025-01-13T21:12:13.213726430Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:12:13.213937 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:12:13.220110 containerd[2018]: time="2025-01-13T21:12:13.219347830Z" level=info msg="containerd successfully booted in 0.155894s" Jan 13 21:12:13.300431 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO no_proxy: Jan 13 21:12:13.400496 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:12:13.498936 amazon-ssm-agent[2151]: 2025-01-13 21:12:12 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:12:13.582533 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO Agent will take identity from EC2 Jan 13 21:12:13.582533 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:12:13.582841 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:12:13.582841 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:12:13.582841 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:12:13.583100 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 21:12:13.583100 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:12:13.583100 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:12:13.583376 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [Registrar] Starting registrar module Jan 13 21:12:13.583376 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:12:13.584705 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 21:12:13.584867 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:12:13.585059 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:12:13.585059 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:12:13.598053 amazon-ssm-agent[2151]: 2025-01-13 21:12:13 INFO [CredentialRefresher] Next credential rotation will be in 31.916612026266666 minutes Jan 13 21:12:14.110762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:12:14.118286 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:12:14.581854 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:12:14.618678 amazon-ssm-agent[2151]: 2025-01-13 21:12:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:12:14.671780 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:12:14.683803 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:12:14.696759 systemd[1]: Started sshd@0-172.31.31.152:22-139.178.89.65:57506.service - OpenSSH per-connection server daemon (139.178.89.65:57506). Jan 13 21:12:14.718310 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:12:14.720295 amazon-ssm-agent[2151]: 2025-01-13 21:12:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2223) started Jan 13 21:12:14.720505 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:12:14.734563 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:12:14.757464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:12:14.769824 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:12:14.784053 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:12:14.787794 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:12:14.789786 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:12:14.792585 systemd[1]: Startup finished in 1.148s (kernel) + 8.865s (initrd) + 8.175s (userspace) = 18.189s. Jan 13 21:12:14.821422 amazon-ssm-agent[2151]: 2025-01-13 21:12:14 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:12:14.999858 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 57506 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:15.003657 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:15.023593 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:12:15.037094 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:12:15.045449 systemd-logind[1993]: New session 1 of user core. Jan 13 21:12:15.074495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:12:15.088045 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 21:12:15.101786 (systemd)[2252]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:12:15.143295 kubelet[2211]: E0113 21:12:15.143125 2211 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:12:15.147804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:12:15.148165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:12:15.148696 systemd[1]: kubelet.service: Consumed 1.334s CPU time. Jan 13 21:12:15.159654 ntpd[1989]: Listen normally on 6 eth0 [fe80::492:68ff:feb8:364d%2]:123 Jan 13 21:12:15.160780 ntpd[1989]: 13 Jan 21:12:15 ntpd[1989]: Listen normally on 6 eth0 [fe80::492:68ff:feb8:364d%2]:123 Jan 13 21:12:15.331624 systemd[2252]: Queued start job for default target default.target. Jan 13 21:12:15.344924 systemd[2252]: Created slice app.slice - User Application Slice. Jan 13 21:12:15.344989 systemd[2252]: Reached target paths.target - Paths. Jan 13 21:12:15.345021 systemd[2252]: Reached target timers.target - Timers. Jan 13 21:12:15.347566 systemd[2252]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:12:15.368628 systemd[2252]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:12:15.368857 systemd[2252]: Reached target sockets.target - Sockets. Jan 13 21:12:15.368892 systemd[2252]: Reached target basic.target - Basic System. Jan 13 21:12:15.368973 systemd[2252]: Reached target default.target - Main User Target. Jan 13 21:12:15.369036 systemd[2252]: Startup finished in 251ms. Jan 13 21:12:15.369262 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:12:15.379716 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:12:15.536781 systemd[1]: Started sshd@1-172.31.31.152:22-139.178.89.65:51464.service - OpenSSH per-connection server daemon (139.178.89.65:51464). Jan 13 21:12:15.712726 sshd[2264]: Accepted publickey for core from 139.178.89.65 port 51464 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:15.715312 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:15.722510 systemd-logind[1993]: New session 2 of user core. Jan 13 21:12:15.731526 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:12:15.854790 sshd[2264]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:15.861240 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:12:15.862251 systemd[1]: sshd@1-172.31.31.152:22-139.178.89.65:51464.service: Deactivated successfully. Jan 13 21:12:15.866131 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:12:15.868941 systemd-logind[1993]: Removed session 2. Jan 13 21:12:15.896817 systemd[1]: Started sshd@2-172.31.31.152:22-139.178.89.65:51472.service - OpenSSH per-connection server daemon (139.178.89.65:51472). Jan 13 21:12:16.074486 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 51472 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:16.076485 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:16.085608 systemd-logind[1993]: New session 3 of user core. 
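The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml has not been provisioned yet, so the unit exits until a config exists at that path. A hedged sketch of dropping a minimal KubeletConfiguration there; the field values are illustrative assumptions (the static pod path and systemd cgroup driver match what the later kubelet run reports), not the file this node actually receives:

```python
# Hypothetical sketch: write a minimal KubeletConfiguration to the path the
# failing unit expects (/var/lib/kubelet/config.yaml). The field values are
# illustrative assumptions, not the config that provisioning installs later.
# Writing to this path requires root.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_kubelet_config()
```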
Jan 13 21:12:16.095517 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:12:16.215189 sshd[2271]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:16.221604 systemd[1]: sshd@2-172.31.31.152:22-139.178.89.65:51472.service: Deactivated successfully. Jan 13 21:12:16.225497 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:12:16.227111 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:12:16.228905 systemd-logind[1993]: Removed session 3. Jan 13 21:12:16.252790 systemd[1]: Started sshd@3-172.31.31.152:22-139.178.89.65:51484.service - OpenSSH per-connection server daemon (139.178.89.65:51484). Jan 13 21:12:16.430382 sshd[2278]: Accepted publickey for core from 139.178.89.65 port 51484 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:16.432383 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:16.441616 systemd-logind[1993]: New session 4 of user core. Jan 13 21:12:16.445528 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:12:16.571433 sshd[2278]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:16.576723 systemd[1]: sshd@3-172.31.31.152:22-139.178.89.65:51484.service: Deactivated successfully. Jan 13 21:12:16.577226 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:12:16.580202 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:12:16.585210 systemd-logind[1993]: Removed session 4. Jan 13 21:12:16.607514 systemd[1]: Started sshd@4-172.31.31.152:22-139.178.89.65:51494.service - OpenSSH per-connection server daemon (139.178.89.65:51494). Jan 13 21:12:16.791750 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 51494 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:16.794647 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:16.801841 systemd-logind[1993]: New session 5 of user core. Jan 13 21:12:16.811516 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:12:16.940635 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:12:16.941810 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:16.956867 sudo[2288]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:16.980735 sshd[2285]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:16.987233 systemd[1]: sshd@4-172.31.31.152:22-139.178.89.65:51494.service: Deactivated successfully. Jan 13 21:12:16.991184 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:12:16.992699 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:12:16.994460 systemd-logind[1993]: Removed session 5. Jan 13 21:12:17.022784 systemd[1]: Started sshd@5-172.31.31.152:22-139.178.89.65:51508.service - OpenSSH per-connection server daemon (139.178.89.65:51508). Jan 13 21:12:17.198493 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 51508 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:17.200533 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:17.208825 systemd-logind[1993]: New session 6 of user core. Jan 13 21:12:17.223594 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 21:12:17.329471 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:12:17.330116 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:17.336139 sudo[2298]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:17.346124 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:12:17.346782 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:17.373145 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:12:17.375962 auditctl[2301]: No rules Jan 13 21:12:17.376716 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:12:17.377070 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:12:17.385978 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:12:17.437550 augenrules[2319]: No rules Jan 13 21:12:17.440369 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:12:17.442135 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:17.465974 sshd[2293]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:17.473069 systemd[1]: sshd@5-172.31.31.152:22-139.178.89.65:51508.service: Deactivated successfully. Jan 13 21:12:17.476482 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:12:17.477667 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:12:17.479657 systemd-logind[1993]: Removed session 6. Jan 13 21:12:17.504806 systemd[1]: Started sshd@6-172.31.31.152:22-139.178.89.65:51512.service - OpenSSH per-connection server daemon (139.178.89.65:51512). Jan 13 21:12:17.676492 sshd[2327]: Accepted publickey for core from 139.178.89.65 port 51512 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:17.679067 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:17.686364 systemd-logind[1993]: New session 7 of user core. Jan 13 21:12:17.695512 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:12:17.798352 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:12:17.798958 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:12:18.953139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:12:18.953501 systemd[1]: kubelet.service: Consumed 1.334s CPU time. Jan 13 21:12:18.964767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:12:19.002121 systemd[1]: Reloading requested from client PID 2367 ('systemctl') (unit session-7.scope)... Jan 13 21:12:19.002319 systemd[1]: Reloading... Jan 13 21:12:19.029785 systemd-resolved[1935]: Clock change detected. Flushing caches. Jan 13 21:12:19.100027 zram_generator::config[2410]: No configuration found. Jan 13 21:12:19.341154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:12:19.510228 systemd[1]: Reloading finished in 636 ms. 
Jan 13 21:12:19.608410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:12:19.608593 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:12:19.609279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:12:19.620612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:12:19.977103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:12:19.992719 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:12:20.081416 kubelet[2471]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:12:20.081861 kubelet[2471]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:12:20.081943 kubelet[2471]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:12:20.082256 kubelet[2471]: I0113 21:12:20.082198 2471 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:12:20.922110 kubelet[2471]: I0113 21:12:20.922059 2471 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:12:20.922283 kubelet[2471]: I0113 21:12:20.922264 2471 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:12:20.923013 kubelet[2471]: I0113 21:12:20.922696 2471 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:12:20.943319 kubelet[2471]: I0113 21:12:20.943276 2471 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:12:20.956099 kubelet[2471]: I0113 21:12:20.956063 2471 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:12:20.958106 kubelet[2471]: I0113 21:12:20.958073 2471 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:12:20.959495 kubelet[2471]: I0113 21:12:20.958902 2471 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:12:20.959495 kubelet[2471]: I0113 21:12:20.959123 2471 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:12:20.959495 kubelet[2471]: I0113 21:12:20.959154 2471 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:12:20.960833 kubelet[2471]: I0113 21:12:20.960520 2471 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:12:20.965038 kubelet[2471]: I0113 21:12:20.964678 2471 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:12:20.965038 kubelet[2471]: I0113 21:12:20.964731 2471 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:12:20.965038 kubelet[2471]: I0113 21:12:20.964774 2471 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:12:20.965038 kubelet[2471]: I0113 21:12:20.964808 2471 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:12:20.966593 kubelet[2471]: E0113 21:12:20.966540 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:20.966706 kubelet[2471]: E0113 21:12:20.966644 2471 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:20.969390 kubelet[2471]: I0113 21:12:20.969344 2471 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:12:20.969923 kubelet[2471]: I0113 21:12:20.969884 2471 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:12:20.970076 kubelet[2471]: W0113 21:12:20.970045 2471 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
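The nodeConfig dump above lists the hard eviction thresholds the kubelet will enforce (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, all with the LessThan operator). A small sketch of how those operator/value pairs are evaluated; the node stats used below are made-up numbers purely for illustration:

```python
# Sketch of how the HardEvictionThresholds in the nodeConfig dump are read:
# each signal is compared with LessThan against either an absolute quantity
# (100Mi for memory.available) or a percentage of the node's capacity.
THRESHOLDS = {
    "memory.available":  {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":  {"percentage": 0.10},
    "nodefs.inodesFree": {"percentage": 0.05},
    "imagefs.available": {"percentage": 0.15},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    rule = THRESHOLDS[signal]
    limit = rule.get("quantity", rule.get("percentage", 0.0) * capacity)
    return available < limit  # operator is LessThan for every hard threshold

if __name__ == "__main__":
    # Hypothetical node: 4 GiB RAM with 80 MiB free -> memory threshold breached.
    print(breached("memory.available", 80 * 1024**2, 4 * 1024**3))   # True
    # Hypothetical root fs: 20 GiB with 5 GiB free (25%) -> not breached.
    print(breached("nodefs.available", 5 * 1024**3, 20 * 1024**3))   # False
```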
Jan 13 21:12:20.971323 kubelet[2471]: I0113 21:12:20.971276 2471 server.go:1256] "Started kubelet" Jan 13 21:12:20.973022 kubelet[2471]: I0113 21:12:20.972951 2471 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:12:20.975391 kubelet[2471]: I0113 21:12:20.975336 2471 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:12:20.980024 kubelet[2471]: I0113 21:12:20.976574 2471 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:12:20.980024 kubelet[2471]: I0113 21:12:20.976951 2471 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:12:20.980931 kubelet[2471]: I0113 21:12:20.979970 2471 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:12:20.995610 kubelet[2471]: E0113 21:12:20.995564 2471 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.152.181a5ce414832129 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.152,UID:172.31.31.152,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.31.152,},FirstTimestamp:2025-01-13 21:12:20.971184425 +0000 UTC m=+0.971273418,LastTimestamp:2025-01-13 21:12:20.971184425 +0000 UTC m=+0.971273418,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.152,}" Jan 13 21:12:20.995976 kubelet[2471]: W0113 21:12:20.995932 2471 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.31.152" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:12:20.997899 kubelet[2471]: E0113 21:12:20.997862 2471 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.31.152" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:12:20.998078 kubelet[2471]: I0113 21:12:20.996142 2471 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:12:20.998351 kubelet[2471]: I0113 21:12:20.996182 2471 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:12:20.998528 kubelet[2471]: I0113 21:12:20.998505 2471 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:12:20.998618 kubelet[2471]: W0113 21:12:20.997470 2471 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:12:21.001309 kubelet[2471]: E0113 21:12:21.001275 2471 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:12:21.002488 kubelet[2471]: E0113 21:12:20.997580 2471 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:12:21.002675 kubelet[2471]: I0113 21:12:21.001660 2471 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:12:21.003009 kubelet[2471]: I0113 21:12:21.002952 2471 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:12:21.004816 kubelet[2471]: I0113 21:12:21.004783 2471 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:12:21.010444 kubelet[2471]: W0113 21:12:21.010406 2471 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:12:21.011295 kubelet[2471]: E0113 21:12:21.011258 2471 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:12:21.011382 kubelet[2471]: E0113 21:12:21.010790 2471 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.31.152.181a5ce416157b1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.31.152,UID:172.31.31.152,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.31.152,},FirstTimestamp:2025-01-13 21:12:20.997552925 +0000 UTC m=+0.997641918,LastTimestamp:2025-01-13 21:12:20.997552925 +0000 UTC m=+0.997641918,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.31.152,}" Jan 13 21:12:21.011382 kubelet[2471]: E0113 21:12:21.011094 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.31.152\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:12:21.034104 kubelet[2471]: I0113 21:12:21.033578 2471 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:12:21.034104 kubelet[2471]: I0113 21:12:21.033614 2471 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:12:21.034104 kubelet[2471]: I0113 21:12:21.033645 2471 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:12:21.037312 kubelet[2471]: I0113 21:12:21.037112 2471 policy_none.go:49] "None policy: Start" Jan 13 21:12:21.038913 kubelet[2471]: I0113 21:12:21.038377 2471 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:12:21.038913 kubelet[2471]: I0113 21:12:21.038439 2471 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:12:21.057548 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:12:21.093976 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
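The kubepods slices created above form the top of the QoS cgroup hierarchy; the per-pod slices that appear later in this log (for example kubepods-besteffort-pod76b1837a_2e26_432b_a1ef_722552682b9a.slice) follow a fixed naming scheme under the systemd cgroup driver. A sketch of that naming, inferred from the entries in this log:

```python
# Sketch of the slice naming visible in these entries: kubelet, using the
# systemd cgroup driver, creates kubepods.slice plus one sub-slice per QoS
# class, and per-pod slices replace the dashes in the pod UID with underscores.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    qos = qos_class.lower()
    # Guaranteed pods sit directly under kubepods.slice; burstable and
    # besteffort pods sit under their QoS sub-slice.
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

if __name__ == "__main__":
    print(pod_slice_name("besteffort", "76b1837a-2e26-432b-a1ef-722552682b9a"))
    print(pod_slice_name("burstable",  "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"))
```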
Jan 13 21:12:21.101075 kubelet[2471]: I0113 21:12:21.100051 2471 kubelet_node_status.go:73] "Attempting to register node" node="172.31.31.152" Jan 13 21:12:21.102229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:12:21.111004 kubelet[2471]: I0113 21:12:21.110909 2471 kubelet_node_status.go:76] "Successfully registered node" node="172.31.31.152" Jan 13 21:12:21.113720 kubelet[2471]: I0113 21:12:21.112959 2471 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:12:21.113720 kubelet[2471]: I0113 21:12:21.113400 2471 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:12:21.117957 kubelet[2471]: E0113 21:12:21.117667 2471 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.31.152\" not found" Jan 13 21:12:21.138419 kubelet[2471]: E0113 21:12:21.138361 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.149379 kubelet[2471]: I0113 21:12:21.149327 2471 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:12:21.151539 kubelet[2471]: I0113 21:12:21.151484 2471 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:12:21.151539 kubelet[2471]: I0113 21:12:21.151531 2471 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:12:21.151735 kubelet[2471]: I0113 21:12:21.151563 2471 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:12:21.151735 kubelet[2471]: E0113 21:12:21.151632 2471 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 21:12:21.239291 kubelet[2471]: E0113 21:12:21.239144 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.340046 kubelet[2471]: E0113 21:12:21.339971 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.440722 kubelet[2471]: E0113 21:12:21.440672 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.541879 kubelet[2471]: E0113 21:12:21.541755 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.642558 kubelet[2471]: E0113 21:12:21.642520 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.743259 kubelet[2471]: E0113 21:12:21.743211 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.843961 kubelet[2471]: E0113 21:12:21.843863 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.925738 kubelet[2471]: I0113 21:12:21.925691 2471 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:12:21.925920 kubelet[2471]: W0113 21:12:21.925870 2471 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 
21:12:21.944038 kubelet[2471]: E0113 21:12:21.943966 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:21.967263 kubelet[2471]: E0113 21:12:21.967223 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:22.044780 kubelet[2471]: E0113 21:12:22.044731 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.145873 kubelet[2471]: E0113 21:12:22.145760 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.246108 kubelet[2471]: E0113 21:12:22.246062 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.346538 kubelet[2471]: E0113 21:12:22.346488 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.447287 kubelet[2471]: E0113 21:12:22.447168 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.511446 sudo[2330]: pam_unix(sudo:session): session closed for user root Jan 13 21:12:22.534123 sshd[2327]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:22.539069 systemd[1]: sshd@6-172.31.31.152:22-139.178.89.65:51512.service: Deactivated successfully. Jan 13 21:12:22.543072 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:12:22.547461 kubelet[2471]: E0113 21:12:22.547387 2471 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.31.152\" not found" Jan 13 21:12:22.547658 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:12:22.549836 systemd-logind[1993]: Removed session 7. Jan 13 21:12:22.649418 kubelet[2471]: I0113 21:12:22.649346 2471 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:12:22.650036 containerd[2018]: time="2025-01-13T21:12:22.649866749Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:12:22.650597 kubelet[2471]: I0113 21:12:22.650417 2471 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:12:22.967574 kubelet[2471]: E0113 21:12:22.967513 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:22.967574 kubelet[2471]: I0113 21:12:22.967517 2471 apiserver.go:52] "Watching apiserver" Jan 13 21:12:22.973630 kubelet[2471]: I0113 21:12:22.973430 2471 topology_manager.go:215] "Topology Admit Handler" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" podNamespace="kube-system" podName="cilium-9qjdk" Jan 13 21:12:22.973630 kubelet[2471]: I0113 21:12:22.973605 2471 topology_manager.go:215] "Topology Admit Handler" podUID="76b1837a-2e26-432b-a1ef-722552682b9a" podNamespace="kube-system" podName="kube-proxy-6nf7m" Jan 13 21:12:22.990628 systemd[1]: Created slice kubepods-besteffort-pod76b1837a_2e26_432b_a1ef_722552682b9a.slice - libcontainer container kubepods-besteffort-pod76b1837a_2e26_432b_a1ef_722552682b9a.slice. 
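Earlier, containerd reported that no CNI network config was found in /etc/cni/net.d, and here the kubelet pushes PodCIDR 192.168.1.0/24 through CRI while waiting for "other system components to drop the config." A hypothetical sketch of the kind of minimal conflist a network add-on could place in that directory; the bridge/host-local plugin choice is an illustrative assumption, not what Cilium installs on this node:

```python
# Hypothetical sketch: a minimal CNI conflist that would clear the
# "no network config found in /etc/cni/net.d" condition. Plugin choice and
# file name are illustrative assumptions; the subnet reuses the advertised
# PodCIDR 192.168.1.0/24 from the log. Writing to /etc/cni/net.d needs root.
import json
from pathlib import Path

CONFLIST = {
    "cniVersion": "0.3.1",
    "name": "example-pod-network",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"},
        },
    ],
}

def write_conflist(directory: str = "/etc/cni/net.d") -> Path:
    path = Path(directory) / "10-example.conflist"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(CONFLIST, indent=2))
    return path

if __name__ == "__main__":
    print("wrote", write_conflist())
```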
Jan 13 21:12:22.999348 kubelet[2471]: I0113 21:12:22.999299 2471 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:12:23.011913 kubelet[2471]: I0113 21:12:23.010829 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-kernel\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.011913 kubelet[2471]: I0113 21:12:23.010899 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76b1837a-2e26-432b-a1ef-722552682b9a-kube-proxy\") pod \"kube-proxy-6nf7m\" (UID: \"76b1837a-2e26-432b-a1ef-722552682b9a\") " pod="kube-system/kube-proxy-6nf7m" Jan 13 21:12:23.011913 kubelet[2471]: I0113 21:12:23.010946 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76b1837a-2e26-432b-a1ef-722552682b9a-lib-modules\") pod \"kube-proxy-6nf7m\" (UID: \"76b1837a-2e26-432b-a1ef-722552682b9a\") " pod="kube-system/kube-proxy-6nf7m" Jan 13 21:12:23.011913 kubelet[2471]: I0113 21:12:23.011016 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqc2\" (UniqueName: \"kubernetes.io/projected/76b1837a-2e26-432b-a1ef-722552682b9a-kube-api-access-wrqc2\") pod \"kube-proxy-6nf7m\" (UID: \"76b1837a-2e26-432b-a1ef-722552682b9a\") " pod="kube-system/kube-proxy-6nf7m" Jan 13 21:12:23.011913 kubelet[2471]: I0113 21:12:23.011069 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hostproc\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011114 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-cgroup\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011159 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-clustermesh-secrets\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011214 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-bpf-maps\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011261 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-net\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " 
pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011306 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22wc2\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-kube-api-access-22wc2\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012289 kubelet[2471]: I0113 21:12:23.011347 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-xtables-lock\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011427 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-config-path\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011471 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hubble-tls\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011530 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76b1837a-2e26-432b-a1ef-722552682b9a-xtables-lock\") pod \"kube-proxy-6nf7m\" (UID: \"76b1837a-2e26-432b-a1ef-722552682b9a\") " pod="kube-system/kube-proxy-6nf7m" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011574 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-run\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011618 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-etc-cni-netd\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012569 kubelet[2471]: I0113 21:12:23.011661 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-lib-modules\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.012849 kubelet[2471]: I0113 21:12:23.011710 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cni-path\") pod \"cilium-9qjdk\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " pod="kube-system/cilium-9qjdk" Jan 13 21:12:23.019344 systemd[1]: Created slice kubepods-burstable-podcb7f921a_1941_467c_8f69_fd5d81bdb0e4.slice - libcontainer container 
kubepods-burstable-podcb7f921a_1941_467c_8f69_fd5d81bdb0e4.slice. Jan 13 21:12:23.316673 containerd[2018]: time="2025-01-13T21:12:23.316378745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nf7m,Uid:76b1837a-2e26-432b-a1ef-722552682b9a,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:23.335075 containerd[2018]: time="2025-01-13T21:12:23.334158557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qjdk,Uid:cb7f921a-1941-467c-8f69-fd5d81bdb0e4,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:23.946888 containerd[2018]: time="2025-01-13T21:12:23.946802408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:12:23.949678 containerd[2018]: time="2025-01-13T21:12:23.949614800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 21:12:23.953030 containerd[2018]: time="2025-01-13T21:12:23.952948304Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:12:23.954645 containerd[2018]: time="2025-01-13T21:12:23.954575204Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:12:23.956290 containerd[2018]: time="2025-01-13T21:12:23.956218700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:12:23.960888 containerd[2018]: time="2025-01-13T21:12:23.960792896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:12:23.963789 containerd[2018]: time="2025-01-13T21:12:23.963715724Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 647.178723ms" Jan 13 21:12:23.968002 containerd[2018]: time="2025-01-13T21:12:23.967933496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 632.601747ms" Jan 13 21:12:23.968680 kubelet[2471]: E0113 21:12:23.968618 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:24.134613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243652815.mount: Deactivated successfully. Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295506978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295629354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.294463950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295237746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295279698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295458366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:24.296226 containerd[2018]: time="2025-01-13T21:12:24.295708254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:24.296678 containerd[2018]: time="2025-01-13T21:12:24.296581998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:24.475344 systemd[1]: Started cri-containerd-646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd.scope - libcontainer container 646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd. Jan 13 21:12:24.479625 systemd[1]: Started cri-containerd-7411da8ae119d97aebdbf494d6232f172344fda0e23f678d693938d66a0e0bbd.scope - libcontainer container 7411da8ae119d97aebdbf494d6232f172344fda0e23f678d693938d66a0e0bbd. Jan 13 21:12:24.529427 containerd[2018]: time="2025-01-13T21:12:24.529376815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qjdk,Uid:cb7f921a-1941-467c-8f69-fd5d81bdb0e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\"" Jan 13 21:12:24.545855 containerd[2018]: time="2025-01-13T21:12:24.545544127Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:12:24.560353 containerd[2018]: time="2025-01-13T21:12:24.560216971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nf7m,Uid:76b1837a-2e26-432b-a1ef-722552682b9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7411da8ae119d97aebdbf494d6232f172344fda0e23f678d693938d66a0e0bbd\"" Jan 13 21:12:24.969838 kubelet[2471]: E0113 21:12:24.969660 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:25.970632 kubelet[2471]: E0113 21:12:25.970561 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:26.971428 kubelet[2471]: E0113 21:12:26.971378 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:27.971791 kubelet[2471]: E0113 21:12:27.971724 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:28.972531 kubelet[2471]: E0113 21:12:28.972481 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:29.972814 kubelet[2471]: E0113 21:12:29.972766 
2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:30.973598 kubelet[2471]: E0113 21:12:30.973542 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:31.973858 kubelet[2471]: E0113 21:12:31.973796 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:32.974117 kubelet[2471]: E0113 21:12:32.974066 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:33.974499 kubelet[2471]: E0113 21:12:33.974415 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:34.975108 kubelet[2471]: E0113 21:12:34.975046 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:35.975634 kubelet[2471]: E0113 21:12:35.975580 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:36.976021 kubelet[2471]: E0113 21:12:36.975897 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:37.977804 kubelet[2471]: E0113 21:12:37.977762 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:38.167517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953835501.mount: Deactivated successfully. Jan 13 21:12:38.979544 kubelet[2471]: E0113 21:12:38.979481 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:39.980697 kubelet[2471]: E0113 21:12:39.980634 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:40.546275 containerd[2018]: time="2025-01-13T21:12:40.546196606Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:40.547946 containerd[2018]: time="2025-01-13T21:12:40.547876090Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651498" Jan 13 21:12:40.549462 containerd[2018]: time="2025-01-13T21:12:40.549377026Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:40.552935 containerd[2018]: time="2025-01-13T21:12:40.552710014Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 16.007100919s" Jan 13 21:12:40.552935 containerd[2018]: time="2025-01-13T21:12:40.552765670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:12:40.554362 containerd[2018]: time="2025-01-13T21:12:40.554267818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:12:40.557763 containerd[2018]: time="2025-01-13T21:12:40.557412898Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:12:40.575232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469681299.mount: Deactivated successfully. Jan 13 21:12:40.584614 containerd[2018]: time="2025-01-13T21:12:40.584557715Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\"" Jan 13 21:12:40.586186 containerd[2018]: time="2025-01-13T21:12:40.586133447Z" level=info msg="StartContainer for \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\"" Jan 13 21:12:40.642291 systemd[1]: Started cri-containerd-9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db.scope - libcontainer container 9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db. Jan 13 21:12:40.688805 containerd[2018]: time="2025-01-13T21:12:40.688575395Z" level=info msg="StartContainer for \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\" returns successfully" Jan 13 21:12:40.714633 systemd[1]: cri-containerd-9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db.scope: Deactivated successfully. Jan 13 21:12:40.965637 kubelet[2471]: E0113 21:12:40.965577 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:40.980858 kubelet[2471]: E0113 21:12:40.980797 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:41.572134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db-rootfs.mount: Deactivated successfully. Jan 13 21:12:41.981898 kubelet[2471]: E0113 21:12:41.981785 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:42.376567 containerd[2018]: time="2025-01-13T21:12:42.376405595Z" level=info msg="shim disconnected" id=9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db namespace=k8s.io Jan 13 21:12:42.378517 containerd[2018]: time="2025-01-13T21:12:42.378470879Z" level=warning msg="cleaning up after shim disconnected" id=9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db namespace=k8s.io Jan 13 21:12:42.378701 containerd[2018]: time="2025-01-13T21:12:42.378655379Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:42.517447 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:12:42.982900 kubelet[2471]: E0113 21:12:42.982770 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:43.014390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447404801.mount: Deactivated successfully. 
Jan 13 21:12:43.229333 containerd[2018]: time="2025-01-13T21:12:43.229259412Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:12:43.269941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849369531.mount: Deactivated successfully. Jan 13 21:12:43.277472 containerd[2018]: time="2025-01-13T21:12:43.277277892Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\"" Jan 13 21:12:43.278314 containerd[2018]: time="2025-01-13T21:12:43.278184492Z" level=info msg="StartContainer for \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\"" Jan 13 21:12:43.340395 systemd[1]: Started cri-containerd-2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924.scope - libcontainer container 2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924. Jan 13 21:12:43.397067 containerd[2018]: time="2025-01-13T21:12:43.396862357Z" level=info msg="StartContainer for \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\" returns successfully" Jan 13 21:12:43.422872 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:12:43.423462 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:43.423577 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:43.438168 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:43.438639 systemd[1]: cri-containerd-2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924.scope: Deactivated successfully. Jan 13 21:12:43.480085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 21:12:43.940384 containerd[2018]: time="2025-01-13T21:12:43.940325943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:43.941934 containerd[2018]: time="2025-01-13T21:12:43.941877975Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 21:12:43.943606 containerd[2018]: time="2025-01-13T21:12:43.943564095Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:43.947841 containerd[2018]: time="2025-01-13T21:12:43.947792103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:43.949879 containerd[2018]: time="2025-01-13T21:12:43.949235943Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 3.394908281s" Jan 13 21:12:43.949879 containerd[2018]: time="2025-01-13T21:12:43.949291863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 21:12:43.953902 containerd[2018]: time="2025-01-13T21:12:43.953719395Z" level=info msg="CreateContainer within sandbox \"7411da8ae119d97aebdbf494d6232f172344fda0e23f678d693938d66a0e0bbd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:12:43.955322 containerd[2018]: time="2025-01-13T21:12:43.955244379Z" level=info msg="shim disconnected" id=2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924 namespace=k8s.io Jan 13 21:12:43.955668 containerd[2018]: time="2025-01-13T21:12:43.955502883Z" level=warning msg="cleaning up after shim disconnected" id=2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924 namespace=k8s.io Jan 13 21:12:43.955843 containerd[2018]: time="2025-01-13T21:12:43.955540371Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:43.984183 kubelet[2471]: E0113 21:12:43.984083 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:43.990134 containerd[2018]: time="2025-01-13T21:12:43.989961723Z" level=info msg="CreateContainer within sandbox \"7411da8ae119d97aebdbf494d6232f172344fda0e23f678d693938d66a0e0bbd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42\"" Jan 13 21:12:43.990694 containerd[2018]: time="2025-01-13T21:12:43.990636927Z" level=info msg="StartContainer for \"46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42\"" Jan 13 21:12:44.019546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924-rootfs.mount: Deactivated successfully. Jan 13 21:12:44.038428 systemd[1]: run-containerd-runc-k8s.io-46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42-runc.XBBUCJ.mount: Deactivated successfully. 
Jan 13 21:12:44.053303 systemd[1]: Started cri-containerd-46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42.scope - libcontainer container 46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42. Jan 13 21:12:44.106300 containerd[2018]: time="2025-01-13T21:12:44.106153560Z" level=info msg="StartContainer for \"46d455309870bf1a297001803067ac6123b4ca79036d3113b0c8f4b615dafd42\" returns successfully" Jan 13 21:12:44.233399 containerd[2018]: time="2025-01-13T21:12:44.233072773Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:12:44.256871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2253019232.mount: Deactivated successfully. Jan 13 21:12:44.287601 containerd[2018]: time="2025-01-13T21:12:44.287517421Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\"" Jan 13 21:12:44.288324 containerd[2018]: time="2025-01-13T21:12:44.288255649Z" level=info msg="StartContainer for \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\"" Jan 13 21:12:44.332567 systemd[1]: Started cri-containerd-b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed.scope - libcontainer container b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed. Jan 13 21:12:44.390536 containerd[2018]: time="2025-01-13T21:12:44.390468937Z" level=info msg="StartContainer for \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\" returns successfully" Jan 13 21:12:44.398946 systemd[1]: cri-containerd-b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed.scope: Deactivated successfully. 
Jan 13 21:12:44.496147 containerd[2018]: time="2025-01-13T21:12:44.495603122Z" level=info msg="shim disconnected" id=b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed namespace=k8s.io Jan 13 21:12:44.496147 containerd[2018]: time="2025-01-13T21:12:44.495681266Z" level=warning msg="cleaning up after shim disconnected" id=b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed namespace=k8s.io Jan 13 21:12:44.496147 containerd[2018]: time="2025-01-13T21:12:44.495702098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:44.523692 containerd[2018]: time="2025-01-13T21:12:44.523577306Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:12:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 21:12:44.984881 kubelet[2471]: E0113 21:12:44.984818 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:45.247069 containerd[2018]: time="2025-01-13T21:12:45.246866546Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:12:45.283819 kubelet[2471]: I0113 21:12:45.283763 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6nf7m" podStartSLOduration=4.8960557300000005 podStartE2EDuration="24.283701842s" podCreationTimestamp="2025-01-13 21:12:21 +0000 UTC" firstStartedPulling="2025-01-13 21:12:24.562521595 +0000 UTC m=+4.562610576" lastFinishedPulling="2025-01-13 21:12:43.950167707 +0000 UTC m=+23.950256688" observedRunningTime="2025-01-13 21:12:44.284538529 +0000 UTC m=+24.284627534" watchObservedRunningTime="2025-01-13 21:12:45.283701842 +0000 UTC m=+25.283790847" Jan 13 21:12:45.290532 containerd[2018]: time="2025-01-13T21:12:45.290447738Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\"" Jan 13 21:12:45.291910 containerd[2018]: time="2025-01-13T21:12:45.291813002Z" level=info msg="StartContainer for \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\"" Jan 13 21:12:45.342307 systemd[1]: Started cri-containerd-d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf.scope - libcontainer container d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf. Jan 13 21:12:45.384739 systemd[1]: cri-containerd-d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf.scope: Deactivated successfully. 
Jan 13 21:12:45.389505 containerd[2018]: time="2025-01-13T21:12:45.389341466Z" level=info msg="StartContainer for \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\" returns successfully" Jan 13 21:12:45.430020 containerd[2018]: time="2025-01-13T21:12:45.429868395Z" level=info msg="shim disconnected" id=d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf namespace=k8s.io Jan 13 21:12:45.430020 containerd[2018]: time="2025-01-13T21:12:45.430016763Z" level=warning msg="cleaning up after shim disconnected" id=d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf namespace=k8s.io Jan 13 21:12:45.430442 containerd[2018]: time="2025-01-13T21:12:45.430039335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:45.985933 kubelet[2471]: E0113 21:12:45.985854 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:46.015400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf-rootfs.mount: Deactivated successfully. Jan 13 21:12:46.254849 containerd[2018]: time="2025-01-13T21:12:46.254363067Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:12:46.312488 containerd[2018]: time="2025-01-13T21:12:46.312410415Z" level=info msg="CreateContainer within sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\"" Jan 13 21:12:46.313657 containerd[2018]: time="2025-01-13T21:12:46.313511751Z" level=info msg="StartContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\"" Jan 13 21:12:46.360295 systemd[1]: Started cri-containerd-579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323.scope - libcontainer container 579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323. 
Jan 13 21:12:46.410325 containerd[2018]: time="2025-01-13T21:12:46.410202304Z" level=info msg="StartContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" returns successfully" Jan 13 21:12:46.537057 kubelet[2471]: I0113 21:12:46.536644 2471 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:12:46.987114 kubelet[2471]: E0113 21:12:46.987032 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:47.274397 kernel: Initializing XFRM netlink socket Jan 13 21:12:47.283041 kubelet[2471]: I0113 21:12:47.282978 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9qjdk" podStartSLOduration=10.268351645 podStartE2EDuration="26.282922108s" podCreationTimestamp="2025-01-13 21:12:21 +0000 UTC" firstStartedPulling="2025-01-13 21:12:24.538963999 +0000 UTC m=+4.539052968" lastFinishedPulling="2025-01-13 21:12:40.55353445 +0000 UTC m=+20.553623431" observedRunningTime="2025-01-13 21:12:47.282692248 +0000 UTC m=+27.282781253" watchObservedRunningTime="2025-01-13 21:12:47.282922108 +0000 UTC m=+27.283011101" Jan 13 21:12:47.988029 kubelet[2471]: E0113 21:12:47.987938 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:48.988874 kubelet[2471]: E0113 21:12:48.988811 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:49.106109 systemd-networkd[1933]: cilium_host: Link UP Jan 13 21:12:49.106429 systemd-networkd[1933]: cilium_net: Link UP Jan 13 21:12:49.108193 systemd-networkd[1933]: cilium_net: Gained carrier Jan 13 21:12:49.108556 systemd-networkd[1933]: cilium_host: Gained carrier Jan 13 21:12:49.112439 (udev-worker)[2905]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:49.113741 (udev-worker)[2904]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:49.162225 systemd-networkd[1933]: cilium_net: Gained IPv6LL Jan 13 21:12:49.293341 systemd-networkd[1933]: cilium_vxlan: Link UP Jan 13 21:12:49.293356 systemd-networkd[1933]: cilium_vxlan: Gained carrier Jan 13 21:12:49.513280 systemd-networkd[1933]: cilium_host: Gained IPv6LL Jan 13 21:12:49.755384 kernel: NET: Registered PF_ALG protocol family Jan 13 21:12:49.989311 kubelet[2471]: E0113 21:12:49.989229 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:50.521291 systemd-networkd[1933]: cilium_vxlan: Gained IPv6LL Jan 13 21:12:50.988315 systemd-networkd[1933]: lxc_health: Link UP Jan 13 21:12:50.990152 kubelet[2471]: E0113 21:12:50.990100 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:50.997388 systemd-networkd[1933]: lxc_health: Gained carrier Jan 13 21:12:51.486860 kubelet[2471]: I0113 21:12:51.486786 2471 topology_manager.go:215] "Topology Admit Handler" podUID="4605cb4d-33d0-4fe4-a22f-6206079826b6" podNamespace="default" podName="nginx-deployment-6d5f899847-rbczc" Jan 13 21:12:51.501420 systemd[1]: Created slice kubepods-besteffort-pod4605cb4d_33d0_4fe4_a22f_6206079826b6.slice - libcontainer container kubepods-besteffort-pod4605cb4d_33d0_4fe4_a22f_6206079826b6.slice. 
Jan 13 21:12:51.506100 kubelet[2471]: I0113 21:12:51.506044 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc8j\" (UniqueName: \"kubernetes.io/projected/4605cb4d-33d0-4fe4-a22f-6206079826b6-kube-api-access-cbc8j\") pod \"nginx-deployment-6d5f899847-rbczc\" (UID: \"4605cb4d-33d0-4fe4-a22f-6206079826b6\") " pod="default/nginx-deployment-6d5f899847-rbczc" Jan 13 21:12:51.809381 containerd[2018]: time="2025-01-13T21:12:51.808733158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rbczc,Uid:4605cb4d-33d0-4fe4-a22f-6206079826b6,Namespace:default,Attempt:0,}" Jan 13 21:12:51.892744 systemd-networkd[1933]: lxc408ced3a70b8: Link UP Jan 13 21:12:51.895829 (udev-worker)[3168]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:51.901045 kernel: eth0: renamed from tmpd88ea Jan 13 21:12:51.909219 systemd-networkd[1933]: lxc408ced3a70b8: Gained carrier Jan 13 21:12:51.990762 kubelet[2471]: E0113 21:12:51.990637 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:52.889197 systemd-networkd[1933]: lxc_health: Gained IPv6LL Jan 13 21:12:52.991961 kubelet[2471]: E0113 21:12:52.991881 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:53.084119 systemd-networkd[1933]: lxc408ced3a70b8: Gained IPv6LL Jan 13 21:12:53.992121 kubelet[2471]: E0113 21:12:53.992052 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:54.992762 kubelet[2471]: E0113 21:12:54.992689 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:55.993422 kubelet[2471]: E0113 21:12:55.993361 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:56.029770 ntpd[1989]: Listen normally on 7 cilium_host 192.168.1.143:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 7 cilium_host 192.168.1.143:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 8 cilium_net [fe80::b4af:11ff:fefb:f1a2%3]:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 9 cilium_host [fe80::4409:3dff:fede:c65f%4]:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 10 cilium_vxlan [fe80::b872:adff:fe1e:4ce5%5]:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 11 lxc_health [fe80::bca7:fcff:fe7a:9826%7]:123 Jan 13 21:12:56.031194 ntpd[1989]: 13 Jan 21:12:56 ntpd[1989]: Listen normally on 12 lxc408ced3a70b8 [fe80::cef:8eff:fea9:bd1a%9]:123 Jan 13 21:12:56.029910 ntpd[1989]: Listen normally on 8 cilium_net [fe80::b4af:11ff:fefb:f1a2%3]:123 Jan 13 21:12:56.030023 ntpd[1989]: Listen normally on 9 cilium_host [fe80::4409:3dff:fede:c65f%4]:123 Jan 13 21:12:56.030098 ntpd[1989]: Listen normally on 10 cilium_vxlan [fe80::b872:adff:fe1e:4ce5%5]:123 Jan 13 21:12:56.030164 ntpd[1989]: Listen normally on 11 lxc_health [fe80::bca7:fcff:fe7a:9826%7]:123 Jan 13 21:12:56.030231 ntpd[1989]: Listen normally on 12 lxc408ced3a70b8 [fe80::cef:8eff:fea9:bd1a%9]:123 Jan 13 21:12:56.994108 kubelet[2471]: E0113 21:12:56.994044 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:57.481164 update_engine[1994]: I20250113 21:12:57.481060 1994 update_attempter.cc:509] Updating boot flags... Jan 13 21:12:57.570029 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3544) Jan 13 21:12:57.995281 kubelet[2471]: E0113 21:12:57.995205 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:58.996162 kubelet[2471]: E0113 21:12:58.996093 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:12:59.938074 containerd[2018]: time="2025-01-13T21:12:59.937877575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:59.938823 containerd[2018]: time="2025-01-13T21:12:59.938034499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:59.938823 containerd[2018]: time="2025-01-13T21:12:59.938125615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:59.939377 containerd[2018]: time="2025-01-13T21:12:59.939273307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:59.975321 systemd[1]: Started cri-containerd-d88ea0c8512b3984d2e677e0b28533ce35bdc26518979042e734cdb3c8da9a2d.scope - libcontainer container d88ea0c8512b3984d2e677e0b28533ce35bdc26518979042e734cdb3c8da9a2d. Jan 13 21:12:59.997365 kubelet[2471]: E0113 21:12:59.997265 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:00.035587 containerd[2018]: time="2025-01-13T21:13:00.035320527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rbczc,Uid:4605cb4d-33d0-4fe4-a22f-6206079826b6,Namespace:default,Attempt:0,} returns sandbox id \"d88ea0c8512b3984d2e677e0b28533ce35bdc26518979042e734cdb3c8da9a2d\"" Jan 13 21:13:00.038573 containerd[2018]: time="2025-01-13T21:13:00.038520855Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:13:00.965149 kubelet[2471]: E0113 21:13:00.965089 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:00.997649 kubelet[2471]: E0113 21:13:00.997579 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:01.997946 kubelet[2471]: E0113 21:13:01.997873 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:02.998542 kubelet[2471]: E0113 21:13:02.998498 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:03.490407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount359837227.mount: Deactivated successfully. 
Jan 13 21:13:03.999607 kubelet[2471]: E0113 21:13:03.999462 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:04.925868 containerd[2018]: time="2025-01-13T21:13:04.925786715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:04.929417 containerd[2018]: time="2025-01-13T21:13:04.929339519Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 21:13:04.933508 containerd[2018]: time="2025-01-13T21:13:04.933426240Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:04.940418 containerd[2018]: time="2025-01-13T21:13:04.940299228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:04.942481 containerd[2018]: time="2025-01-13T21:13:04.942317136Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 4.903734601s" Jan 13 21:13:04.942481 containerd[2018]: time="2025-01-13T21:13:04.942374184Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 21:13:04.945296 containerd[2018]: time="2025-01-13T21:13:04.945235428Z" level=info msg="CreateContainer within sandbox \"d88ea0c8512b3984d2e677e0b28533ce35bdc26518979042e734cdb3c8da9a2d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:13:04.982540 containerd[2018]: time="2025-01-13T21:13:04.982455348Z" level=info msg="CreateContainer within sandbox \"d88ea0c8512b3984d2e677e0b28533ce35bdc26518979042e734cdb3c8da9a2d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a2143bb47678b84d1f974b4ed6088b7a9e5414dc2bfe2ec1a6e5efbd4b783560\"" Jan 13 21:13:04.984036 containerd[2018]: time="2025-01-13T21:13:04.983655492Z" level=info msg="StartContainer for \"a2143bb47678b84d1f974b4ed6088b7a9e5414dc2bfe2ec1a6e5efbd4b783560\"" Jan 13 21:13:04.999821 kubelet[2471]: E0113 21:13:04.999742 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:05.038311 systemd[1]: Started cri-containerd-a2143bb47678b84d1f974b4ed6088b7a9e5414dc2bfe2ec1a6e5efbd4b783560.scope - libcontainer container a2143bb47678b84d1f974b4ed6088b7a9e5414dc2bfe2ec1a6e5efbd4b783560. 
Jan 13 21:13:05.081601 containerd[2018]: time="2025-01-13T21:13:05.081400832Z" level=info msg="StartContainer for \"a2143bb47678b84d1f974b4ed6088b7a9e5414dc2bfe2ec1a6e5efbd4b783560\" returns successfully" Jan 13 21:13:05.344752 kubelet[2471]: I0113 21:13:05.344510 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-rbczc" podStartSLOduration=9.439055201 podStartE2EDuration="14.344456518s" podCreationTimestamp="2025-01-13 21:12:51 +0000 UTC" firstStartedPulling="2025-01-13 21:13:00.037523415 +0000 UTC m=+40.037612396" lastFinishedPulling="2025-01-13 21:13:04.942924744 +0000 UTC m=+44.943013713" observedRunningTime="2025-01-13 21:13:05.343873174 +0000 UTC m=+45.343962167" watchObservedRunningTime="2025-01-13 21:13:05.344456518 +0000 UTC m=+45.344545511" Jan 13 21:13:06.000137 kubelet[2471]: E0113 21:13:06.000076 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:07.000446 kubelet[2471]: E0113 21:13:07.000380 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:08.000778 kubelet[2471]: E0113 21:13:08.000720 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:09.001442 kubelet[2471]: E0113 21:13:09.001360 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:10.001804 kubelet[2471]: E0113 21:13:10.001744 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:11.001939 kubelet[2471]: E0113 21:13:11.001893 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:11.914815 kubelet[2471]: I0113 21:13:11.914748 2471 topology_manager.go:215] "Topology Admit Handler" podUID="36b131a5-cbfe-4c38-aa5b-26737e143ea6" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:13:11.925857 systemd[1]: Created slice kubepods-besteffort-pod36b131a5_cbfe_4c38_aa5b_26737e143ea6.slice - libcontainer container kubepods-besteffort-pod36b131a5_cbfe_4c38_aa5b_26737e143ea6.slice. 
Jan 13 21:13:11.934590 kubelet[2471]: I0113 21:13:11.934411 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkw5\" (UniqueName: \"kubernetes.io/projected/36b131a5-cbfe-4c38-aa5b-26737e143ea6-kube-api-access-2gkw5\") pod \"nfs-server-provisioner-0\" (UID: \"36b131a5-cbfe-4c38-aa5b-26737e143ea6\") " pod="default/nfs-server-provisioner-0" Jan 13 21:13:11.934590 kubelet[2471]: I0113 21:13:11.934479 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/36b131a5-cbfe-4c38-aa5b-26737e143ea6-data\") pod \"nfs-server-provisioner-0\" (UID: \"36b131a5-cbfe-4c38-aa5b-26737e143ea6\") " pod="default/nfs-server-provisioner-0" Jan 13 21:13:12.003022 kubelet[2471]: E0113 21:13:12.002924 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:12.233256 containerd[2018]: time="2025-01-13T21:13:12.232669132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:36b131a5-cbfe-4c38-aa5b-26737e143ea6,Namespace:default,Attempt:0,}" Jan 13 21:13:12.281901 systemd-networkd[1933]: lxc4040b1050b7b: Link UP Jan 13 21:13:12.289047 kernel: eth0: renamed from tmp71b68 Jan 13 21:13:12.292847 systemd-networkd[1933]: lxc4040b1050b7b: Gained carrier Jan 13 21:13:12.293947 (udev-worker)[3758]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:13:12.650132 containerd[2018]: time="2025-01-13T21:13:12.649937934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:12.650132 containerd[2018]: time="2025-01-13T21:13:12.650087706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:12.650132 containerd[2018]: time="2025-01-13T21:13:12.650126922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:12.650672 containerd[2018]: time="2025-01-13T21:13:12.650309142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:12.694320 systemd[1]: Started cri-containerd-71b68de66048f9a76775cb921152e074682a617dafaa36c16fcc4637a604b243.scope - libcontainer container 71b68de66048f9a76775cb921152e074682a617dafaa36c16fcc4637a604b243. 
Jan 13 21:13:12.755686 containerd[2018]: time="2025-01-13T21:13:12.755544594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:36b131a5-cbfe-4c38-aa5b-26737e143ea6,Namespace:default,Attempt:0,} returns sandbox id \"71b68de66048f9a76775cb921152e074682a617dafaa36c16fcc4637a604b243\"" Jan 13 21:13:12.759630 containerd[2018]: time="2025-01-13T21:13:12.759234618Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:13:13.003836 kubelet[2471]: E0113 21:13:13.003665 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:13.945478 systemd-networkd[1933]: lxc4040b1050b7b: Gained IPv6LL Jan 13 21:13:14.004806 kubelet[2471]: E0113 21:13:14.004738 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:15.005252 kubelet[2471]: E0113 21:13:15.005031 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:15.602849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714353816.mount: Deactivated successfully. Jan 13 21:13:16.005607 kubelet[2471]: E0113 21:13:16.005367 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:16.030291 ntpd[1989]: Listen normally on 13 lxc4040b1050b7b [fe80::7cf1:94ff:fe61:fdb0%11]:123 Jan 13 21:13:16.030794 ntpd[1989]: 13 Jan 21:13:16 ntpd[1989]: Listen normally on 13 lxc4040b1050b7b [fe80::7cf1:94ff:fe61:fdb0%11]:123 Jan 13 21:13:17.006431 kubelet[2471]: E0113 21:13:17.006374 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:18.008236 kubelet[2471]: E0113 21:13:18.008160 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:18.537106 containerd[2018]: time="2025-01-13T21:13:18.536250251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.538324 containerd[2018]: time="2025-01-13T21:13:18.538254335Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 13 21:13:18.540170 containerd[2018]: time="2025-01-13T21:13:18.540083627Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.546024 containerd[2018]: time="2025-01-13T21:13:18.545903963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:18.548171 containerd[2018]: time="2025-01-13T21:13:18.547946315Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.788632761s" Jan 13 21:13:18.548171 containerd[2018]: 
time="2025-01-13T21:13:18.548026691Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 21:13:18.552102 containerd[2018]: time="2025-01-13T21:13:18.552045215Z" level=info msg="CreateContainer within sandbox \"71b68de66048f9a76775cb921152e074682a617dafaa36c16fcc4637a604b243\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:13:18.577123 containerd[2018]: time="2025-01-13T21:13:18.577048391Z" level=info msg="CreateContainer within sandbox \"71b68de66048f9a76775cb921152e074682a617dafaa36c16fcc4637a604b243\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9f5e7475fe4e9e351094e7b78bfdb16e07548c8bb3f061da64ab52e23ac21031\"" Jan 13 21:13:18.577950 containerd[2018]: time="2025-01-13T21:13:18.577884239Z" level=info msg="StartContainer for \"9f5e7475fe4e9e351094e7b78bfdb16e07548c8bb3f061da64ab52e23ac21031\"" Jan 13 21:13:18.631297 systemd[1]: Started cri-containerd-9f5e7475fe4e9e351094e7b78bfdb16e07548c8bb3f061da64ab52e23ac21031.scope - libcontainer container 9f5e7475fe4e9e351094e7b78bfdb16e07548c8bb3f061da64ab52e23ac21031. Jan 13 21:13:18.678066 containerd[2018]: time="2025-01-13T21:13:18.677961276Z" level=info msg="StartContainer for \"9f5e7475fe4e9e351094e7b78bfdb16e07548c8bb3f061da64ab52e23ac21031\" returns successfully" Jan 13 21:13:19.008794 kubelet[2471]: E0113 21:13:19.008719 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:20.009360 kubelet[2471]: E0113 21:13:20.009299 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:20.965469 kubelet[2471]: E0113 21:13:20.965413 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:21.009459 kubelet[2471]: E0113 21:13:21.009401 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:22.010712 kubelet[2471]: E0113 21:13:22.010591 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:23.011307 kubelet[2471]: E0113 21:13:23.011249 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:24.011766 kubelet[2471]: E0113 21:13:24.011695 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:25.012893 kubelet[2471]: E0113 21:13:25.012825 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:26.013741 kubelet[2471]: E0113 21:13:26.013643 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:27.014128 kubelet[2471]: E0113 21:13:27.014074 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:28.014762 kubelet[2471]: E0113 21:13:28.014699 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:28.842480 kubelet[2471]: I0113 21:13:28.842414 2471 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.051598765 podStartE2EDuration="17.842357794s" podCreationTimestamp="2025-01-13 21:13:11 +0000 UTC" firstStartedPulling="2025-01-13 21:13:12.757787118 +0000 UTC m=+52.757876099" lastFinishedPulling="2025-01-13 21:13:18.548546147 +0000 UTC m=+58.548635128" observedRunningTime="2025-01-13 21:13:19.387720707 +0000 UTC m=+59.387809772" watchObservedRunningTime="2025-01-13 21:13:28.842357794 +0000 UTC m=+68.842446787" Jan 13 21:13:28.842739 kubelet[2471]: I0113 21:13:28.842608 2471 topology_manager.go:215] "Topology Admit Handler" podUID="ad48539d-5d3f-4fbf-bbd3-5dc12c47892e" podNamespace="default" podName="test-pod-1" Jan 13 21:13:28.854053 systemd[1]: Created slice kubepods-besteffort-podad48539d_5d3f_4fbf_bbd3_5dc12c47892e.slice - libcontainer container kubepods-besteffort-podad48539d_5d3f_4fbf_bbd3_5dc12c47892e.slice. Jan 13 21:13:29.015313 kubelet[2471]: E0113 21:13:29.015249 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:29.034908 kubelet[2471]: I0113 21:13:29.034570 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8acfef98-4654-485a-af74-392564814dd5\" (UniqueName: \"kubernetes.io/nfs/ad48539d-5d3f-4fbf-bbd3-5dc12c47892e-pvc-8acfef98-4654-485a-af74-392564814dd5\") pod \"test-pod-1\" (UID: \"ad48539d-5d3f-4fbf-bbd3-5dc12c47892e\") " pod="default/test-pod-1" Jan 13 21:13:29.034908 kubelet[2471]: I0113 21:13:29.034643 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxcp\" (UniqueName: \"kubernetes.io/projected/ad48539d-5d3f-4fbf-bbd3-5dc12c47892e-kube-api-access-txxcp\") pod \"test-pod-1\" (UID: \"ad48539d-5d3f-4fbf-bbd3-5dc12c47892e\") " pod="default/test-pod-1" Jan 13 21:13:29.174040 kernel: FS-Cache: Loaded Jan 13 21:13:29.218874 kernel: RPC: Registered named UNIX socket transport module. Jan 13 21:13:29.219082 kernel: RPC: Registered udp transport module. Jan 13 21:13:29.219127 kernel: RPC: Registered tcp transport module. Jan 13 21:13:29.221308 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:13:29.221372 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 21:13:29.543722 kernel: NFS: Registering the id_resolver key type Jan 13 21:13:29.543849 kernel: Key type id_resolver registered Jan 13 21:13:29.544812 kernel: Key type id_legacy registered Jan 13 21:13:29.582536 nfsidmap[3946]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 21:13:29.589441 nfsidmap[3947]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 13 21:13:29.759742 containerd[2018]: time="2025-01-13T21:13:29.759598907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ad48539d-5d3f-4fbf-bbd3-5dc12c47892e,Namespace:default,Attempt:0,}" Jan 13 21:13:29.805563 (udev-worker)[3936]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:13:29.810813 systemd-networkd[1933]: lxc3fe2c15103ef: Link UP Jan 13 21:13:29.815065 kernel: eth0: renamed from tmp6c16c Jan 13 21:13:29.825351 systemd-networkd[1933]: lxc3fe2c15103ef: Gained carrier Jan 13 21:13:29.826832 (udev-worker)[3943]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:13:30.015623 kubelet[2471]: E0113 21:13:30.015581 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:30.174495 containerd[2018]: time="2025-01-13T21:13:30.174333417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:30.174495 containerd[2018]: time="2025-01-13T21:13:30.174422493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:30.174761 containerd[2018]: time="2025-01-13T21:13:30.174464613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:30.174761 containerd[2018]: time="2025-01-13T21:13:30.174636105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:30.213327 systemd[1]: Started cri-containerd-6c16c6c52a6fbae41bd6143705803eaf7cfc0588a88995fc4a08f2ee32b7449b.scope - libcontainer container 6c16c6c52a6fbae41bd6143705803eaf7cfc0588a88995fc4a08f2ee32b7449b. Jan 13 21:13:30.275105 containerd[2018]: time="2025-01-13T21:13:30.275043573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ad48539d-5d3f-4fbf-bbd3-5dc12c47892e,Namespace:default,Attempt:0,} returns sandbox id \"6c16c6c52a6fbae41bd6143705803eaf7cfc0588a88995fc4a08f2ee32b7449b\"" Jan 13 21:13:30.278022 containerd[2018]: time="2025-01-13T21:13:30.277946997Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:13:30.612763 containerd[2018]: time="2025-01-13T21:13:30.612255023Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:30.614319 containerd[2018]: time="2025-01-13T21:13:30.614246063Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:13:30.620195 containerd[2018]: time="2025-01-13T21:13:30.620046131Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 342.027782ms" Jan 13 21:13:30.620195 containerd[2018]: time="2025-01-13T21:13:30.620103551Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 21:13:30.622729 containerd[2018]: time="2025-01-13T21:13:30.622583195Z" level=info msg="CreateContainer within sandbox \"6c16c6c52a6fbae41bd6143705803eaf7cfc0588a88995fc4a08f2ee32b7449b\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:13:30.660012 containerd[2018]: time="2025-01-13T21:13:30.659934275Z" level=info msg="CreateContainer within sandbox \"6c16c6c52a6fbae41bd6143705803eaf7cfc0588a88995fc4a08f2ee32b7449b\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c96dd510d0914660264f0a4af430ae56784118181215941524407938fbcc6200\"" Jan 13 21:13:30.660970 containerd[2018]: time="2025-01-13T21:13:30.660903611Z" level=info msg="StartContainer for \"c96dd510d0914660264f0a4af430ae56784118181215941524407938fbcc6200\"" Jan 13 21:13:30.703332 systemd[1]: Started 
cri-containerd-c96dd510d0914660264f0a4af430ae56784118181215941524407938fbcc6200.scope - libcontainer container c96dd510d0914660264f0a4af430ae56784118181215941524407938fbcc6200. Jan 13 21:13:30.747450 containerd[2018]: time="2025-01-13T21:13:30.747290112Z" level=info msg="StartContainer for \"c96dd510d0914660264f0a4af430ae56784118181215941524407938fbcc6200\" returns successfully" Jan 13 21:13:31.018544 kubelet[2471]: E0113 21:13:31.018471 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:31.481517 systemd-networkd[1933]: lxc3fe2c15103ef: Gained IPv6LL Jan 13 21:13:32.019680 kubelet[2471]: E0113 21:13:32.019624 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:33.020111 kubelet[2471]: E0113 21:13:33.020065 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:34.020546 kubelet[2471]: E0113 21:13:34.020484 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:34.029808 ntpd[1989]: Listen normally on 14 lxc3fe2c15103ef [fe80::3851:e5ff:fe92:c4e6%13]:123 Jan 13 21:13:34.030406 ntpd[1989]: 13 Jan 21:13:34 ntpd[1989]: Listen normally on 14 lxc3fe2c15103ef [fe80::3851:e5ff:fe92:c4e6%13]:123 Jan 13 21:13:35.021098 kubelet[2471]: E0113 21:13:35.021023 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:36.022096 kubelet[2471]: E0113 21:13:36.022036 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:37.022748 kubelet[2471]: E0113 21:13:37.022690 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:38.023711 kubelet[2471]: E0113 21:13:38.023652 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:38.598179 kubelet[2471]: I0113 21:13:38.598011 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=26.254682677 podStartE2EDuration="26.597931627s" podCreationTimestamp="2025-01-13 21:13:12 +0000 UTC" firstStartedPulling="2025-01-13 21:13:30.277133433 +0000 UTC m=+70.277222402" lastFinishedPulling="2025-01-13 21:13:30.620382347 +0000 UTC m=+70.620471352" observedRunningTime="2025-01-13 21:13:31.413372591 +0000 UTC m=+71.413461596" watchObservedRunningTime="2025-01-13 21:13:38.597931627 +0000 UTC m=+78.598020620" Jan 13 21:13:38.642670 containerd[2018]: time="2025-01-13T21:13:38.642571807Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:13:38.659536 containerd[2018]: time="2025-01-13T21:13:38.659316199Z" level=info msg="StopContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" with timeout 2 (s)" Jan 13 21:13:38.659842 containerd[2018]: time="2025-01-13T21:13:38.659806447Z" level=info msg="Stop container \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" with signal terminated" Jan 13 21:13:38.673579 systemd-networkd[1933]: lxc_health: Link 
DOWN Jan 13 21:13:38.673595 systemd-networkd[1933]: lxc_health: Lost carrier Jan 13 21:13:38.698562 systemd[1]: cri-containerd-579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323.scope: Deactivated successfully. Jan 13 21:13:38.699590 systemd[1]: cri-containerd-579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323.scope: Consumed 13.992s CPU time. Jan 13 21:13:38.734830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323-rootfs.mount: Deactivated successfully. Jan 13 21:13:39.024358 kubelet[2471]: E0113 21:13:39.024289 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:39.535709 containerd[2018]: time="2025-01-13T21:13:39.535613119Z" level=info msg="shim disconnected" id=579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323 namespace=k8s.io Jan 13 21:13:39.535709 containerd[2018]: time="2025-01-13T21:13:39.535692463Z" level=warning msg="cleaning up after shim disconnected" id=579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323 namespace=k8s.io Jan 13 21:13:39.535945 containerd[2018]: time="2025-01-13T21:13:39.535715215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:39.561643 containerd[2018]: time="2025-01-13T21:13:39.561568688Z" level=info msg="StopContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" returns successfully" Jan 13 21:13:39.562698 containerd[2018]: time="2025-01-13T21:13:39.562631720Z" level=info msg="StopPodSandbox for \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\"" Jan 13 21:13:39.562846 containerd[2018]: time="2025-01-13T21:13:39.562699268Z" level=info msg="Container to stop \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:39.562846 containerd[2018]: time="2025-01-13T21:13:39.562727408Z" level=info msg="Container to stop \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:39.562846 containerd[2018]: time="2025-01-13T21:13:39.562750640Z" level=info msg="Container to stop \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:39.562846 containerd[2018]: time="2025-01-13T21:13:39.562787444Z" level=info msg="Container to stop \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:39.562846 containerd[2018]: time="2025-01-13T21:13:39.562812548Z" level=info msg="Container to stop \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:13:39.566869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd-shm.mount: Deactivated successfully. Jan 13 21:13:39.575370 systemd[1]: cri-containerd-646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd.scope: Deactivated successfully. Jan 13 21:13:39.612371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd-rootfs.mount: Deactivated successfully. 
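The shutdown above follows the usual two-phase stop: containerd logs "StopContainer ... with timeout 2 (s)", sends SIGTERM ("with signal terminated"), and only escalates if the container has not exited when the grace period runs out, after which the systemd scope is deactivated. The sketch below shows that generic pattern for a plain POSIX process; it is illustrative only and is not containerd's or the kubelet's code path.

import os
import signal
import time

def stop_with_timeout(pid: int, timeout: float = 2.0) -> None:
    # Ask the process to exit, then force-kill it if it is still around
    # once the grace period (2s in the log entry above) has elapsed.
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # probe only; raises once the process is gone
        except ProcessLookupError:
            return                       # exited within the grace period
        time.sleep(0.05)
    try:
        os.kill(pid, signal.SIGKILL)     # grace period expired
    except ProcessLookupError:
        pass                             # it exited just before the escalation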
Jan 13 21:13:39.617232 containerd[2018]: time="2025-01-13T21:13:39.617135312Z" level=info msg="shim disconnected" id=646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd namespace=k8s.io Jan 13 21:13:39.617232 containerd[2018]: time="2025-01-13T21:13:39.617214152Z" level=warning msg="cleaning up after shim disconnected" id=646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd namespace=k8s.io Jan 13 21:13:39.617607 containerd[2018]: time="2025-01-13T21:13:39.617236760Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:39.638966 containerd[2018]: time="2025-01-13T21:13:39.638753072Z" level=info msg="TearDown network for sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" successfully" Jan 13 21:13:39.638966 containerd[2018]: time="2025-01-13T21:13:39.638805668Z" level=info msg="StopPodSandbox for \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" returns successfully" Jan 13 21:13:39.795941 kubelet[2471]: I0113 21:13:39.795781 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22wc2\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-kube-api-access-22wc2\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.795941 kubelet[2471]: I0113 21:13:39.795860 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hubble-tls\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.795941 kubelet[2471]: I0113 21:13:39.795902 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-xtables-lock\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.795964 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-config-path\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.796032 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-cgroup\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.796076 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-run\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.796117 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-lib-modules\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.796161 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-clustermesh-secrets\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796232 kubelet[2471]: I0113 21:13:39.796209 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-kernel\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796279 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-etc-cni-netd\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796318 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-bpf-maps\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796358 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cni-path\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796398 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-net\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796437 2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hostproc\") pod \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\" (UID: \"cb7f921a-1941-467c-8f69-fd5d81bdb0e4\") " Jan 13 21:13:39.796557 kubelet[2471]: I0113 21:13:39.796515 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.799031 kubelet[2471]: I0113 21:13:39.796908 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.803264 kubelet[2471]: I0113 21:13:39.803208 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:39.804892 systemd[1]: var-lib-kubelet-pods-cb7f921a\x2d1941\x2d467c\x2d8f69\x2dfd5d81bdb0e4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:13:39.811019 kubelet[2471]: I0113 21:13:39.803866 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.811019 kubelet[2471]: I0113 21:13:39.810100 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.811019 kubelet[2471]: I0113 21:13:39.810148 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.811019 kubelet[2471]: I0113 21:13:39.810188 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.811019 kubelet[2471]: I0113 21:13:39.810215 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.811403 kubelet[2471]: I0113 21:13:39.810244 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.814794 kubelet[2471]: I0113 21:13:39.813159 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.814794 kubelet[2471]: I0113 21:13:39.813295 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:39.813508 systemd[1]: var-lib-kubelet-pods-cb7f921a\x2d1941\x2d467c\x2d8f69\x2dfd5d81bdb0e4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d22wc2.mount: Deactivated successfully. Jan 13 21:13:39.819011 kubelet[2471]: I0113 21:13:39.818258 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-kube-api-access-22wc2" (OuterVolumeSpecName: "kube-api-access-22wc2") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "kube-api-access-22wc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:39.820568 kubelet[2471]: I0113 21:13:39.820192 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:13:39.820568 kubelet[2471]: I0113 21:13:39.820479 2471 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb7f921a-1941-467c-8f69-fd5d81bdb0e4" (UID: "cb7f921a-1941-467c-8f69-fd5d81bdb0e4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:13:39.821526 systemd[1]: var-lib-kubelet-pods-cb7f921a\x2d1941\x2d467c\x2d8f69\x2dfd5d81bdb0e4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897134 2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-22wc2\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-kube-api-access-22wc2\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897189 2471 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hubble-tls\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897217 2471 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-lib-modules\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897242 2471 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-xtables-lock\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897269 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-config-path\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897293 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-cgroup\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897317 kubelet[2471]: I0113 21:13:39.897316 2471 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cilium-run\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897340 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-kernel\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897364 2471 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-etc-cni-netd\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897401 2471 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-clustermesh-secrets\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897425 2471 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-cni-path\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897449 2471 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-host-proc-sys-net\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897471 2471 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-hostproc\") on node \"172.31.31.152\" 
DevicePath \"\"" Jan 13 21:13:39.897770 kubelet[2471]: I0113 21:13:39.897496 2471 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb7f921a-1941-467c-8f69-fd5d81bdb0e4-bpf-maps\") on node \"172.31.31.152\" DevicePath \"\"" Jan 13 21:13:40.025480 kubelet[2471]: E0113 21:13:40.025419 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:40.422910 kubelet[2471]: I0113 21:13:40.422861 2471 scope.go:117] "RemoveContainer" containerID="579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323" Jan 13 21:13:40.425799 containerd[2018]: time="2025-01-13T21:13:40.425733092Z" level=info msg="RemoveContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\"" Jan 13 21:13:40.431579 containerd[2018]: time="2025-01-13T21:13:40.431490164Z" level=info msg="RemoveContainer for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" returns successfully" Jan 13 21:13:40.432367 kubelet[2471]: I0113 21:13:40.431944 2471 scope.go:117] "RemoveContainer" containerID="d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf" Jan 13 21:13:40.433931 containerd[2018]: time="2025-01-13T21:13:40.433870220Z" level=info msg="RemoveContainer for \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\"" Jan 13 21:13:40.438494 systemd[1]: Removed slice kubepods-burstable-podcb7f921a_1941_467c_8f69_fd5d81bdb0e4.slice - libcontainer container kubepods-burstable-podcb7f921a_1941_467c_8f69_fd5d81bdb0e4.slice. Jan 13 21:13:40.438743 systemd[1]: kubepods-burstable-podcb7f921a_1941_467c_8f69_fd5d81bdb0e4.slice: Consumed 14.133s CPU time. Jan 13 21:13:40.442957 containerd[2018]: time="2025-01-13T21:13:40.442850144Z" level=info msg="RemoveContainer for \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\" returns successfully" Jan 13 21:13:40.443580 kubelet[2471]: I0113 21:13:40.443234 2471 scope.go:117] "RemoveContainer" containerID="b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed" Jan 13 21:13:40.446296 containerd[2018]: time="2025-01-13T21:13:40.446065964Z" level=info msg="RemoveContainer for \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\"" Jan 13 21:13:40.450832 containerd[2018]: time="2025-01-13T21:13:40.450775688Z" level=info msg="RemoveContainer for \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\" returns successfully" Jan 13 21:13:40.451229 kubelet[2471]: I0113 21:13:40.451177 2471 scope.go:117] "RemoveContainer" containerID="2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924" Jan 13 21:13:40.452952 containerd[2018]: time="2025-01-13T21:13:40.452834480Z" level=info msg="RemoveContainer for \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\"" Jan 13 21:13:40.458427 containerd[2018]: time="2025-01-13T21:13:40.458351612Z" level=info msg="RemoveContainer for \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\" returns successfully" Jan 13 21:13:40.458849 kubelet[2471]: I0113 21:13:40.458711 2471 scope.go:117] "RemoveContainer" containerID="9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db" Jan 13 21:13:40.461165 containerd[2018]: time="2025-01-13T21:13:40.461064272Z" level=info msg="RemoveContainer for \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\"" Jan 13 21:13:40.466341 containerd[2018]: time="2025-01-13T21:13:40.466269704Z" level=info msg="RemoveContainer for 
\"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\" returns successfully" Jan 13 21:13:40.466746 kubelet[2471]: I0113 21:13:40.466627 2471 scope.go:117] "RemoveContainer" containerID="579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323" Jan 13 21:13:40.467430 containerd[2018]: time="2025-01-13T21:13:40.467151584Z" level=error msg="ContainerStatus for \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\": not found" Jan 13 21:13:40.467567 kubelet[2471]: E0113 21:13:40.467402 2471 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\": not found" containerID="579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323" Jan 13 21:13:40.467567 kubelet[2471]: I0113 21:13:40.467530 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323"} err="failed to get container status \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\": rpc error: code = NotFound desc = an error occurred when try to find container \"579d8365b7e5e77bc8a8d21f0612fcb3f0f517a7257cc83b9addb33f6e660323\": not found" Jan 13 21:13:40.467567 kubelet[2471]: I0113 21:13:40.467559 2471 scope.go:117] "RemoveContainer" containerID="d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf" Jan 13 21:13:40.468000 containerd[2018]: time="2025-01-13T21:13:40.467862536Z" level=error msg="ContainerStatus for \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\": not found" Jan 13 21:13:40.468287 kubelet[2471]: E0113 21:13:40.468252 2471 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\": not found" containerID="d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf" Jan 13 21:13:40.468382 kubelet[2471]: I0113 21:13:40.468309 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf"} err="failed to get container status \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5343498f9dc9b19be0463c0f82413df55cdcc2ce7c07d3b6ea2b9ab8fb7d6bf\": not found" Jan 13 21:13:40.468382 kubelet[2471]: I0113 21:13:40.468332 2471 scope.go:117] "RemoveContainer" containerID="b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed" Jan 13 21:13:40.468697 containerd[2018]: time="2025-01-13T21:13:40.468622520Z" level=error msg="ContainerStatus for \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\": not found" Jan 13 21:13:40.469154 kubelet[2471]: E0113 21:13:40.468900 2471 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\": not found" containerID="b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed" Jan 13 21:13:40.469154 kubelet[2471]: I0113 21:13:40.468957 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed"} err="failed to get container status \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\": rpc error: code = NotFound desc = an error occurred when try to find container \"b58d5798522d3b5fd2869fa15bd2af1e2aec9ccb263deb95d0096918331c3fed\": not found" Jan 13 21:13:40.469154 kubelet[2471]: I0113 21:13:40.469023 2471 scope.go:117] "RemoveContainer" containerID="2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924" Jan 13 21:13:40.469369 containerd[2018]: time="2025-01-13T21:13:40.469328672Z" level=error msg="ContainerStatus for \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\": not found" Jan 13 21:13:40.469570 kubelet[2471]: E0113 21:13:40.469515 2471 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\": not found" containerID="2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924" Jan 13 21:13:40.469721 kubelet[2471]: I0113 21:13:40.469575 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924"} err="failed to get container status \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e2683dbf03fb45bd065fc359b72553aa4a5c0813cb91ff74e32c4557e0f8924\": not found" Jan 13 21:13:40.469721 kubelet[2471]: I0113 21:13:40.469600 2471 scope.go:117] "RemoveContainer" containerID="9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db" Jan 13 21:13:40.470197 containerd[2018]: time="2025-01-13T21:13:40.469880468Z" level=error msg="ContainerStatus for \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\": not found" Jan 13 21:13:40.470506 kubelet[2471]: E0113 21:13:40.470407 2471 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\": not found" containerID="9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db" Jan 13 21:13:40.470506 kubelet[2471]: I0113 21:13:40.470483 2471 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db"} err="failed to get container status \"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"9a944c9a3cec27edb77c6fba271da95cb4743e15f176360ed9f9a0955ddad8db\": not found" Jan 13 21:13:40.965312 kubelet[2471]: E0113 21:13:40.965244 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:41.025773 kubelet[2471]: E0113 21:13:41.025724 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:41.029824 ntpd[1989]: Deleting interface #11 lxc_health, fe80::bca7:fcff:fe7a:9826%7#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs Jan 13 21:13:41.030292 ntpd[1989]: 13 Jan 21:13:41 ntpd[1989]: Deleting interface #11 lxc_health, fe80::bca7:fcff:fe7a:9826%7#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs Jan 13 21:13:41.135268 kubelet[2471]: E0113 21:13:41.135214 2471 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:13:41.157028 kubelet[2471]: I0113 21:13:41.156708 2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" path="/var/lib/kubelet/pods/cb7f921a-1941-467c-8f69-fd5d81bdb0e4/volumes" Jan 13 21:13:41.981975 kubelet[2471]: I0113 21:13:41.981926 2471 topology_manager.go:215] "Topology Admit Handler" podUID="b91edce4-6d3b-428f-9375-cdd8c1a5ca7f" podNamespace="kube-system" podName="cilium-operator-5cc964979-cxkrn" Jan 13 21:13:41.982166 kubelet[2471]: E0113 21:13:41.982027 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="apply-sysctl-overwrites" Jan 13 21:13:41.982166 kubelet[2471]: E0113 21:13:41.982053 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="mount-bpf-fs" Jan 13 21:13:41.982166 kubelet[2471]: E0113 21:13:41.982072 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="cilium-agent" Jan 13 21:13:41.982166 kubelet[2471]: E0113 21:13:41.982091 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="mount-cgroup" Jan 13 21:13:41.982166 kubelet[2471]: E0113 21:13:41.982109 2471 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="clean-cilium-state" Jan 13 21:13:41.982166 kubelet[2471]: I0113 21:13:41.982150 2471 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb7f921a-1941-467c-8f69-fd5d81bdb0e4" containerName="cilium-agent" Jan 13 21:13:41.994571 systemd[1]: Created slice kubepods-besteffort-podb91edce4_6d3b_428f_9375_cdd8c1a5ca7f.slice - libcontainer container kubepods-besteffort-podb91edce4_6d3b_428f_9375_cdd8c1a5ca7f.slice. 
Jan 13 21:13:41.998450 kubelet[2471]: W0113 21:13:41.998367 2471 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.31.152" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.31.152' and this object Jan 13 21:13:41.998450 kubelet[2471]: E0113 21:13:41.998417 2471 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.31.152" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.31.152' and this object Jan 13 21:13:42.025943 kubelet[2471]: E0113 21:13:42.025894 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:42.031340 kubelet[2471]: I0113 21:13:42.031285 2471 topology_manager.go:215] "Topology Admit Handler" podUID="1d1bff2b-0b07-461a-bf27-d13aee503c54" podNamespace="kube-system" podName="cilium-vqnsz" Jan 13 21:13:42.043394 systemd[1]: Created slice kubepods-burstable-pod1d1bff2b_0b07_461a_bf27_d13aee503c54.slice - libcontainer container kubepods-burstable-pod1d1bff2b_0b07_461a_bf27_d13aee503c54.slice. Jan 13 21:13:42.109438 kubelet[2471]: I0113 21:13:42.109378 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b91edce4-6d3b-428f-9375-cdd8c1a5ca7f-cilium-config-path\") pod \"cilium-operator-5cc964979-cxkrn\" (UID: \"b91edce4-6d3b-428f-9375-cdd8c1a5ca7f\") " pod="kube-system/cilium-operator-5cc964979-cxkrn" Jan 13 21:13:42.109612 kubelet[2471]: I0113 21:13:42.109460 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmxx\" (UniqueName: \"kubernetes.io/projected/b91edce4-6d3b-428f-9375-cdd8c1a5ca7f-kube-api-access-hjmxx\") pod \"cilium-operator-5cc964979-cxkrn\" (UID: \"b91edce4-6d3b-428f-9375-cdd8c1a5ca7f\") " pod="kube-system/cilium-operator-5cc964979-cxkrn" Jan 13 21:13:42.210723 kubelet[2471]: I0113 21:13:42.210576 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-xtables-lock\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.210723 kubelet[2471]: I0113 21:13:42.210650 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-bpf-maps\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.210723 kubelet[2471]: I0113 21:13:42.210719 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-run\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.210765 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-etc-cni-netd\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.210813 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-config-path\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.210860 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjr5\" (UniqueName: \"kubernetes.io/projected/1d1bff2b-0b07-461a-bf27-d13aee503c54-kube-api-access-pfjr5\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.210924 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-cgroup\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.210967 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-cni-path\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211579 kubelet[2471]: I0113 21:13:42.211039 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-hostproc\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211884 kubelet[2471]: I0113 21:13:42.211092 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d1bff2b-0b07-461a-bf27-d13aee503c54-hubble-tls\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211884 kubelet[2471]: I0113 21:13:42.211162 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-ipsec-secrets\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211884 kubelet[2471]: I0113 21:13:42.211289 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d1bff2b-0b07-461a-bf27-d13aee503c54-clustermesh-secrets\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211884 kubelet[2471]: I0113 21:13:42.211335 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-host-proc-sys-kernel\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " 
pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.211884 kubelet[2471]: I0113 21:13:42.211481 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-lib-modules\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.212178 kubelet[2471]: I0113 21:13:42.211556 2471 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d1bff2b-0b07-461a-bf27-d13aee503c54-host-proc-sys-net\") pod \"cilium-vqnsz\" (UID: \"1d1bff2b-0b07-461a-bf27-d13aee503c54\") " pod="kube-system/cilium-vqnsz" Jan 13 21:13:42.505619 kubelet[2471]: I0113 21:13:42.505567 2471 setters.go:568] "Node became not ready" node="172.31.31.152" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:13:42Z","lastTransitionTime":"2025-01-13T21:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 21:13:43.026658 kubelet[2471]: E0113 21:13:43.026601 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:43.211961 kubelet[2471]: E0113 21:13:43.211579 2471 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:13:43.211961 kubelet[2471]: E0113 21:13:43.211705 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b91edce4-6d3b-428f-9375-cdd8c1a5ca7f-cilium-config-path podName:b91edce4-6d3b-428f-9375-cdd8c1a5ca7f nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.711659706 +0000 UTC m=+83.711748687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b91edce4-6d3b-428f-9375-cdd8c1a5ca7f-cilium-config-path") pod "cilium-operator-5cc964979-cxkrn" (UID: "b91edce4-6d3b-428f-9375-cdd8c1a5ca7f") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:13:43.320594 kubelet[2471]: E0113 21:13:43.320368 2471 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:13:43.320594 kubelet[2471]: E0113 21:13:43.320483 2471 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-config-path podName:1d1bff2b-0b07-461a-bf27-d13aee503c54 nodeName:}" failed. No retries permitted until 2025-01-13 21:13:43.820454214 +0000 UTC m=+83.820543183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1d1bff2b-0b07-461a-bf27-d13aee503c54-cilium-config-path") pod "cilium-vqnsz" (UID: "1d1bff2b-0b07-461a-bf27-d13aee503c54") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:13:43.799258 containerd[2018]: time="2025-01-13T21:13:43.799173361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cxkrn,Uid:b91edce4-6d3b-428f-9375-cdd8c1a5ca7f,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:43.842369 containerd[2018]: time="2025-01-13T21:13:43.842055301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:43.842369 containerd[2018]: time="2025-01-13T21:13:43.842161741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:43.842369 containerd[2018]: time="2025-01-13T21:13:43.842188609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:43.842369 containerd[2018]: time="2025-01-13T21:13:43.842363425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:43.856831 containerd[2018]: time="2025-01-13T21:13:43.856761805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vqnsz,Uid:1d1bff2b-0b07-461a-bf27-d13aee503c54,Namespace:kube-system,Attempt:0,}" Jan 13 21:13:43.895667 systemd[1]: Started cri-containerd-f2a18ced7eafc843f2927decf54550274db25173b7c0e778482bbee9f53e9d0c.scope - libcontainer container f2a18ced7eafc843f2927decf54550274db25173b7c0e778482bbee9f53e9d0c. Jan 13 21:13:43.912200 containerd[2018]: time="2025-01-13T21:13:43.911695225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:13:43.912200 containerd[2018]: time="2025-01-13T21:13:43.912036469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:13:43.912713 containerd[2018]: time="2025-01-13T21:13:43.912312997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:43.914325 containerd[2018]: time="2025-01-13T21:13:43.914171749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:13:43.961305 systemd[1]: Started cri-containerd-e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9.scope - libcontainer container e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9. 
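A few entries up, both cilium-config mounts fail with "failed to sync configmap cache" and the kubelet schedules the retry for 500ms later ("durationBeforeRetry 500ms"); by 21:13:43.8 the mounts have gone through and the two sandboxes above are created. That 500ms is the first step of an exponential backoff; the sketch below shows the general shape with illustrative parameters, not the kubelet's exact constants.

from datetime import datetime, timedelta, timezone

def next_retry(last_failure: datetime, attempt: int,
               base: timedelta = timedelta(milliseconds=500),
               factor: float = 2.0,
               cap: timedelta = timedelta(minutes=2)) -> datetime:
    # durationBeforeRetry = min(base * factor**attempt, cap), counted from the failure.
    delay = min(base * (factor ** attempt), cap)
    return last_failure + delay

failed = datetime(2025, 1, 13, 21, 13, 43, 211705, tzinfo=timezone.utc)
print(next_retry(failed, attempt=0))  # first retry ~500ms later, close to the 21:13:43.711659706 in the log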
Jan 13 21:13:43.989744 containerd[2018]: time="2025-01-13T21:13:43.989566454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-cxkrn,Uid:b91edce4-6d3b-428f-9375-cdd8c1a5ca7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2a18ced7eafc843f2927decf54550274db25173b7c0e778482bbee9f53e9d0c\"" Jan 13 21:13:43.994142 containerd[2018]: time="2025-01-13T21:13:43.993941678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:13:44.018899 containerd[2018]: time="2025-01-13T21:13:44.018748150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vqnsz,Uid:1d1bff2b-0b07-461a-bf27-d13aee503c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\"" Jan 13 21:13:44.023647 containerd[2018]: time="2025-01-13T21:13:44.023432386Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:13:44.027488 kubelet[2471]: E0113 21:13:44.027419 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:44.048688 containerd[2018]: time="2025-01-13T21:13:44.048541762Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964\"" Jan 13 21:13:44.050015 containerd[2018]: time="2025-01-13T21:13:44.049456786Z" level=info msg="StartContainer for \"3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964\"" Jan 13 21:13:44.090323 systemd[1]: Started cri-containerd-3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964.scope - libcontainer container 3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964. Jan 13 21:13:44.137779 containerd[2018]: time="2025-01-13T21:13:44.137609338Z" level=info msg="StartContainer for \"3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964\" returns successfully" Jan 13 21:13:44.147334 systemd[1]: cri-containerd-3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964.scope: Deactivated successfully. 
Jan 13 21:13:44.205468 containerd[2018]: time="2025-01-13T21:13:44.205394123Z" level=info msg="shim disconnected" id=3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964 namespace=k8s.io Jan 13 21:13:44.206021 containerd[2018]: time="2025-01-13T21:13:44.205741403Z" level=warning msg="cleaning up after shim disconnected" id=3f54c6079fd37dd07fe8d9dcbb652309b12e120825249c8ada0e74c325c4a964 namespace=k8s.io Jan 13 21:13:44.206021 containerd[2018]: time="2025-01-13T21:13:44.205771979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:44.442969 containerd[2018]: time="2025-01-13T21:13:44.441633732Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:13:44.461417 containerd[2018]: time="2025-01-13T21:13:44.461347716Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1\"" Jan 13 21:13:44.462292 containerd[2018]: time="2025-01-13T21:13:44.462225432Z" level=info msg="StartContainer for \"8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1\"" Jan 13 21:13:44.503297 systemd[1]: Started cri-containerd-8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1.scope - libcontainer container 8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1. Jan 13 21:13:44.555127 containerd[2018]: time="2025-01-13T21:13:44.555069456Z" level=info msg="StartContainer for \"8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1\" returns successfully" Jan 13 21:13:44.568192 systemd[1]: cri-containerd-8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1.scope: Deactivated successfully. Jan 13 21:13:44.607554 containerd[2018]: time="2025-01-13T21:13:44.607461169Z" level=info msg="shim disconnected" id=8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1 namespace=k8s.io Jan 13 21:13:44.608010 containerd[2018]: time="2025-01-13T21:13:44.607838341Z" level=warning msg="cleaning up after shim disconnected" id=8d37bdc39467c996aa9531f357bd86d96f51cb229b7d0b0b8f09d4f95f1f55d1 namespace=k8s.io Jan 13 21:13:44.608010 containerd[2018]: time="2025-01-13T21:13:44.607866817Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:45.028389 kubelet[2471]: E0113 21:13:45.028338 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:45.262263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651180271.mount: Deactivated successfully. Jan 13 21:13:45.466596 containerd[2018]: time="2025-01-13T21:13:45.466505881Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:13:45.565726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116444139.mount: Deactivated successfully. 
Jan 13 21:13:45.574268 containerd[2018]: time="2025-01-13T21:13:45.574189717Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e\"" Jan 13 21:13:45.575121 containerd[2018]: time="2025-01-13T21:13:45.575073241Z" level=info msg="StartContainer for \"7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e\"" Jan 13 21:13:45.621312 systemd[1]: Started cri-containerd-7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e.scope - libcontainer container 7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e. Jan 13 21:13:45.672525 containerd[2018]: time="2025-01-13T21:13:45.672390218Z" level=info msg="StartContainer for \"7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e\" returns successfully" Jan 13 21:13:45.674535 systemd[1]: cri-containerd-7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e.scope: Deactivated successfully. Jan 13 21:13:45.722648 containerd[2018]: time="2025-01-13T21:13:45.722429114Z" level=info msg="shim disconnected" id=7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e namespace=k8s.io Jan 13 21:13:45.722648 containerd[2018]: time="2025-01-13T21:13:45.722501942Z" level=warning msg="cleaning up after shim disconnected" id=7c4b0b77f14d4f02e74377b5d11703518927b42ddd004de0925e0334cab65f2e namespace=k8s.io Jan 13 21:13:45.722648 containerd[2018]: time="2025-01-13T21:13:45.722537822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:46.028608 kubelet[2471]: E0113 21:13:46.028464 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:46.136817 kubelet[2471]: E0113 21:13:46.136774 2471 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:13:46.458379 containerd[2018]: time="2025-01-13T21:13:46.458206754Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:13:46.486140 containerd[2018]: time="2025-01-13T21:13:46.486062426Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b\"" Jan 13 21:13:46.487444 containerd[2018]: time="2025-01-13T21:13:46.487298042Z" level=info msg="StartContainer for \"c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b\"" Jan 13 21:13:46.547302 systemd[1]: Started cri-containerd-c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b.scope - libcontainer container c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b. Jan 13 21:13:46.586703 systemd[1]: cri-containerd-c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b.scope: Deactivated successfully. 
Jan 13 21:13:46.593184 containerd[2018]: time="2025-01-13T21:13:46.590896742Z" level=info msg="StartContainer for \"c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b\" returns successfully" Jan 13 21:13:46.636111 containerd[2018]: time="2025-01-13T21:13:46.636015711Z" level=info msg="shim disconnected" id=c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b namespace=k8s.io Jan 13 21:13:46.636111 containerd[2018]: time="2025-01-13T21:13:46.636160791Z" level=warning msg="cleaning up after shim disconnected" id=c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b namespace=k8s.io Jan 13 21:13:46.636111 containerd[2018]: time="2025-01-13T21:13:46.636184503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:13:46.815863 systemd[1]: run-containerd-runc-k8s.io-c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b-runc.0DhziV.mount: Deactivated successfully. Jan 13 21:13:46.816061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4b20b495007ce2e725b858ebb43038b7cf743d1df412a6bb1531952f264442b-rootfs.mount: Deactivated successfully. Jan 13 21:13:47.028899 kubelet[2471]: E0113 21:13:47.028847 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:47.467128 containerd[2018]: time="2025-01-13T21:13:47.467058147Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:13:47.509966 containerd[2018]: time="2025-01-13T21:13:47.509815875Z" level=info msg="CreateContainer within sandbox \"e5519dd13e4e6d3f69b751e047bde3d18bbc2a4d46f4b52f01a731e8e19569e9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7\"" Jan 13 21:13:47.511252 containerd[2018]: time="2025-01-13T21:13:47.510852867Z" level=info msg="StartContainer for \"53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7\"" Jan 13 21:13:47.564322 systemd[1]: Started cri-containerd-53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7.scope - libcontainer container 53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7. 
Jan 13 21:13:47.619557 containerd[2018]: time="2025-01-13T21:13:47.619378072Z" level=info msg="StartContainer for \"53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7\" returns successfully" Jan 13 21:13:48.029390 kubelet[2471]: E0113 21:13:48.029297 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:48.356120 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 21:13:49.030110 kubelet[2471]: E0113 21:13:49.030038 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:49.800383 containerd[2018]: time="2025-01-13T21:13:49.800308974Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:49.802054 containerd[2018]: time="2025-01-13T21:13:49.801946758Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137730" Jan 13 21:13:49.805034 containerd[2018]: time="2025-01-13T21:13:49.804476586Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:13:49.808311 containerd[2018]: time="2025-01-13T21:13:49.808241286Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 5.814169252s" Jan 13 21:13:49.808659 containerd[2018]: time="2025-01-13T21:13:49.808306830Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:13:49.810888 containerd[2018]: time="2025-01-13T21:13:49.810826794Z" level=info msg="CreateContainer within sandbox \"f2a18ced7eafc843f2927decf54550274db25173b7c0e778482bbee9f53e9d0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:13:49.835636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287427472.mount: Deactivated successfully. Jan 13 21:13:49.836562 containerd[2018]: time="2025-01-13T21:13:49.836392015Z" level=info msg="CreateContainer within sandbox \"f2a18ced7eafc843f2927decf54550274db25173b7c0e778482bbee9f53e9d0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1800d6b06776a5f80a44abc1fd3d05894ede5e42abfa7e14ae476a70adc21445\"" Jan 13 21:13:49.838665 containerd[2018]: time="2025-01-13T21:13:49.837270043Z" level=info msg="StartContainer for \"1800d6b06776a5f80a44abc1fd3d05894ede5e42abfa7e14ae476a70adc21445\"" Jan 13 21:13:49.887504 systemd[1]: Started cri-containerd-1800d6b06776a5f80a44abc1fd3d05894ede5e42abfa7e14ae476a70adc21445.scope - libcontainer container 1800d6b06776a5f80a44abc1fd3d05894ede5e42abfa7e14ae476a70adc21445. 
Jan 13 21:13:49.956117 containerd[2018]: time="2025-01-13T21:13:49.955954375Z" level=info msg="StartContainer for \"1800d6b06776a5f80a44abc1fd3d05894ede5e42abfa7e14ae476a70adc21445\" returns successfully" Jan 13 21:13:50.031009 kubelet[2471]: E0113 21:13:50.030891 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:50.502827 kubelet[2471]: I0113 21:13:50.502155 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-cxkrn" podStartSLOduration=3.686425466 podStartE2EDuration="9.502098282s" podCreationTimestamp="2025-01-13 21:13:41 +0000 UTC" firstStartedPulling="2025-01-13 21:13:43.992950514 +0000 UTC m=+83.993039507" lastFinishedPulling="2025-01-13 21:13:49.80862333 +0000 UTC m=+89.808712323" observedRunningTime="2025-01-13 21:13:50.500863674 +0000 UTC m=+90.500952667" watchObservedRunningTime="2025-01-13 21:13:50.502098282 +0000 UTC m=+90.502187275" Jan 13 21:13:50.502827 kubelet[2471]: I0113 21:13:50.502518 2471 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vqnsz" podStartSLOduration=9.502465746 podStartE2EDuration="9.502465746s" podCreationTimestamp="2025-01-13 21:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:48.510236992 +0000 UTC m=+88.510325997" watchObservedRunningTime="2025-01-13 21:13:50.502465746 +0000 UTC m=+90.502554715" Jan 13 21:13:51.031868 kubelet[2471]: E0113 21:13:51.031799 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:52.032140 kubelet[2471]: E0113 21:13:52.032053 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:52.515655 (udev-worker)[5079]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:13:52.515860 (udev-worker)[5081]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:13:52.518230 systemd-networkd[1933]: lxc_health: Link UP Jan 13 21:13:52.529398 systemd-networkd[1933]: lxc_health: Gained carrier Jan 13 21:13:53.033159 kubelet[2471]: E0113 21:13:53.033092 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:54.009707 systemd-networkd[1933]: lxc_health: Gained IPv6LL Jan 13 21:13:54.033966 kubelet[2471]: E0113 21:13:54.033893 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:55.034873 kubelet[2471]: E0113 21:13:55.034805 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:56.029859 ntpd[1989]: Listen normally on 15 lxc_health [fe80::8c8a:adff:feab:25c2%15]:123 Jan 13 21:13:56.030443 ntpd[1989]: 13 Jan 21:13:56 ntpd[1989]: Listen normally on 15 lxc_health [fe80::8c8a:adff:feab:25c2%15]:123 Jan 13 21:13:56.035446 kubelet[2471]: E0113 21:13:56.035332 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:56.238333 systemd[1]: run-containerd-runc-k8s.io-53a5a7aa6e3584a3dfd81be67adcc16d7bde1c7e14e438eb6988dee563afddc7-runc.N7GPeG.mount: Deactivated successfully. 
Jan 13 21:13:57.036604 kubelet[2471]: E0113 21:13:57.036533 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:58.036956 kubelet[2471]: E0113 21:13:58.036889 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:13:59.037409 kubelet[2471]: E0113 21:13:59.037346 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:00.038369 kubelet[2471]: E0113 21:14:00.038301 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:00.965521 kubelet[2471]: E0113 21:14:00.965455 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:01.038708 kubelet[2471]: E0113 21:14:01.038623 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:02.039713 kubelet[2471]: E0113 21:14:02.039646 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:03.040191 kubelet[2471]: E0113 21:14:03.040128 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:04.040770 kubelet[2471]: E0113 21:14:04.040708 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:05.041611 kubelet[2471]: E0113 21:14:05.041552 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:06.042102 kubelet[2471]: E0113 21:14:06.042041 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:07.042744 kubelet[2471]: E0113 21:14:07.042685 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:08.043749 kubelet[2471]: E0113 21:14:08.043686 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:09.044758 kubelet[2471]: E0113 21:14:09.044699 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:10.045220 kubelet[2471]: E0113 21:14:10.045158 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:11.045613 kubelet[2471]: E0113 21:14:11.045543 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:12.046231 kubelet[2471]: E0113 21:14:12.046160 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:13.047297 kubelet[2471]: E0113 21:14:13.047236 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:13.759050 kubelet[2471]: E0113 21:14:13.758859 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 13 21:14:14.048122 kubelet[2471]: E0113 21:14:14.047941 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:15.049012 kubelet[2471]: E0113 21:14:15.048942 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:16.049708 kubelet[2471]: E0113 21:14:16.049648 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:17.050532 kubelet[2471]: E0113 21:14:17.050469 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:18.051706 kubelet[2471]: E0113 21:14:18.051637 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:19.052594 kubelet[2471]: E0113 21:14:19.052533 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:20.053072 kubelet[2471]: E0113 21:14:20.053020 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:20.965413 kubelet[2471]: E0113 21:14:20.965365 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:20.995873 containerd[2018]: time="2025-01-13T21:14:20.995805925Z" level=info msg="StopPodSandbox for \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\"" Jan 13 21:14:20.997223 containerd[2018]: time="2025-01-13T21:14:20.995949433Z" level=info msg="TearDown network for sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" successfully" Jan 13 21:14:20.997223 containerd[2018]: time="2025-01-13T21:14:20.995974537Z" level=info msg="StopPodSandbox for \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" returns successfully" Jan 13 21:14:20.997223 containerd[2018]: time="2025-01-13T21:14:20.996634489Z" level=info msg="RemovePodSandbox for \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\"" Jan 13 21:14:20.997223 containerd[2018]: time="2025-01-13T21:14:20.996714649Z" level=info msg="Forcibly stopping sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\"" Jan 13 21:14:20.997223 containerd[2018]: time="2025-01-13T21:14:20.996807997Z" level=info msg="TearDown network for sandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" successfully" Jan 13 21:14:21.003751 containerd[2018]: time="2025-01-13T21:14:21.003466269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:14:21.003751 containerd[2018]: time="2025-01-13T21:14:21.003565197Z" level=info msg="RemovePodSandbox \"646ce69e99f7befea8c4b67880bb38e4a9b0330df181191a894dfbab171f86cd\" returns successfully" Jan 13 21:14:21.053895 kubelet[2471]: E0113 21:14:21.053831 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:22.054561 kubelet[2471]: E0113 21:14:22.054500 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:23.055009 kubelet[2471]: E0113 21:14:23.054926 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:23.759684 kubelet[2471]: E0113 21:14:23.759626 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 21:14:24.055539 kubelet[2471]: E0113 21:14:24.055392 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:25.056546 kubelet[2471]: E0113 21:14:25.056485 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:26.056744 kubelet[2471]: E0113 21:14:26.056687 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:27.057137 kubelet[2471]: E0113 21:14:27.057062 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:28.057554 kubelet[2471]: E0113 21:14:28.057494 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:29.058330 kubelet[2471]: E0113 21:14:29.058272 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:30.058704 kubelet[2471]: E0113 21:14:30.058635 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:31.059177 kubelet[2471]: E0113 21:14:31.059119 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:32.059381 kubelet[2471]: E0113 21:14:32.059309 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:33.060125 kubelet[2471]: E0113 21:14:33.060061 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:33.759953 kubelet[2471]: E0113 21:14:33.759886 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": context deadline exceeded" Jan 13 21:14:34.061084 kubelet[2471]: E0113 21:14:34.060733 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:35.061617 kubelet[2471]: E0113 21:14:35.061563 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:36.062776 kubelet[2471]: E0113 
21:14:36.062713 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:37.063499 kubelet[2471]: E0113 21:14:37.063431 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:38.063662 kubelet[2471]: E0113 21:14:38.063603 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:39.063947 kubelet[2471]: E0113 21:14:39.063888 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:40.064826 kubelet[2471]: E0113 21:14:40.064766 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:40.965138 kubelet[2471]: E0113 21:14:40.965070 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:41.065803 kubelet[2471]: E0113 21:14:41.065755 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:42.065936 kubelet[2471]: E0113 21:14:42.065863 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:42.915745 kubelet[2471]: E0113 21:14:42.915432 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": unexpected EOF" Jan 13 21:14:42.931280 kubelet[2471]: E0113 21:14:42.931038 2471 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": read tcp 172.31.31.152:35662->172.31.22.69:6443: read: connection reset by peer" Jan 13 21:14:42.931280 kubelet[2471]: I0113 21:14:42.931119 2471 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 13 21:14:42.932318 kubelet[2471]: E0113 21:14:42.932264 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="200ms" Jan 13 21:14:43.066959 kubelet[2471]: E0113 21:14:43.066915 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:43.133326 kubelet[2471]: E0113 21:14:43.133262 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="400ms" Jan 13 21:14:43.534765 kubelet[2471]: E0113 21:14:43.534717 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": dial tcp 172.31.22.69:6443: connect: connection refused" interval="800ms" Jan 13 21:14:44.068131 kubelet[2471]: E0113 21:14:44.068073 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:45.069039 kubelet[2471]: 
E0113 21:14:45.068946 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:46.070013 kubelet[2471]: E0113 21:14:46.069953 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:47.070464 kubelet[2471]: E0113 21:14:47.070402 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:48.071086 kubelet[2471]: E0113 21:14:48.071014 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:49.071892 kubelet[2471]: E0113 21:14:49.071832 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:50.072706 kubelet[2471]: E0113 21:14:50.072648 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:51.072865 kubelet[2471]: E0113 21:14:51.072776 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:52.073754 kubelet[2471]: E0113 21:14:52.073689 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:53.074469 kubelet[2471]: E0113 21:14:53.074415 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:54.075353 kubelet[2471]: E0113 21:14:54.075292 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:54.336108 kubelet[2471]: E0113 21:14:54.335922 2471 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.31.152?timeout=10s\": dial tcp 172.31.22.69:6443: i/o timeout" interval="1.6s" Jan 13 21:14:54.559880 kubelet[2471]: E0113 21:14:54.559595 2471 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.31.152\": Get \"https://172.31.22.69:6443/api/v1/nodes/172.31.31.152?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 13 21:14:55.076521 kubelet[2471]: E0113 21:14:55.076460 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:56.076873 kubelet[2471]: E0113 21:14:56.076808 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:57.077014 kubelet[2471]: E0113 21:14:57.076928 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:58.077733 kubelet[2471]: E0113 21:14:58.077638 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:14:59.078856 kubelet[2471]: E0113 21:14:59.078793 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:00.079354 kubelet[2471]: E0113 21:15:00.079278 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 
21:15:00.965225 kubelet[2471]: E0113 21:15:00.965156 2471 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:01.079884 kubelet[2471]: E0113 21:15:01.079832 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:02.080459 kubelet[2471]: E0113 21:15:02.080391 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:03.081320 kubelet[2471]: E0113 21:15:03.081262 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:04.081471 kubelet[2471]: E0113 21:15:04.081411 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:15:04.560347 kubelet[2471]: E0113 21:15:04.560074 2471 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.31.152\": Get \"https://172.31.22.69:6443/api/v1/nodes/172.31.31.152?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 13 21:15:05.082463 kubelet[2471]: E0113 21:15:05.082411 2471 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"