Dec 13 01:54:45.211686 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 01:54:45.211731 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:54:45.211756 kernel: KASLR disabled due to lack of seed Dec 13 01:54:45.211772 kernel: efi: EFI v2.7 by EDK II Dec 13 01:54:45.211788 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Dec 13 01:54:45.211804 kernel: ACPI: Early table checksum verification disabled Dec 13 01:54:45.211822 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 01:54:45.211837 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 01:54:45.211853 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:54:45.211868 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 01:54:45.211888 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:54:45.211904 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 01:54:45.211919 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 01:54:45.211935 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 01:54:45.211954 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:54:45.211974 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 01:54:45.211991 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 01:54:45.212007 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 01:54:45.212024 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 01:54:45.212040 kernel: printk: bootconsole [uart0] enabled Dec 13 01:54:45.212056 kernel: NUMA: Failed to initialise from firmware Dec 13 01:54:45.212072 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:45.212089 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 01:54:45.212105 kernel: Zone ranges: Dec 13 01:54:45.212121 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 01:54:45.212137 kernel: DMA32 empty Dec 13 01:54:45.212157 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 01:54:45.212174 kernel: Movable zone start for each node Dec 13 01:54:45.212190 kernel: Early memory node ranges Dec 13 01:54:45.212206 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 01:54:45.212222 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 01:54:45.212255 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 01:54:45.212278 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 01:54:45.212295 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 01:54:45.212311 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 01:54:45.212328 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 01:54:45.212344 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 01:54:45.212360 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:54:45.212382 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 01:54:45.212400 kernel: psci: probing for conduit method from ACPI. Dec 13 01:54:45.212423 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 01:54:45.212440 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:54:45.212458 kernel: psci: Trusted OS migration not required Dec 13 01:54:45.212479 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:54:45.212496 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:54:45.212514 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:54:45.212531 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:54:45.212548 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:54:45.212566 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:54:45.212583 kernel: CPU features: detected: Spectre-v2 Dec 13 01:54:45.212600 kernel: CPU features: detected: Spectre-v3a Dec 13 01:54:45.212617 kernel: CPU features: detected: Spectre-BHB Dec 13 01:54:45.212634 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 01:54:45.212651 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 01:54:45.212673 kernel: alternatives: applying boot alternatives Dec 13 01:54:45.212693 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:45.212711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:54:45.212729 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:54:45.212746 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:54:45.212763 kernel: Fallback order for Node 0: 0 Dec 13 01:54:45.212781 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 01:54:45.212798 kernel: Policy zone: Normal Dec 13 01:54:45.212815 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:54:45.212832 kernel: software IO TLB: area num 2. Dec 13 01:54:45.212850 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 01:54:45.212872 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Dec 13 01:54:45.212890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:54:45.212907 kernel: trace event string verifier disabled Dec 13 01:54:45.212924 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:54:45.212942 kernel: rcu: RCU event tracing is enabled. Dec 13 01:54:45.212960 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:54:45.212978 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:54:45.212995 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:54:45.213013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
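The kernel command line recorded above carries the whole Flatcar boot configuration as key=value pairs: the A/B image to boot (BOOT_IMAGE=/flatcar/vmlinuz-a), the dm-verity-protected /usr mapping (mount.usr=/dev/mapper/usr, verity.usrhash=...), the root filesystem label, and EC2-specific overrides such as flatcar.oem.id=ec2. As a minimal, purely illustrative sketch (the parsing helper is hypothetical, not part of Flatcar or dracut), such a string can be split into a dictionary like this; on a running system the same text is available from /proc/cmdline:

# Illustrative only: split a kernel command line (the format logged above)
# into key=value pairs. Bare flags such as "earlycon" map to None; repeated
# keys like the two console= entries keep the last value in this simple form.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

with open("/proc/cmdline") as f:          # same format as the logged line
    params = parse_cmdline(f.read())

print(params.get("mount.usr"))            # e.g. /dev/mapper/usr
print(params.get("verity.usrhash"))       # the hash shown in the log above
print(params.get("root"))                 # LABEL=ROOT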
Dec 13 01:54:45.213030 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:54:45.213047 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:54:45.213068 kernel: GICv3: 96 SPIs implemented Dec 13 01:54:45.213086 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:54:45.213103 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:54:45.213120 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 01:54:45.213137 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 01:54:45.213154 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 01:54:45.213171 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:54:45.213189 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:54:45.213206 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 01:54:45.213223 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 01:54:45.213895 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 01:54:45.213920 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:54:45.213945 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 01:54:45.213963 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 01:54:45.213980 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 01:54:45.213998 kernel: Console: colour dummy device 80x25 Dec 13 01:54:45.214016 kernel: printk: console [tty1] enabled Dec 13 01:54:45.214033 kernel: ACPI: Core revision 20230628 Dec 13 01:54:45.214051 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 01:54:45.214070 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:54:45.214087 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:54:45.214105 kernel: landlock: Up and running. Dec 13 01:54:45.214127 kernel: SELinux: Initializing. Dec 13 01:54:45.214145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:45.214163 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:54:45.214181 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:45.214199 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:54:45.214217 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:54:45.214235 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:54:45.215409 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 01:54:45.215448 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 01:54:45.215467 kernel: Remapping and enabling EFI services. Dec 13 01:54:45.215486 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:54:45.215503 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:54:45.215521 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 01:54:45.215539 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 01:54:45.215558 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 01:54:45.215576 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:54:45.215593 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:54:45.215611 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:54:45.215633 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 01:54:45.215651 kernel: CPU features: detected: CRC32 instructions Dec 13 01:54:45.215681 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:54:45.215703 kernel: alternatives: applying system-wide alternatives Dec 13 01:54:45.215722 kernel: devtmpfs: initialized Dec 13 01:54:45.215741 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:54:45.215759 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:54:45.215778 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:54:45.215797 kernel: SMBIOS 3.0.0 present. Dec 13 01:54:45.215819 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 01:54:45.215838 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:54:45.215857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:54:45.215877 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:54:45.215895 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:54:45.215914 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:54:45.215933 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Dec 13 01:54:45.215955 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:54:45.215974 kernel: cpuidle: using governor menu Dec 13 01:54:45.215992 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:54:45.216010 kernel: ASID allocator initialised with 65536 entries Dec 13 01:54:45.216029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:54:45.216047 kernel: Serial: AMBA PL011 UART driver Dec 13 01:54:45.216066 kernel: Modules: 17520 pages in range for non-PLT usage Dec 13 01:54:45.216084 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:54:45.216103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:54:45.216125 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:54:45.216144 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:54:45.216162 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:54:45.216180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:54:45.216199 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:54:45.216217 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:54:45.216235 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:54:45.216322 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:54:45.216343 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:54:45.216368 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:54:45.216387 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:54:45.216405 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:54:45.216424 kernel: ACPI: Interpreter enabled Dec 13 01:54:45.216442 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:54:45.216460 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:54:45.216479 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 01:54:45.216763 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:54:45.216989 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:54:45.217191 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:54:45.217453 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 01:54:45.217651 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 01:54:45.217676 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 01:54:45.217696 kernel: acpiphp: Slot [1] registered Dec 13 01:54:45.217714 kernel: acpiphp: Slot [2] registered Dec 13 01:54:45.217733 kernel: acpiphp: Slot [3] registered Dec 13 01:54:45.217758 kernel: acpiphp: Slot [4] registered Dec 13 01:54:45.217777 kernel: acpiphp: Slot [5] registered Dec 13 01:54:45.217795 kernel: acpiphp: Slot [6] registered Dec 13 01:54:45.217813 kernel: acpiphp: Slot [7] registered Dec 13 01:54:45.217831 kernel: acpiphp: Slot [8] registered Dec 13 01:54:45.217850 kernel: acpiphp: Slot [9] registered Dec 13 01:54:45.217868 kernel: acpiphp: Slot [10] registered Dec 13 01:54:45.217886 kernel: acpiphp: Slot [11] registered Dec 13 01:54:45.217904 kernel: acpiphp: Slot [12] registered Dec 13 01:54:45.217922 kernel: acpiphp: Slot [13] registered Dec 13 01:54:45.217945 kernel: acpiphp: Slot [14] registered Dec 13 01:54:45.217963 kernel: acpiphp: Slot [15] registered Dec 13 01:54:45.217982 kernel: acpiphp: Slot [16] registered Dec 13 01:54:45.218000 kernel: acpiphp: Slot [17] registered Dec 13 01:54:45.218018 kernel: acpiphp: Slot [18] registered Dec 13 01:54:45.218036 kernel: acpiphp: Slot [19] registered Dec 13 01:54:45.218054 kernel: acpiphp: Slot [20] registered Dec 13 01:54:45.218073 kernel: acpiphp: Slot [21] registered Dec 13 01:54:45.218091 kernel: acpiphp: Slot [22] registered Dec 13 01:54:45.218113 kernel: acpiphp: Slot [23] registered Dec 13 01:54:45.218132 kernel: acpiphp: Slot [24] registered Dec 13 01:54:45.218150 kernel: acpiphp: Slot [25] registered Dec 13 01:54:45.218168 kernel: acpiphp: Slot [26] registered Dec 13 01:54:45.218186 kernel: acpiphp: Slot [27] registered Dec 13 01:54:45.218205 kernel: acpiphp: Slot [28] registered Dec 13 01:54:45.218223 kernel: acpiphp: Slot [29] registered Dec 13 01:54:45.218303 kernel: acpiphp: Slot [30] registered Dec 13 01:54:45.218326 kernel: acpiphp: Slot [31] registered Dec 13 01:54:45.218344 kernel: PCI host bridge to bus 0000:00 Dec 13 01:54:45.218551 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 01:54:45.218756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:54:45.218934 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:45.219118 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 01:54:45.219412 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 01:54:45.219636 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 01:54:45.219846 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 01:54:45.220057 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:54:45.220385 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 01:54:45.220588 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:45.220813 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:54:45.221012 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 01:54:45.222382 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 01:54:45.223317 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 01:54:45.223531 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:54:45.223735 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 01:54:45.223936 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 01:54:45.224136 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 01:54:45.224383 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 01:54:45.224595 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 01:54:45.224796 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 01:54:45.224978 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:54:45.225161 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 01:54:45.225186 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:54:45.225205 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:54:45.225224 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:54:45.225282 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:54:45.225305 kernel: iommu: Default domain type: Translated Dec 13 01:54:45.225331 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:54:45.225351 kernel: efivars: Registered efivars operations Dec 13 01:54:45.225370 kernel: vgaarb: loaded Dec 13 01:54:45.225388 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:54:45.225407 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:54:45.225426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:54:45.225444 kernel: pnp: PnP ACPI init Dec 13 01:54:45.225670 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 01:54:45.225704 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:54:45.225724 kernel: NET: Registered PF_INET protocol family Dec 13 01:54:45.225743 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:54:45.225762 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:54:45.225782 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:54:45.225801 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:54:45.225820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:54:45.225840 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:54:45.225858 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:45.225882 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:54:45.225901 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:54:45.225919 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:54:45.225938 kernel: kvm [1]: HYP mode not available Dec 13 01:54:45.225956 kernel: Initialise system trusted keyrings Dec 13 01:54:45.225975 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:54:45.225994 kernel: Key type asymmetric registered Dec 13 01:54:45.226012 kernel: Asymmetric key parser 'x509' registered Dec 13 01:54:45.226031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:54:45.226053 kernel: io scheduler mq-deadline registered Dec 13 
01:54:45.226072 kernel: io scheduler kyber registered Dec 13 01:54:45.226091 kernel: io scheduler bfq registered Dec 13 01:54:45.227401 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 01:54:45.227433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:54:45.227453 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:54:45.227472 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 01:54:45.227490 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 01:54:45.227516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:54:45.227536 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 01:54:45.227740 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 01:54:45.227766 kernel: printk: console [ttyS0] disabled Dec 13 01:54:45.227785 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 01:54:45.227804 kernel: printk: console [ttyS0] enabled Dec 13 01:54:45.227823 kernel: printk: bootconsole [uart0] disabled Dec 13 01:54:45.227841 kernel: thunder_xcv, ver 1.0 Dec 13 01:54:45.227860 kernel: thunder_bgx, ver 1.0 Dec 13 01:54:45.227878 kernel: nicpf, ver 1.0 Dec 13 01:54:45.227901 kernel: nicvf, ver 1.0 Dec 13 01:54:45.228105 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:54:45.228367 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:44 UTC (1734054884) Dec 13 01:54:45.228397 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:54:45.228417 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 01:54:45.228437 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:54:45.228456 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:54:45.228482 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:54:45.228501 kernel: Segment Routing with IPv6 Dec 13 01:54:45.228519 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:54:45.228538 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:54:45.228556 kernel: Key type dns_resolver registered Dec 13 01:54:45.228575 kernel: registered taskstats version 1 Dec 13 01:54:45.228593 kernel: Loading compiled-in X.509 certificates Dec 13 01:54:45.228612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:54:45.228630 kernel: Key type .fscrypt registered Dec 13 01:54:45.228648 kernel: Key type fscrypt-provisioning registered Dec 13 01:54:45.228671 kernel: ima: No TPM chip found, activating TPM-bypass! 
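The rtc-efi entry above prints both the wall-clock time and the raw epoch value it was derived from (1734054884). The two agree, as a quick standard-library check shows:

from datetime import datetime, timezone

# Epoch value printed by rtc-efi in the log above.
print(datetime.fromtimestamp(1734054884, tz=timezone.utc))
# -> 2024-12-13 01:54:44+00:00, matching "setting system clock to 2024-12-13T01:54:44 UTC"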
Dec 13 01:54:45.228690 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:54:45.228710 kernel: ima: No architecture policies found Dec 13 01:54:45.228728 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:54:45.228747 kernel: clk: Disabling unused clocks Dec 13 01:54:45.228766 kernel: Freeing unused kernel memory: 39360K Dec 13 01:54:45.228784 kernel: Run /init as init process Dec 13 01:54:45.228803 kernel: with arguments: Dec 13 01:54:45.228821 kernel: /init Dec 13 01:54:45.228843 kernel: with environment: Dec 13 01:54:45.228862 kernel: HOME=/ Dec 13 01:54:45.228880 kernel: TERM=linux Dec 13 01:54:45.228898 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:54:45.228921 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:45.228944 systemd[1]: Detected virtualization amazon. Dec 13 01:54:45.228965 systemd[1]: Detected architecture arm64. Dec 13 01:54:45.228989 systemd[1]: Running in initrd. Dec 13 01:54:45.229009 systemd[1]: No hostname configured, using default hostname. Dec 13 01:54:45.229029 systemd[1]: Hostname set to . Dec 13 01:54:45.229050 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:45.229069 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:54:45.229090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:45.229110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:45.229132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:54:45.229157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:45.229179 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:54:45.229200 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:54:45.229223 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:54:45.229367 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:54:45.229392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:45.229413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:45.229440 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:45.229461 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:45.229481 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:54:45.229502 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:45.229523 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:45.229544 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:45.229565 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:54:45.229585 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:54:45.229606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:54:45.229631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:45.229652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:45.229672 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:45.229799 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:54:45.229827 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:45.229847 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:54:45.229868 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:54:45.229889 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:45.229916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:45.229937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:45.229957 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:45.229979 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:45.230000 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:54:45.230022 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:45.230091 systemd-journald[251]: Collecting audit messages is disabled. Dec 13 01:54:45.230137 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:54:45.230158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:45.230184 kernel: Bridge firewalling registered Dec 13 01:54:45.230205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:45.230226 systemd-journald[251]: Journal started Dec 13 01:54:45.230307 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2fd6aea23311729c1fc0ede98fd785) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:45.175234 systemd-modules-load[252]: Inserted module 'overlay' Dec 13 01:54:45.225271 systemd-modules-load[252]: Inserted module 'br_netfilter' Dec 13 01:54:45.240754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:45.245394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:45.252316 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:45.253063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:45.279687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:45.287820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:54:45.288648 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:45.303705 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:45.311541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:54:45.331907 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:45.337479 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:45.350637 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:54:45.375831 dracut-cmdline[283]: dracut-dracut-053 Dec 13 01:54:45.381746 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:54:45.423461 systemd-resolved[289]: Positive Trust Anchors: Dec 13 01:54:45.423502 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:45.423566 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:45.537263 kernel: SCSI subsystem initialized Dec 13 01:54:45.543282 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:54:45.556318 kernel: iscsi: registered transport (tcp) Dec 13 01:54:45.579566 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:54:45.579643 kernel: QLogic iSCSI HBA Driver Dec 13 01:54:45.659280 kernel: random: crng init done Dec 13 01:54:45.659513 systemd-resolved[289]: Defaulting to hostname 'linux'. Dec 13 01:54:45.663168 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:45.666135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:45.692192 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:45.701528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:54:45.747290 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:54:45.747369 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:54:45.747397 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:54:45.814301 kernel: raid6: neonx8 gen() 6716 MB/s Dec 13 01:54:45.831305 kernel: raid6: neonx4 gen() 6563 MB/s Dec 13 01:54:45.848271 kernel: raid6: neonx2 gen() 5471 MB/s Dec 13 01:54:45.865274 kernel: raid6: neonx1 gen() 3963 MB/s Dec 13 01:54:45.882272 kernel: raid6: int64x8 gen() 3807 MB/s Dec 13 01:54:45.899272 kernel: raid6: int64x4 gen() 3728 MB/s Dec 13 01:54:45.916271 kernel: raid6: int64x2 gen() 3599 MB/s Dec 13 01:54:45.934019 kernel: raid6: int64x1 gen() 2775 MB/s Dec 13 01:54:45.934059 kernel: raid6: using algorithm neonx8 gen() 6716 MB/s Dec 13 01:54:45.951998 kernel: raid6: .... 
xor() 4883 MB/s, rmw enabled Dec 13 01:54:45.952035 kernel: raid6: using neon recovery algorithm Dec 13 01:54:45.960644 kernel: xor: measuring software checksum speed Dec 13 01:54:45.960709 kernel: 8regs : 10957 MB/sec Dec 13 01:54:45.961728 kernel: 32regs : 11945 MB/sec Dec 13 01:54:45.962900 kernel: arm64_neon : 9583 MB/sec Dec 13 01:54:45.962934 kernel: xor: using function: 32regs (11945 MB/sec) Dec 13 01:54:46.049285 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:54:46.071769 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:46.082909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:46.121041 systemd-udevd[469]: Using default interface naming scheme 'v255'. Dec 13 01:54:46.130156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:46.151481 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:54:46.184206 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Dec 13 01:54:46.246000 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:46.255595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:54:46.377692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:46.388575 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:54:46.435685 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:46.450188 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:46.457179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:46.461431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:46.483601 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:54:46.523557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:54:46.594297 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:54:46.594367 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 01:54:46.609548 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:54:46.609815 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:54:46.610043 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:49:bc:19:3e:73 Dec 13 01:54:46.607164 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:46.607412 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:46.614883 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:46.620273 (udev-worker)[533]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:54:46.625205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:46.629403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:46.634453 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:46.652732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:54:46.673269 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 01:54:46.675275 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:54:46.684272 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:54:46.690013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:54:46.690081 kernel: GPT:9289727 != 16777215 Dec 13 01:54:46.690107 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:54:46.691379 kernel: GPT:9289727 != 16777215 Dec 13 01:54:46.692380 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:54:46.693307 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:46.709310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:46.720542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:54:46.764798 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:46.844967 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:54:46.859300 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (518) Dec 13 01:54:46.890376 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (538) Dec 13 01:54:46.908772 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:54:46.968377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:54:46.984023 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:46.986343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:54:46.998505 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:54:47.017269 disk-uuid[660]: Primary Header is updated. Dec 13 01:54:47.017269 disk-uuid[660]: Secondary Entries is updated. Dec 13 01:54:47.017269 disk-uuid[660]: Secondary Header is updated. Dec 13 01:54:47.027347 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:47.033850 kernel: GPT:disk_guids don't match. Dec 13 01:54:47.033911 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:54:47.033938 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:47.045313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:48.046833 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:54:48.048601 disk-uuid[661]: The operation has completed successfully. Dec 13 01:54:48.061421 kernel: block device autoloading is deprecated and will be removed. Dec 13 01:54:48.256648 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:54:48.256867 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:54:48.303547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:54:48.320956 sh[1005]: Success Dec 13 01:54:48.358286 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:54:48.479993 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:54:48.483898 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:54:48.491786 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
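The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", GPT:9289727 != 16777215, followed by disk-uuid.service rewriting the headers) are what the kernel prints when the backup GPT header still sits where the smaller original image ended rather than at the last LBA of the grown EBS volume. A rough sketch of the check the kernel is doing is below; it assumes a 512-byte logical sector size and read access to the device named in the log, only reads the primary header, and does not repair anything (GNU Parted or sgdisk can relocate the backup header).

import os
import struct

DEV = "/dev/nvme0n1"           # device from the log above; adjust as needed
SECTOR = 512                   # assumed logical sector size

fd = os.open(DEV, os.O_RDONLY)           # typically requires root
try:
    last_lba = os.lseek(fd, 0, os.SEEK_END) // SECTOR - 1   # last addressable LBA
    os.lseek(fd, 1 * SECTOR, os.SEEK_SET)                   # primary GPT header lives at LBA 1
    header = os.read(fd, 92)
finally:
    os.close(fd)

signature = header[0:8]                                      # should be b"EFI PART"
alternate_lba = struct.unpack_from("<Q", header, 32)[0]      # where the header says the backup copy is

print(signature, alternate_lba, last_lba)
if signature == b"EFI PART" and alternate_lba != last_lba:
    # This is the mismatch behind "Alt. header is not at the end of the disk".
    print("backup GPT header is not at the end of the disk")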
Dec 13 01:54:48.533592 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:54:48.533735 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:48.533764 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:54:48.536451 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:54:48.536490 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:54:48.637284 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:54:48.674881 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:54:48.678459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:54:48.694646 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:54:48.702583 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:54:48.723707 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:48.723771 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:48.723799 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:48.761289 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:48.779713 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:54:48.784306 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:48.821429 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:54:48.832680 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:54:48.883158 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:48.894709 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:48.954220 systemd-networkd[1198]: lo: Link UP Dec 13 01:54:48.955783 systemd-networkd[1198]: lo: Gained carrier Dec 13 01:54:48.959846 systemd-networkd[1198]: Enumeration completed Dec 13 01:54:48.961121 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:48.963664 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:48.963670 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:54:48.973557 systemd[1]: Reached target network.target - Network. Dec 13 01:54:48.977740 systemd-networkd[1198]: eth0: Link UP Dec 13 01:54:48.977748 systemd-networkd[1198]: eth0: Gained carrier Dec 13 01:54:48.977766 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:54:48.994324 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.17.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:54:49.380313 ignition[1156]: Ignition 2.19.0 Dec 13 01:54:49.380828 ignition[1156]: Stage: fetch-offline Dec 13 01:54:49.381414 ignition[1156]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.381437 ignition[1156]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.381924 ignition[1156]: Ignition finished successfully Dec 13 01:54:49.390905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:49.406022 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:54:49.430308 ignition[1209]: Ignition 2.19.0 Dec 13 01:54:49.430336 ignition[1209]: Stage: fetch Dec 13 01:54:49.431172 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.431198 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.431795 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.452495 ignition[1209]: PUT result: OK Dec 13 01:54:49.455517 ignition[1209]: parsed url from cmdline: "" Dec 13 01:54:49.455532 ignition[1209]: no config URL provided Dec 13 01:54:49.455547 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:54:49.455573 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:54:49.455604 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.457236 ignition[1209]: PUT result: OK Dec 13 01:54:49.463690 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:54:49.466415 ignition[1209]: GET result: OK Dec 13 01:54:49.466510 ignition[1209]: parsing config with SHA512: 3c122b6b1a13876bf5275dba2e4588a7468a8e09fa066cba35d73c3551438b8f6d766c6aca23754080086d9fec71a310ab7a55dee9f35a336af4d1db144c9d26 Dec 13 01:54:49.471813 unknown[1209]: fetched base config from "system" Dec 13 01:54:49.472359 unknown[1209]: fetched base config from "system" Dec 13 01:54:49.472832 ignition[1209]: fetch: fetch complete Dec 13 01:54:49.472375 unknown[1209]: fetched user config from "aws" Dec 13 01:54:49.472846 ignition[1209]: fetch: fetch passed Dec 13 01:54:49.480321 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:54:49.472932 ignition[1209]: Ignition finished successfully Dec 13 01:54:49.490149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:54:49.519035 ignition[1215]: Ignition 2.19.0 Dec 13 01:54:49.519066 ignition[1215]: Stage: kargs Dec 13 01:54:49.520460 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.520489 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.520652 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.522684 ignition[1215]: PUT result: OK Dec 13 01:54:49.532579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:54:49.529180 ignition[1215]: kargs: kargs passed Dec 13 01:54:49.529311 ignition[1215]: Ignition finished successfully Dec 13 01:54:49.546628 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
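The Ignition fetch stage above follows the IMDSv2 pattern visible in its own log lines: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of http://169.254.169.254/2019-10-01/user-data with that token attached. A minimal sketch of the same two requests using only the Python standard library (the token TTL value is an arbitrary choice, not taken from the log):

import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT for a session token, as in "PUT .../latest/api/token: attempt #1".
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# Step 2: GET the user data with the token, as in "GET .../2019-10-01/user-data: attempt #1".
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()
print(len(user_data), "bytes of user data")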
Dec 13 01:54:49.583573 ignition[1222]: Ignition 2.19.0 Dec 13 01:54:49.583889 ignition[1222]: Stage: disks Dec 13 01:54:49.584770 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:49.584795 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:49.584958 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:49.589172 ignition[1222]: PUT result: OK Dec 13 01:54:49.596401 ignition[1222]: disks: disks passed Dec 13 01:54:49.596499 ignition[1222]: Ignition finished successfully Dec 13 01:54:49.600762 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:54:49.602995 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:49.606060 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:54:49.608368 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:49.610455 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:49.612405 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:49.638630 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:54:49.702007 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:54:49.708181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:54:49.720459 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:54:49.809268 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:54:49.810571 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:54:49.812207 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:49.846466 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:49.862831 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:54:49.867628 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:54:49.870678 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:54:49.871658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:49.881136 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:54:49.890601 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:54:49.901078 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1249) Dec 13 01:54:49.901142 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:49.904687 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:49.904820 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:49.910749 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:49.912551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:54:50.079420 systemd-networkd[1198]: eth0: Gained IPv6LL Dec 13 01:54:50.328332 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:54:50.337298 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:54:50.347690 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:54:50.384671 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:54:50.903968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:50.913405 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:54:50.926554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:54:50.942881 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:54:50.947293 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:50.979311 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:54:50.990910 ignition[1362]: INFO : Ignition 2.19.0 Dec 13 01:54:50.993383 ignition[1362]: INFO : Stage: mount Dec 13 01:54:50.995223 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:50.995223 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:50.995223 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:51.002215 ignition[1362]: INFO : PUT result: OK Dec 13 01:54:51.007162 ignition[1362]: INFO : mount: mount passed Dec 13 01:54:51.009031 ignition[1362]: INFO : Ignition finished successfully Dec 13 01:54:51.012939 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:54:51.022569 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:54:51.052654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:54:51.071287 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1373) Dec 13 01:54:51.075015 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:54:51.075058 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:54:51.075085 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:54:51.081279 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:54:51.083897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:54:51.126318 ignition[1390]: INFO : Ignition 2.19.0 Dec 13 01:54:51.126318 ignition[1390]: INFO : Stage: files Dec 13 01:54:51.126318 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:51.126318 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:51.134392 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:51.134392 ignition[1390]: INFO : PUT result: OK Dec 13 01:54:51.142130 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:54:51.145200 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:54:51.145200 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:54:51.154479 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:54:51.157102 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:54:51.159785 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:54:51.159038 unknown[1390]: wrote ssh authorized keys file for user: core Dec 13 01:54:51.165417 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:51.165417 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:54:51.165417 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:51.175314 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:54:51.175314 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:51.175314 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:51.175314 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:51.175314 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 01:54:51.667154 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:54:52.080956 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:54:52.084942 ignition[1390]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:52.084942 ignition[1390]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:54:52.084942 ignition[1390]: INFO : files: files passed Dec 13 01:54:52.084942 ignition[1390]: INFO : Ignition finished successfully Dec 13 01:54:52.097309 systemd[1]: Finished ignition-files.service - Ignition (files). 
Dec 13 01:54:52.107594 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:54:52.117597 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:54:52.131772 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:54:52.131958 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:54:52.152173 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:52.152173 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:52.158777 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:54:52.164732 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:52.168896 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:54:52.189668 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:54:52.240817 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:54:52.243306 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:54:52.249942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:54:52.251939 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:54:52.254672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:54:52.272537 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:54:52.297038 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:52.312620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:54:52.337114 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:52.337651 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:52.338105 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:54:52.339313 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:54:52.339555 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:54:52.340164 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:54:52.340756 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:54:52.341052 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:54:52.341362 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:54:52.341630 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:54:52.341922 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:54:52.342207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:54:52.343230 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:54:52.343834 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:54:52.344433 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:54:52.345223 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:54:52.345528 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Dec 13 01:54:52.346730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:52.347063 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:52.347301 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:54:52.368685 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:52.378296 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:54:52.378641 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:54:52.386306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:54:52.397985 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:54:52.400506 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:54:52.400732 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:54:52.422071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:54:52.442767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:54:52.448282 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:54:52.448617 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:52.466811 ignition[1442]: INFO : Ignition 2.19.0 Dec 13 01:54:52.466811 ignition[1442]: INFO : Stage: umount Dec 13 01:54:52.466811 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:54:52.466811 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:54:52.466811 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:54:52.485273 ignition[1442]: INFO : PUT result: OK Dec 13 01:54:52.485273 ignition[1442]: INFO : umount: umount passed Dec 13 01:54:52.485273 ignition[1442]: INFO : Ignition finished successfully Dec 13 01:54:52.469855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:54:52.470168 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:54:52.495814 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:54:52.497228 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:54:52.501081 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:54:52.503424 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:54:52.509596 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:54:52.509722 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:54:52.511674 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:54:52.511765 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:54:52.514675 systemd[1]: Stopped target network.target - Network. Dec 13 01:54:52.514924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:54:52.515015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:54:52.515750 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:54:52.515814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:54:52.523669 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:52.527717 systemd[1]: Stopped target slices.target - Slice Units. 
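The "PUT http://169.254.169.254/latest/api/token" entries show Ignition talking to the EC2 instance metadata service with IMDSv2: a session token is requested first and then presented on every metadata read. A hedged curl sketch of that exchange (the token endpoint comes from the log; the header names are the standard EC2 IMDSv2 ones, and instance-id is just an example path):

    TOKEN=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -sf -H "X-aws-ec2-metadata-token: $TOKEN" \
        "http://169.254.169.254/latest/meta-data/instance-id"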
Dec 13 01:54:52.530995 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:54:52.546726 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:54:52.546811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:54:52.550132 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:54:52.550219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:54:52.556970 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:54:52.557080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:54:52.561433 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:54:52.561532 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:54:52.563843 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:54:52.565849 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:54:52.569676 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:54:52.571051 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:54:52.572819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:54:52.585375 systemd-networkd[1198]: eth0: DHCPv6 lease lost Dec 13 01:54:52.587392 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:54:52.589322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:54:52.596774 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:54:52.596982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:54:52.609361 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:54:52.609829 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:54:52.634600 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:54:52.634720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:52.638544 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:54:52.638658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:52.663440 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:54:52.668600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:54:52.668847 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:52.675577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:52.675669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:52.678356 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:54:52.678434 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:52.680386 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:54:52.680460 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:52.682815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:52.721628 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:54:52.722702 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:52.730986 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:54:52.732221 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Dec 13 01:54:52.737633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:54:52.737743 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:52.741556 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:54:52.741626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:52.743809 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:54:52.744069 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:52.747742 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:54:52.747828 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:52.749981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:52.750061 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:52.770661 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:54:52.772686 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:54:52.772794 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:52.775197 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:54:52.775298 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:52.777548 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:54:52.777645 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:52.779932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:52.780038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:52.793716 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:54:52.795296 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:54:52.798413 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:54:52.817504 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:54:52.837274 systemd[1]: Switching root. Dec 13 01:54:52.911689 systemd-journald[251]: Journal stopped Dec 13 01:54:56.234381 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 01:54:56.234520 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:54:56.234575 kernel: SELinux: policy capability open_perms=1 Dec 13 01:54:56.234610 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:54:56.234641 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:54:56.234672 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:54:56.234712 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:54:56.234741 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:54:56.234771 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:54:56.234812 kernel: audit: type=1403 audit(1734054894.411:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:54:56.234844 systemd[1]: Successfully loaded SELinux policy in 49.251ms. Dec 13 01:54:56.234893 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.405ms. 
Dec 13 01:54:56.234927 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:54:56.234959 systemd[1]: Detected virtualization amazon. Dec 13 01:54:56.234991 systemd[1]: Detected architecture arm64. Dec 13 01:54:56.235022 systemd[1]: Detected first boot. Dec 13 01:54:56.235055 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:54:56.235093 zram_generator::config[1485]: No configuration found. Dec 13 01:54:56.235131 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:54:56.235163 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:54:56.235193 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:54:56.235235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:54:56.235443 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:54:56.235477 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:54:56.235509 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:54:56.235545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:54:56.235578 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:54:56.235620 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:54:56.238321 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:54:56.238391 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:54:56.238429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:54:56.238461 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:54:56.238493 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:54:56.238546 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:54:56.238590 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:54:56.238623 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:54:56.238656 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:54:56.238688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:54:56.238720 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:54:56.238749 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:54:56.238781 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:54:56.238816 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:54:56.238848 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:54:56.238879 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:54:56.238909 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:54:56.238940 systemd[1]: Reached target swap.target - Swaps. 
Dec 13 01:54:56.238972 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:54:56.239004 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:54:56.239040 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:56.239072 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:56.239102 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:56.239137 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:54:56.239168 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:54:56.239200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:54:56.239229 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:54:56.242092 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:54:56.242136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:54:56.242171 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:54:56.242202 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:54:56.242418 systemd[1]: Reached target machines.target - Containers. Dec 13 01:54:56.242751 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:54:56.242904 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:56.242937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:54:56.242966 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:54:56.243000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:54:56.243030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:54:56.243061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:54:56.243091 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:54:56.243128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:54:56.243161 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:54:56.243190 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:54:56.243220 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:54:56.244776 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:54:56.244827 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:54:56.244858 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:54:56.244887 kernel: loop: module loaded Dec 13 01:54:56.244927 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:54:56.244958 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:54:56.244987 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:54:56.245017 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 13 01:54:56.245049 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:54:56.245080 systemd[1]: Stopped verity-setup.service. Dec 13 01:54:56.245117 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:54:56.245151 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:54:56.245185 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:54:56.245220 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:54:56.245372 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:54:56.245406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:54:56.245437 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:54:56.245465 kernel: fuse: init (API version 7.39) Dec 13 01:54:56.245501 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:54:56.245584 systemd-journald[1584]: Collecting audit messages is disabled. Dec 13 01:54:56.245638 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:54:56.245668 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:54:56.245698 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:54:56.245726 systemd-journald[1584]: Journal started Dec 13 01:54:56.245779 systemd-journald[1584]: Runtime Journal (/run/log/journal/ec2fd6aea23311729c1fc0ede98fd785) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:54:55.609562 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:54:55.726530 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:54:55.727327 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:54:56.249334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:54:56.253003 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:54:56.258445 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:54:56.259302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:54:56.263336 kernel: ACPI: bus type drm_connector registered Dec 13 01:54:56.264119 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:54:56.264623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:54:56.267886 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:54:56.270594 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:54:56.273758 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:54:56.274199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:54:56.276969 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:56.279753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:54:56.283203 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:54:56.302416 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:54:56.313606 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:54:56.329492 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 13 01:54:56.332458 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:54:56.332527 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:54:56.338807 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:54:56.349425 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:54:56.359903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:54:56.362603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:56.375475 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:54:56.389548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:54:56.392193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:54:56.396678 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:54:56.399097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:54:56.410603 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:54:56.420618 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:54:56.426567 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:54:56.431683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:54:56.434620 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:54:56.442400 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:54:56.458440 systemd-journald[1584]: Time spent on flushing to /var/log/journal/ec2fd6aea23311729c1fc0ede98fd785 is 59.195ms for 896 entries. Dec 13 01:54:56.458440 systemd-journald[1584]: System Journal (/var/log/journal/ec2fd6aea23311729c1fc0ede98fd785) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:54:56.534658 systemd-journald[1584]: Received client request to flush runtime journal. Dec 13 01:54:56.534766 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 01:54:56.512819 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:54:56.515723 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:54:56.526765 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:54:56.545265 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:54:56.574801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:56.589352 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:54:56.621310 kernel: loop1: detected capacity change from 0 to 52536 Dec 13 01:54:56.644841 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:54:56.658432 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:54:56.663865 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
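systemd-journal-flush.service above asks journald to move the runtime journal out of /run/log/journal onto persistent storage, and the journald lines report the resulting runtime and system journal sizes. Equivalent manual checks, assuming the standard journalctl flags:

    journalctl --flush        # request the same runtime-to-persistent flush
    journalctl --disk-usage   # report the size of active and archived journals on disk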
Dec 13 01:54:56.668092 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:54:56.683058 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Dec 13 01:54:56.684021 systemd-tmpfiles[1615]: ACLs are not supported, ignoring. Dec 13 01:54:56.706729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:54:56.723438 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:54:56.735046 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:54:56.775289 kernel: loop2: detected capacity change from 0 to 114328 Dec 13 01:54:56.806320 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:54:56.821512 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:54:56.867726 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Dec 13 01:54:56.867768 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Dec 13 01:54:56.877115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:56.909293 kernel: loop3: detected capacity change from 0 to 114432 Dec 13 01:54:57.033466 kernel: loop4: detected capacity change from 0 to 194096 Dec 13 01:54:57.071294 kernel: loop5: detected capacity change from 0 to 52536 Dec 13 01:54:57.100287 kernel: loop6: detected capacity change from 0 to 114328 Dec 13 01:54:57.112290 kernel: loop7: detected capacity change from 0 to 114432 Dec 13 01:54:57.122450 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:54:57.125322 (sd-merge)[1644]: Merged extensions into '/usr'. Dec 13 01:54:57.134877 systemd[1]: Reloading requested from client PID 1614 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:54:57.134902 systemd[1]: Reloading... Dec 13 01:54:57.340292 zram_generator::config[1673]: No configuration found. Dec 13 01:54:57.639036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:57.760467 systemd[1]: Reloading finished in 624 ms. Dec 13 01:54:57.803372 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:54:57.807666 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:54:57.824653 systemd[1]: Starting ensure-sysext.service... Dec 13 01:54:57.834777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:54:57.848756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:57.871184 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:54:57.871228 systemd[1]: Reloading... Dec 13 01:54:57.902473 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:54:57.903144 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:54:57.918011 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
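The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images onto /usr, followed by a daemon reload. The merge state can be inspected or redone with the standard systemd-sysext verbs (a sketch, not taken from the log):

    systemd-sysext status    # list merged extension images and the hierarchies they cover
    systemd-sysext refresh   # unmerge and re-merge after images under /etc/extensions change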
Dec 13 01:54:57.921891 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Dec 13 01:54:57.922040 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Dec 13 01:54:57.932207 systemd-udevd[1724]: Using default interface naming scheme 'v255'. Dec 13 01:54:57.937565 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:54:57.937585 systemd-tmpfiles[1723]: Skipping /boot Dec 13 01:54:57.985728 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:54:57.985912 systemd-tmpfiles[1723]: Skipping /boot Dec 13 01:54:58.073276 zram_generator::config[1757]: No configuration found. Dec 13 01:54:58.277058 (udev-worker)[1780]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:54:58.325499 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1775) Dec 13 01:54:58.336505 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1775) Dec 13 01:54:58.398425 ldconfig[1609]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:54:58.471091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:58.593313 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1751) Dec 13 01:54:58.627545 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:54:58.628573 systemd[1]: Reloading finished in 756 ms. Dec 13 01:54:58.664656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:58.670785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:54:58.675450 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:58.744316 systemd[1]: Finished ensure-sysext.service. Dec 13 01:54:58.784226 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:54:58.789680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:54:58.793600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:54:58.798052 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:54:58.805483 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:54:58.813597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:54:58.819563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:54:58.821695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:54:58.824551 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:54:58.833483 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:54:58.842617 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:54:58.844636 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:54:58.850595 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Dec 13 01:54:58.857573 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:54:58.913211 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:54:58.952389 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:54:58.979755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:54:58.981555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:54:58.985498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:54:58.986620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:54:58.989965 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:54:59.014974 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:54:59.015434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:54:59.033092 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:54:59.033456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:54:59.041226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:54:59.078135 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:54:59.095409 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:54:59.110104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:54:59.113370 augenrules[1956]: No rules Dec 13 01:54:59.116285 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:54:59.117837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:54:59.129855 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:54:59.138637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:54:59.144587 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:54:59.145188 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:54:59.145828 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:54:59.213816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:59.228121 lvm[1963]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:54:59.232367 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:54:59.243936 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:54:59.298326 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:54:59.301203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:54:59.316595 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:54:59.332281 lvm[1978]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 01:54:59.350990 systemd-resolved[1923]: Positive Trust Anchors: Dec 13 01:54:59.351871 systemd-resolved[1923]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:54:59.354359 systemd-resolved[1923]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:54:59.365420 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:54:59.368385 systemd-resolved[1923]: Defaulting to hostname 'linux'. Dec 13 01:54:59.371502 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:54:59.373764 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:54:59.376008 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:54:59.378160 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:54:59.381255 systemd-networkd[1920]: lo: Link UP Dec 13 01:54:59.381504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:54:59.381905 systemd-networkd[1920]: lo: Gained carrier Dec 13 01:54:59.384657 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:54:59.385588 systemd-networkd[1920]: Enumeration completed Dec 13 01:54:59.386423 systemd-networkd[1920]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:59.386431 systemd-networkd[1920]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:54:59.387857 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:54:59.390378 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:54:59.392783 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:54:59.392840 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:54:59.394567 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:54:59.398001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:54:59.401004 systemd-networkd[1920]: eth0: Link UP Dec 13 01:54:59.402808 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:54:59.405938 systemd-networkd[1920]: eth0: Gained carrier Dec 13 01:54:59.405996 systemd-networkd[1920]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:54:59.418156 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:54:59.420391 systemd-networkd[1920]: eth0: DHCPv4 address 172.31.17.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:54:59.421211 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:54:59.423894 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
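At this point systemd-networkd has brought eth0 up with a DHCPv4 lease (172.31.17.98/20 via 172.31.16.1) and systemd-resolved has loaded its DNSSEC trust anchors and fallen back to the hostname 'linux'. Both daemons can be inspected with their standard CLIs (illustrative commands, not from the log):

    networkctl status eth0   # link state, addresses and the DHCPv4 lease shown above
    resolvectl status        # per-link DNS servers and the trust anchors in use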
Dec 13 01:54:59.426133 systemd[1]: Reached target network.target - Network. Dec 13 01:54:59.427798 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:54:59.429639 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:54:59.431716 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:59.431768 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:54:59.439395 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:54:59.457519 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:54:59.465550 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:54:59.472766 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:54:59.480656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:54:59.482843 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:54:59.486590 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:54:59.504946 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:54:59.511722 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:54:59.523584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:54:59.533608 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:54:59.545568 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:54:59.554642 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:54:59.558671 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:54:59.560537 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:54:59.563859 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:54:59.569628 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:54:59.611455 jq[1986]: false Dec 13 01:54:59.601218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:54:59.605832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:54:59.658985 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:54:59.661114 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:54:59.668326 (ntainerd)[2012]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:54:59.668038 dbus-daemon[1985]: [system] SELinux support is enabled Dec 13 01:54:59.668362 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:54:59.677298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: ---------------------------------------------------- Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: corporation. Support and training for ntp-4 are Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: available at https://www.nwtime.org/support Dec 13 01:54:59.696487 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: ---------------------------------------------------- Dec 13 01:54:59.695419 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:54:59.677346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:54:59.695467 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:54:59.679920 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:54:59.695488 ntpd[1989]: ---------------------------------------------------- Dec 13 01:54:59.679960 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:54:59.695506 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:54:59.695525 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:54:59.695547 ntpd[1989]: corporation. 
Support and training for ntp-4 are Dec 13 01:54:59.695566 ntpd[1989]: available at https://www.nwtime.org/support Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: proto: precision = 0.096 usec (-23) Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: basedate set to 2024-11-30 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listen normally on 3 eth0 172.31.17.98:123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: bind(21) AF_INET6 fe80::449:bcff:fe19:3e73%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: unable to create socket on eth0 (5) for fe80::449:bcff:fe19:3e73%2#123 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: failed to init interface for address fe80::449:bcff:fe19:3e73%2 Dec 13 01:54:59.718694 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Dec 13 01:54:59.695584 ntpd[1989]: ---------------------------------------------------- Dec 13 01:54:59.703570 ntpd[1989]: proto: precision = 0.096 usec (-23) Dec 13 01:54:59.703991 ntpd[1989]: basedate set to 2024-11-30 Dec 13 01:54:59.704016 ntpd[1989]: gps base set to 2024-12-01 (week 2343) Dec 13 01:54:59.713758 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:54:59.713839 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:54:59.714097 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:54:59.714159 ntpd[1989]: Listen normally on 3 eth0 172.31.17.98:123 Dec 13 01:54:59.714234 ntpd[1989]: Listen normally on 4 lo [::1]:123 Dec 13 01:54:59.717519 ntpd[1989]: bind(21) AF_INET6 fe80::449:bcff:fe19:3e73%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:54:59.717566 ntpd[1989]: unable to create socket on eth0 (5) for fe80::449:bcff:fe19:3e73%2#123 Dec 13 01:54:59.717598 ntpd[1989]: failed to init interface for address fe80::449:bcff:fe19:3e73%2 Dec 13 01:54:59.717660 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Dec 13 01:54:59.720226 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1920 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:54:59.721549 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:54:59.723582 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:54:59.751501 jq[1999]: true Dec 13 01:54:59.766560 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Dec 13 01:54:59.770012 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:59.770086 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:59.770271 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:59.770271 ntpd[1989]: 13 Dec 01:54:59 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:54:59.803309 extend-filesystems[1987]: Found loop4 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found loop5 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found loop6 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found loop7 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p1 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p2 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p3 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found usr Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p4 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p6 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p7 Dec 13 01:54:59.803309 extend-filesystems[1987]: Found nvme0n1p9 Dec 13 01:54:59.803309 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Dec 13 01:54:59.870391 jq[2023]: true Dec 13 01:54:59.875232 update_engine[1998]: I20241213 01:54:59.875000 1998 main.cc:92] Flatcar Update Engine starting Dec 13 01:54:59.892474 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:54:59.911329 update_engine[1998]: I20241213 01:54:59.907302 1998 update_check_scheduler.cc:74] Next update check in 5m58s Dec 13 01:54:59.919662 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:54:59.927220 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:54:59.952781 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
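update-engine and locksmithd together handle Flatcar's automatic updates and reboot coordination; the log shows the next update check scheduled in 5m58s and locksmithd, the cluster reboot manager, being started. On a running Flatcar host their state is typically queried with the client tools below (hedged; flags are the commonly documented ones):

    update_engine_client -status   # current update-engine operation and any staged new version
    locksmithctl status            # reboot-lock holders managed by locksmithd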
Dec 13 01:54:59.955400 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Dec 13 01:54:59.965627 extend-filesystems[2040]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:54:59.985317 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:55:00.016295 coreos-metadata[1984]: Dec 13 01:55:00.016 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:00.017812 coreos-metadata[1984]: Dec 13 01:55:00.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:55:00.019511 coreos-metadata[1984]: Dec 13 01:55:00.018 INFO Fetch successful Dec 13 01:55:00.019511 coreos-metadata[1984]: Dec 13 01:55:00.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:55:00.019511 coreos-metadata[1984]: Dec 13 01:55:00.019 INFO Fetch successful Dec 13 01:55:00.019511 coreos-metadata[1984]: Dec 13 01:55:00.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:55:00.024005 coreos-metadata[1984]: Dec 13 01:55:00.023 INFO Fetch successful Dec 13 01:55:00.024005 coreos-metadata[1984]: Dec 13 01:55:00.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:55:00.025355 coreos-metadata[1984]: Dec 13 01:55:00.024 INFO Fetch successful Dec 13 01:55:00.025355 coreos-metadata[1984]: Dec 13 01:55:00.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:55:00.028491 coreos-metadata[1984]: Dec 13 01:55:00.026 INFO Fetch failed with 404: resource not found Dec 13 01:55:00.028491 coreos-metadata[1984]: Dec 13 01:55:00.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:55:00.028491 coreos-metadata[1984]: Dec 13 01:55:00.027 INFO Fetch successful Dec 13 01:55:00.028491 coreos-metadata[1984]: Dec 13 01:55:00.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:55:00.034568 coreos-metadata[1984]: Dec 13 01:55:00.034 INFO Fetch successful Dec 13 01:55:00.034568 coreos-metadata[1984]: Dec 13 01:55:00.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:55:00.034568 coreos-metadata[1984]: Dec 13 01:55:00.034 INFO Fetch successful Dec 13 01:55:00.034568 coreos-metadata[1984]: Dec 13 01:55:00.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:55:00.041295 coreos-metadata[1984]: Dec 13 01:55:00.039 INFO Fetch successful Dec 13 01:55:00.041295 coreos-metadata[1984]: Dec 13 01:55:00.039 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:55:00.041295 coreos-metadata[1984]: Dec 13 01:55:00.039 INFO Fetch successful Dec 13 01:55:00.076301 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:55:00.145283 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1751) Dec 13 01:55:00.176407 extend-filesystems[2040]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:55:00.176407 extend-filesystems[2040]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:55:00.176407 extend-filesystems[2040]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:55:00.196895 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:55:00.183149 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
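The extend-filesystems entries show the root ext4 filesystem on /dev/nvme0n1p9 being grown online from 553472 to 1489915 blocks with resize2fs 1.47.1 after the partition was enlarged. The equivalent manual step on an already-enlarged partition would be (a sketch; the device name comes from the log):

    sudo resize2fs /dev/nvme0n1p9   # ext4 supports growing online while mounted on /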
Dec 13 01:55:00.211523 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:55:00.212706 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:55:00.219830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:55:00.220385 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:00.225351 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:55:00.244724 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:55:00.250926 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2021 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:55:00.265806 systemd[1]: Starting sshkeys.service... Dec 13 01:55:00.268458 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:55:00.286422 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:55:00.286478 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:55:00.286861 systemd-logind[1993]: New seat seat0. Dec 13 01:55:00.299607 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:55:00.302995 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:55:00.345022 polkitd[2095]: Started polkitd version 121 Dec 13 01:55:00.351105 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:55:00.374962 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:55:00.397310 polkitd[2095]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:55:00.397483 polkitd[2095]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:55:00.401115 polkitd[2095]: Finished loading, compiling and executing 2 rules Dec 13 01:55:00.418441 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:55:00.420537 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:55:00.424377 polkitd[2095]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:55:00.434295 containerd[2012]: time="2024-12-13T01:55:00.433105691Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:55:00.507715 systemd-hostnamed[2021]: Hostname set to (transient) Dec 13 01:55:00.509423 systemd-resolved[1923]: System hostname changed to 'ip-172-31-17-98'. Dec 13 01:55:00.529720 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:55:00.542186 containerd[2012]: time="2024-12-13T01:55:00.540015671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.543724 containerd[2012]: time="2024-12-13T01:55:00.543655151Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.543863471Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.543907499Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.544216415Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.544275335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.544407131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:00.544563 containerd[2012]: time="2024-12-13T01:55:00.544437911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.545088 containerd[2012]: time="2024-12-13T01:55:00.545051267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:00.545206 containerd[2012]: time="2024-12-13T01:55:00.545177123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.545336 containerd[2012]: time="2024-12-13T01:55:00.545305883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:00.545434 containerd[2012]: time="2024-12-13T01:55:00.545407499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.545684 containerd[2012]: time="2024-12-13T01:55:00.545656199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.546267 containerd[2012]: time="2024-12-13T01:55:00.546218183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:00.548221 containerd[2012]: time="2024-12-13T01:55:00.547774331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:00.548221 containerd[2012]: time="2024-12-13T01:55:00.547827647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:55:00.548221 containerd[2012]: time="2024-12-13T01:55:00.548056667Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:55:00.548221 containerd[2012]: time="2024-12-13T01:55:00.548156027Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:55:00.555210 containerd[2012]: time="2024-12-13T01:55:00.555153311Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:55:00.556907 containerd[2012]: time="2024-12-13T01:55:00.556341527Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 01:55:00.556907 containerd[2012]: time="2024-12-13T01:55:00.556484447Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:55:00.556907 containerd[2012]: time="2024-12-13T01:55:00.556524947Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:55:00.556907 containerd[2012]: time="2024-12-13T01:55:00.556558583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:55:00.556907 containerd[2012]: time="2024-12-13T01:55:00.556816523Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:55:00.557830 containerd[2012]: time="2024-12-13T01:55:00.557786063Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:55:00.558198 containerd[2012]: time="2024-12-13T01:55:00.558158363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.558991451Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559044419Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559080179Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559114643Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559201691Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559235903Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559294391Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559325219Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559356587Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559389431Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559431347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559464215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559506227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 01:55:00.560079 containerd[2012]: time="2024-12-13T01:55:00.559537499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559580435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559612187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559640267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559672655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559703099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559740035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559770287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559800599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559829663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559862519Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559915391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559945043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.560759 containerd[2012]: time="2024-12-13T01:55:00.559971827Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562094951Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562166627Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562196471Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562225163Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562270967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562307195Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562332671Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:55:00.563800 containerd[2012]: time="2024-12-13T01:55:00.562358507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:55:00.564221 containerd[2012]: time="2024-12-13T01:55:00.562891763Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:00.564221 containerd[2012]: time="2024-12-13T01:55:00.563010275Z" level=info msg="Connect containerd service" Dec 13 01:55:00.564221 containerd[2012]: time="2024-12-13T01:55:00.563075183Z" level=info msg="using legacy CRI server" Dec 13 01:55:00.564221 containerd[2012]: time="2024-12-13T01:55:00.563093135Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:00.565235 containerd[2012]: 
time="2024-12-13T01:55:00.564683087Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:00.566674 containerd[2012]: time="2024-12-13T01:55:00.566553143Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568408499Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568510775Z" level=info msg="Start recovering state" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568631507Z" level=info msg="Start event monitor" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568656179Z" level=info msg="Start snapshots syncer" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568683755Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.568702595Z" level=info msg="Start streaming server" Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.569700359Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.569813255Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:00.570268 containerd[2012]: time="2024-12-13T01:55:00.570107495Z" level=info msg="containerd successfully booted in 0.139906s" Dec 13 01:55:00.570228 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:55:00.664363 coreos-metadata[2108]: Dec 13 01:55:00.664 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:00.666384 coreos-metadata[2108]: Dec 13 01:55:00.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:55:00.668851 coreos-metadata[2108]: Dec 13 01:55:00.668 INFO Fetch successful Dec 13 01:55:00.668851 coreos-metadata[2108]: Dec 13 01:55:00.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:55:00.671261 coreos-metadata[2108]: Dec 13 01:55:00.669 INFO Fetch successful Dec 13 01:55:00.674276 unknown[2108]: wrote ssh authorized keys file for user: core Dec 13 01:55:00.696608 ntpd[1989]: bind(24) AF_INET6 fe80::449:bcff:fe19:3e73%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:55:00.696682 ntpd[1989]: unable to create socket on eth0 (6) for fe80::449:bcff:fe19:3e73%2#123 Dec 13 01:55:00.697129 ntpd[1989]: 13 Dec 01:55:00 ntpd[1989]: bind(24) AF_INET6 fe80::449:bcff:fe19:3e73%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:55:00.697129 ntpd[1989]: 13 Dec 01:55:00 ntpd[1989]: unable to create socket on eth0 (6) for fe80::449:bcff:fe19:3e73%2#123 Dec 13 01:55:00.697129 ntpd[1989]: 13 Dec 01:55:00 ntpd[1989]: failed to init interface for address fe80::449:bcff:fe19:3e73%2 Dec 13 01:55:00.696712 ntpd[1989]: failed to init interface for address fe80::449:bcff:fe19:3e73%2 Dec 13 01:55:00.757905 update-ssh-keys[2184]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:00.767441 systemd-networkd[1920]: eth0: Gained IPv6LL Dec 13 01:55:00.768537 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Dec 13 01:55:00.785155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:55:00.789379 systemd[1]: Finished sshkeys.service. Dec 13 01:55:00.792896 sshd_keygen[2022]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:55:00.798653 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:55:00.809737 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:55:00.820624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:00.831727 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:55:00.893780 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:00.903780 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:00.913064 systemd[1]: Started sshd@0-172.31.17.98:22-139.178.68.195:37486.service - OpenSSH per-connection server daemon (139.178.68.195:37486). Dec 13 01:55:00.948089 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:00.949765 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:00.963639 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:00.970931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:55:01.010271 amazon-ssm-agent[2193]: Initializing new seelog logger Dec 13 01:55:01.010271 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Dec 13 01:55:01.010271 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.010271 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.010271 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 processing appconfig overrides Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 processing appconfig overrides Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.016081 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 processing appconfig overrides Dec 13 01:55:01.017100 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:01.018884 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO Proxy environment variables: Dec 13 01:55:01.023652 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.023830 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:01.024675 amazon-ssm-agent[2193]: 2024/12/13 01:55:01 processing appconfig overrides Dec 13 01:55:01.029077 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:55:01.037837 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:55:01.040775 systemd[1]: Reached target getty.target - Login Prompts. 
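sshd-keygen.service above reports generating the RSA, ECDSA and ED25519 host keys on first boot. A hedged sketch of the roughly equivalent manual step, assuming the default key locations under /etc/ssh:

    # ssh-keygen -A creates any missing host key types at their default paths
    ssh-keygen -A
    ls /etc/ssh/ssh_host_*_key.pub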
Dec 13 01:55:01.118027 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO https_proxy: Dec 13 01:55:01.214726 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 37486 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:01.219568 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO http_proxy: Dec 13 01:55:01.221129 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:01.245526 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:01.263768 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:01.278541 systemd-logind[1993]: New session 1 of user core. Dec 13 01:55:01.299303 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:01.313699 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:01.318640 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO no_proxy: Dec 13 01:55:01.335312 (systemd)[2225]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:01.420275 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:55:01.517071 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:55:01.565713 systemd[2225]: Queued start job for default target default.target. Dec 13 01:55:01.574268 systemd[2225]: Created slice app.slice - User Application Slice. Dec 13 01:55:01.574331 systemd[2225]: Reached target paths.target - Paths. Dec 13 01:55:01.574364 systemd[2225]: Reached target timers.target - Timers. Dec 13 01:55:01.579534 systemd[2225]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:01.617343 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO Agent will take identity from EC2 Dec 13 01:55:01.620866 systemd[2225]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:01.620993 systemd[2225]: Reached target sockets.target - Sockets. Dec 13 01:55:01.621025 systemd[2225]: Reached target basic.target - Basic System. Dec 13 01:55:01.621109 systemd[2225]: Reached target default.target - Main User Target. Dec 13 01:55:01.621172 systemd[2225]: Startup finished in 262ms. Dec 13 01:55:01.621193 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:55:01.633570 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:55:01.717271 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:01.810813 systemd[1]: Started sshd@1-172.31.17.98:22-139.178.68.195:37490.service - OpenSSH per-connection server daemon (139.178.68.195:37490). Dec 13 01:55:01.819128 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:01.917780 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:02.017145 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:55:02.043650 sshd[2239]: Accepted publickey for core from 139.178.68.195 port 37490 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:02.046503 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:02.065630 systemd-logind[1993]: New session 2 of user core. 
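The login above sets up the usual systemd session plumbing: a user-500.slice, a runtime directory, a per-user manager (user@500.service) and a session scope. A few commands one could run on the host to inspect exactly the objects named in those lines (the session number and UID are the ones from the log):

    loginctl list-sessions                    # session 1 for user core
    loginctl show-session 1 -p State -p Name  # details of that session
    systemctl status user@500.service         # the per-user service manager
    systemd-cgls /user.slice/user-500.slice   # slice containing user@500.service and session scopes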
Dec 13 01:55:02.070933 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:02.117954 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:55:02.177105 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:55:02.177105 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:55:02.177105 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [Registrar] Starting registrar module Dec 13 01:55:02.177331 amazon-ssm-agent[2193]: 2024-12-13 01:55:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:55:02.177331 amazon-ssm-agent[2193]: 2024-12-13 01:55:02 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:55:02.177331 amazon-ssm-agent[2193]: 2024-12-13 01:55:02 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:55:02.177331 amazon-ssm-agent[2193]: 2024-12-13 01:55:02 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:55:02.177331 amazon-ssm-agent[2193]: 2024-12-13 01:55:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:55:02.206797 sshd[2239]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:02.214430 systemd[1]: sshd@1-172.31.17.98:22-139.178.68.195:37490.service: Deactivated successfully. Dec 13 01:55:02.218416 amazon-ssm-agent[2193]: 2024-12-13 01:55:02 INFO [CredentialRefresher] Next credential rotation will be in 32.29165789766667 minutes Dec 13 01:55:02.220432 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:55:02.223951 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:55:02.227409 systemd-logind[1993]: Removed session 2. Dec 13 01:55:02.252666 systemd[1]: Started sshd@2-172.31.17.98:22-139.178.68.195:37496.service - OpenSSH per-connection server daemon (139.178.68.195:37496). Dec 13 01:55:02.302101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:02.305232 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:55:02.308566 systemd[1]: Startup finished in 1.231s (kernel) + 9.613s (initrd) + 7.944s (userspace) = 18.789s. Dec 13 01:55:02.315427 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:02.444028 sshd[2246]: Accepted publickey for core from 139.178.68.195 port 37496 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:02.448347 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:02.458597 systemd-logind[1993]: New session 3 of user core. Dec 13 01:55:02.470579 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:02.603518 sshd[2246]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:02.615734 systemd[1]: sshd@2-172.31.17.98:22-139.178.68.195:37496.service: Deactivated successfully. Dec 13 01:55:02.620603 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:55:02.622726 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:55:02.625315 systemd-logind[1993]: Removed session 3. 
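The kubelet unit above warns that KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS are referenced but unset; the unit presumably expands them in its ExecStart line, so empty values are harmless. A hypothetical drop-in that would define one of them; the drop-in file name and the --node-ip flag are assumptions, the IP is simply this node's address from the log:

    # assumed drop-in name, not a file that exists on this host
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--node-ip=172.31.17.98"
    EOF
    systemctl daemon-reload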
Dec 13 01:55:03.073548 kubelet[2253]: E1213 01:55:03.073469 2253 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:03.078896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:03.079226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:03.079778 systemd[1]: kubelet.service: Consumed 1.327s CPU time. Dec 13 01:55:03.206123 amazon-ssm-agent[2193]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:55:03.307387 amazon-ssm-agent[2193]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2270) started Dec 13 01:55:03.408255 amazon-ssm-agent[2193]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:55:03.696150 ntpd[1989]: Listen normally on 7 eth0 [fe80::449:bcff:fe19:3e73%2]:123 Dec 13 01:55:03.696905 ntpd[1989]: 13 Dec 01:55:03 ntpd[1989]: Listen normally on 7 eth0 [fe80::449:bcff:fe19:3e73%2]:123 Dec 13 01:55:07.178497 systemd-resolved[1923]: Clock change detected. Flushing caches. Dec 13 01:55:13.126895 systemd[1]: Started sshd@3-172.31.17.98:22-139.178.68.195:42328.service - OpenSSH per-connection server daemon (139.178.68.195:42328). Dec 13 01:55:13.308366 sshd[2281]: Accepted publickey for core from 139.178.68.195 port 42328 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.310963 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.319351 systemd-logind[1993]: New session 4 of user core. Dec 13 01:55:13.329642 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:55:13.457951 sshd[2281]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.463122 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:13.463898 systemd[1]: sshd@3-172.31.17.98:22-139.178.68.195:42328.service: Deactivated successfully. Dec 13 01:55:13.467361 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:13.471357 systemd-logind[1993]: Removed session 4. Dec 13 01:55:13.495889 systemd[1]: Started sshd@4-172.31.17.98:22-139.178.68.195:42338.service - OpenSSH per-connection server daemon (139.178.68.195:42338). Dec 13 01:55:13.635403 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:13.646732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:13.670350 sshd[2288]: Accepted publickey for core from 139.178.68.195 port 42338 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.672048 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.680837 systemd-logind[1993]: New session 5 of user core. Dec 13 01:55:13.691648 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:55:13.815726 sshd[2288]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.821984 systemd[1]: sshd@4-172.31.17.98:22-139.178.68.195:42338.service: Deactivated successfully. 
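The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet, and the restart loop continues until something writes that file. Purely as an illustration of its shape, a minimal KubeletConfiguration; every value below is an assumption, chosen to match details elsewhere in this log (systemd cgroup driver, containerd socket, static pod path):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    EOF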
Dec 13 01:55:13.827005 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:13.830833 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:55:13.832794 systemd-logind[1993]: Removed session 5. Dec 13 01:55:13.855114 systemd[1]: Started sshd@5-172.31.17.98:22-139.178.68.195:42340.service - OpenSSH per-connection server daemon (139.178.68.195:42340). Dec 13 01:55:14.015707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:14.024904 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:14.028446 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 42340 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.032639 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.041868 systemd-logind[1993]: New session 6 of user core. Dec 13 01:55:14.050755 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:55:14.127921 kubelet[2305]: E1213 01:55:14.127748 2305 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:14.135642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:14.135969 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:14.182704 sshd[2298]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.190313 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:55:14.191182 systemd[1]: sshd@5-172.31.17.98:22-139.178.68.195:42340.service: Deactivated successfully. Dec 13 01:55:14.195032 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:55:14.198173 systemd-logind[1993]: Removed session 6. Dec 13 01:55:14.225907 systemd[1]: Started sshd@6-172.31.17.98:22-139.178.68.195:42350.service - OpenSSH per-connection server daemon (139.178.68.195:42350). Dec 13 01:55:14.400473 sshd[2318]: Accepted publickey for core from 139.178.68.195 port 42350 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.403040 sshd[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.411482 systemd-logind[1993]: New session 7 of user core. Dec 13 01:55:14.421654 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:55:14.538204 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:55:14.538875 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.557917 sudo[2321]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:14.582206 sshd[2318]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.588333 systemd[1]: sshd@6-172.31.17.98:22-139.178.68.195:42350.service: Deactivated successfully. Dec 13 01:55:14.592084 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:55:14.594690 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:55:14.596525 systemd-logind[1993]: Removed session 7. 
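The sudo entry above runs setenforce 1, which switches SELinux to enforcing mode for the running kernel only. Roughly, checking the current and persistent mode looks like this (the config path is the stock location, not taken from this log):

    getenforce                            # Enforcing / Permissive / Disabled
    setenforce 1                          # runtime change, lost at reboot
    grep '^SELINUX=' /etc/selinux/config  # persistent mode read at boot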
Dec 13 01:55:14.625863 systemd[1]: Started sshd@7-172.31.17.98:22-139.178.68.195:42354.service - OpenSSH per-connection server daemon (139.178.68.195:42354). Dec 13 01:55:14.794142 sshd[2326]: Accepted publickey for core from 139.178.68.195 port 42354 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.797680 sshd[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.807693 systemd-logind[1993]: New session 8 of user core. Dec 13 01:55:14.818659 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:55:14.926367 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:55:14.927113 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.934464 sudo[2330]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:14.944833 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:55:14.945974 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.969899 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:14.974495 auditctl[2333]: No rules Dec 13 01:55:14.976738 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:55:14.978467 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:14.990503 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:15.041821 augenrules[2351]: No rules Dec 13 01:55:15.044636 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:15.048225 sudo[2329]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:15.072774 sshd[2326]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:15.078013 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:55:15.079487 systemd[1]: sshd@7-172.31.17.98:22-139.178.68.195:42354.service: Deactivated successfully. Dec 13 01:55:15.082421 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:55:15.086506 systemd-logind[1993]: Removed session 8. Dec 13 01:55:15.112912 systemd[1]: Started sshd@8-172.31.17.98:22-139.178.68.195:42356.service - OpenSSH per-connection server daemon (139.178.68.195:42356). Dec 13 01:55:15.276127 sshd[2359]: Accepted publickey for core from 139.178.68.195 port 42356 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:15.279058 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:15.286793 systemd-logind[1993]: New session 9 of user core. Dec 13 01:55:15.297650 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:55:15.401717 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:15.402344 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:16.411011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:16.422870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:16.463323 systemd[1]: Reloading requested from client PID 2400 ('systemctl') (unit session-9.scope)... Dec 13 01:55:16.463354 systemd[1]: Reloading... Dec 13 01:55:16.683080 zram_generator::config[2440]: No configuration found. 
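The audit-rules restart above boils down to: the two rule files under /etc/audit/rules.d were removed, augenrules merged whatever remained (nothing), and auditctl confirmed an empty kernel rule set. The same check by hand, as a sketch:

    augenrules --load   # merge /etc/audit/rules.d/*.rules and load the result
    auditctl -l         # list active rules; prints "No rules" here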
Dec 13 01:55:16.932760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:17.101176 systemd[1]: Reloading finished in 637 ms. Dec 13 01:55:17.203492 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:55:17.203736 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:55:17.204419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:17.212947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:17.620866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:17.637878 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:17.720308 kubelet[2503]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:17.720308 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:17.720308 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:17.724758 kubelet[2503]: I1213 01:55:17.724677 2503 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:18.165074 kubelet[2503]: I1213 01:55:18.165010 2503 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:55:18.165074 kubelet[2503]: I1213 01:55:18.165057 2503 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:18.165503 kubelet[2503]: I1213 01:55:18.165461 2503 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:55:18.193226 kubelet[2503]: I1213 01:55:18.192981 2503 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:18.206509 kubelet[2503]: I1213 01:55:18.206473 2503 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:55:18.209339 kubelet[2503]: I1213 01:55:18.208471 2503 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:18.209339 kubelet[2503]: I1213 01:55:18.208546 2503 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.17.98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:55:18.209339 kubelet[2503]: I1213 01:55:18.208844 2503 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:18.209339 kubelet[2503]: I1213 01:55:18.208864 2503 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:55:18.209339 kubelet[2503]: I1213 01:55:18.209098 2503 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:18.211038 kubelet[2503]: I1213 01:55:18.211005 2503 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:55:18.211208 kubelet[2503]: I1213 01:55:18.211183 2503 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:18.211469 kubelet[2503]: I1213 01:55:18.211448 2503 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:55:18.211614 kubelet[2503]: I1213 01:55:18.211594 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:18.211732 kubelet[2503]: E1213 01:55:18.211652 2503 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:18.211821 kubelet[2503]: E1213 01:55:18.211787 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:18.213593 kubelet[2503]: I1213 01:55:18.213545 2503 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:18.213978 kubelet[2503]: I1213 01:55:18.213935 2503 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:18.214053 kubelet[2503]: W1213 01:55:18.214015 2503 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:55:18.215147 kubelet[2503]: I1213 01:55:18.215105 2503 server.go:1264] "Started kubelet" Dec 13 01:55:18.217418 kubelet[2503]: I1213 01:55:18.215573 2503 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:18.217418 kubelet[2503]: I1213 01:55:18.217225 2503 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:55:18.217931 kubelet[2503]: I1213 01:55:18.217903 2503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:18.218960 kubelet[2503]: I1213 01:55:18.218879 2503 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:18.222412 kubelet[2503]: I1213 01:55:18.220775 2503 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:18.231290 kubelet[2503]: I1213 01:55:18.231250 2503 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:55:18.240754 kubelet[2503]: I1213 01:55:18.240707 2503 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:55:18.245365 kubelet[2503]: I1213 01:55:18.245331 2503 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:55:18.256133 kubelet[2503]: I1213 01:55:18.256082 2503 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:18.260792 kubelet[2503]: E1213 01:55:18.256569 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.17.98\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 01:55:18.263085 kubelet[2503]: E1213 01:55:18.263023 2503 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:18.265238 kubelet[2503]: I1213 01:55:18.265167 2503 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:18.265794 kubelet[2503]: I1213 01:55:18.265754 2503 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:18.285785 kubelet[2503]: E1213 01:55:18.285525 2503 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.17.98.181099c2bd3a67ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.17.98,UID:172.31.17.98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.17.98,},FirstTimestamp:2024-12-13 01:55:18.21506763 +0000 UTC m=+0.571011208,LastTimestamp:2024-12-13 01:55:18.21506763 +0000 UTC m=+0.571011208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.17.98,}" Dec 13 01:55:18.286811 kubelet[2503]: W1213 01:55:18.286764 2503 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:55:18.287002 kubelet[2503]: E1213 01:55:18.286979 2503 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:55:18.287812 kubelet[2503]: W1213 01:55:18.287766 2503 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:55:18.288795 kubelet[2503]: E1213 01:55:18.288767 2503 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:55:18.290955 kubelet[2503]: W1213 01:55:18.288616 2503 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.17.98" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:55:18.290955 kubelet[2503]: E1213 01:55:18.290912 2503 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.17.98" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:55:18.296200 kubelet[2503]: I1213 01:55:18.295947 2503 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:18.296200 kubelet[2503]: I1213 01:55:18.295979 2503 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:18.296200 kubelet[2503]: I1213 01:55:18.296178 2503 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:18.304260 kubelet[2503]: I1213 01:55:18.303543 2503 policy_none.go:49] "None policy: Start" Dec 13 01:55:18.308355 
kubelet[2503]: I1213 01:55:18.308311 2503 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:18.309129 kubelet[2503]: I1213 01:55:18.308595 2503 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:18.325164 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:55:18.335452 kubelet[2503]: I1213 01:55:18.334254 2503 kubelet_node_status.go:73] "Attempting to register node" node="172.31.17.98" Dec 13 01:55:18.341319 kubelet[2503]: I1213 01:55:18.341116 2503 kubelet_node_status.go:76] "Successfully registered node" node="172.31.17.98" Dec 13 01:55:18.353733 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:55:18.359691 kubelet[2503]: I1213 01:55:18.359597 2503 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:18.362566 kubelet[2503]: I1213 01:55:18.362494 2503 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:55:18.362725 kubelet[2503]: I1213 01:55:18.362596 2503 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:18.362725 kubelet[2503]: I1213 01:55:18.362632 2503 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:55:18.362725 kubelet[2503]: E1213 01:55:18.362711 2503 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:18.372333 kubelet[2503]: E1213 01:55:18.372178 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.377155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:55:18.381263 kubelet[2503]: I1213 01:55:18.381213 2503 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:18.383134 kubelet[2503]: I1213 01:55:18.382144 2503 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:55:18.384194 kubelet[2503]: I1213 01:55:18.384123 2503 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:18.386517 kubelet[2503]: E1213 01:55:18.386483 2503 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.17.98\" not found" Dec 13 01:55:18.472878 kubelet[2503]: E1213 01:55:18.472726 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.574081 kubelet[2503]: E1213 01:55:18.574028 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.652711 sudo[2362]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:18.675275 kubelet[2503]: E1213 01:55:18.675185 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.677754 sshd[2359]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:18.683783 systemd[1]: sshd@8-172.31.17.98:22-139.178.68.195:42356.service: Deactivated successfully. Dec 13 01:55:18.688156 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:55:18.692222 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:55:18.694681 systemd-logind[1993]: Removed session 9. 
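The repeated "system:anonymous ... forbidden" and "node not found" errors above are expected while the kubelet is still bootstrapping its client certificate ("Client rotation is on" earlier); once a certificate is issued it typically authenticates as a system:node identity rather than anonymous. Two hedged ways to observe that transition, assuming the default kubelet PKI path and that kubectl is run with admin credentials:

    openssl x509 -noout -subject -in /var/lib/kubelet/pki/kubelet-client-current.pem
    kubectl get csr    # shows the kubelet's certificate signing requests and their approval state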
Dec 13 01:55:18.775741 kubelet[2503]: E1213 01:55:18.775557 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.876329 kubelet[2503]: E1213 01:55:18.876271 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:18.976861 kubelet[2503]: E1213 01:55:18.976823 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.077560 kubelet[2503]: E1213 01:55:19.077506 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.169136 kubelet[2503]: I1213 01:55:19.169085 2503 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:55:19.169360 kubelet[2503]: W1213 01:55:19.169260 2503 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:55:19.169360 kubelet[2503]: W1213 01:55:19.169326 2503 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:55:19.178545 kubelet[2503]: E1213 01:55:19.178511 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.211922 kubelet[2503]: E1213 01:55:19.211873 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:19.278812 kubelet[2503]: E1213 01:55:19.278755 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.379981 kubelet[2503]: E1213 01:55:19.379847 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.480662 kubelet[2503]: E1213 01:55:19.480612 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.580761 kubelet[2503]: E1213 01:55:19.580676 2503 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.17.98\" not found" Dec 13 01:55:19.682296 kubelet[2503]: I1213 01:55:19.681677 2503 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:55:19.682447 containerd[2012]: time="2024-12-13T01:55:19.682084066Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
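The CNI messages above (and the earlier "no network config found in /etc/cni/net.d") mean the kubelet and containerd are waiting for a network plugin to drop a config file; the Cilium pod created just below is the component expected to do that. Purely illustrative, the file name and contents here are assumptions about what such a conflist looks like, not what Cilium actually writes on this host:

    cat <<'EOF' >/etc/cni/net.d/05-cilium.conflist
    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "plugins": [
        { "type": "cilium-cni" }
      ]
    }
    EOF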
Dec 13 01:55:19.682957 kubelet[2503]: I1213 01:55:19.682855 2503 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:55:20.212223 kubelet[2503]: E1213 01:55:20.212152 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:20.213445 kubelet[2503]: I1213 01:55:20.213309 2503 apiserver.go:52] "Watching apiserver" Dec 13 01:55:20.219687 kubelet[2503]: I1213 01:55:20.219433 2503 topology_manager.go:215] "Topology Admit Handler" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" podNamespace="kube-system" podName="cilium-6m6sn" Dec 13 01:55:20.219687 kubelet[2503]: I1213 01:55:20.219658 2503 topology_manager.go:215] "Topology Admit Handler" podUID="bf81b659-d146-40f1-8766-60eb56b102cf" podNamespace="kube-system" podName="kube-proxy-gqqf6" Dec 13 01:55:20.241249 systemd[1]: Created slice kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice - libcontainer container kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice. Dec 13 01:55:20.242965 kubelet[2503]: I1213 01:55:20.241628 2503 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:55:20.245406 kubelet[2503]: W1213 01:55:20.245323 2503 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice/cpuset.cpus.effective: no such device Dec 13 01:55:20.257759 kubelet[2503]: I1213 01:55:20.257704 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-kernel\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.258034 kubelet[2503]: I1213 01:55:20.258002 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zm4n\" (UniqueName: \"kubernetes.io/projected/bf81b659-d146-40f1-8766-60eb56b102cf-kube-api-access-7zm4n\") pod \"kube-proxy-gqqf6\" (UID: \"bf81b659-d146-40f1-8766-60eb56b102cf\") " pod="kube-system/kube-proxy-gqqf6" Dec 13 01:55:20.258321 kubelet[2503]: I1213 01:55:20.258292 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-hostproc\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.258498 kubelet[2503]: I1213 01:55:20.258473 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-lib-modules\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.258836 kubelet[2503]: I1213 01:55:20.258679 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-config-path\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 
01:55:20.258836 kubelet[2503]: I1213 01:55:20.258723 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-run\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.258836 kubelet[2503]: I1213 01:55:20.258777 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-etc-cni-netd\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.258836 kubelet[2503]: I1213 01:55:20.258813 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf81b659-d146-40f1-8766-60eb56b102cf-xtables-lock\") pod \"kube-proxy-gqqf6\" (UID: \"bf81b659-d146-40f1-8766-60eb56b102cf\") " pod="kube-system/kube-proxy-gqqf6" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259116 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf81b659-d146-40f1-8766-60eb56b102cf-kube-proxy\") pod \"kube-proxy-gqqf6\" (UID: \"bf81b659-d146-40f1-8766-60eb56b102cf\") " pod="kube-system/kube-proxy-gqqf6" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259168 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf81b659-d146-40f1-8766-60eb56b102cf-lib-modules\") pod \"kube-proxy-gqqf6\" (UID: \"bf81b659-d146-40f1-8766-60eb56b102cf\") " pod="kube-system/kube-proxy-gqqf6" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259203 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-bpf-maps\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259238 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-cgroup\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259278 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khw6j\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-kube-api-access-khw6j\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260000 kubelet[2503]: I1213 01:55:20.259318 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-net\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.259821 systemd[1]: Created slice kubepods-besteffort-podbf81b659_d146_40f1_8766_60eb56b102cf.slice - libcontainer container 
kubepods-besteffort-podbf81b659_d146_40f1_8766_60eb56b102cf.slice. Dec 13 01:55:20.260583 kubelet[2503]: I1213 01:55:20.259355 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-hubble-tls\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260583 kubelet[2503]: I1213 01:55:20.259422 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cni-path\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260583 kubelet[2503]: I1213 01:55:20.259460 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-xtables-lock\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.260583 kubelet[2503]: I1213 01:55:20.259497 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f911ea9a-0877-4aa6-9610-94ec110afb5a-clustermesh-secrets\") pod \"cilium-6m6sn\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " pod="kube-system/cilium-6m6sn" Dec 13 01:55:20.556084 containerd[2012]: time="2024-12-13T01:55:20.555990274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6m6sn,Uid:f911ea9a-0877-4aa6-9610-94ec110afb5a,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:20.571911 containerd[2012]: time="2024-12-13T01:55:20.571848970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqqf6,Uid:bf81b659-d146-40f1-8766-60eb56b102cf,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:21.174234 containerd[2012]: time="2024-12-13T01:55:21.174054573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:21.176172 containerd[2012]: time="2024-12-13T01:55:21.176075217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:21.178163 containerd[2012]: time="2024-12-13T01:55:21.178098549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:55:21.178946 containerd[2012]: time="2024-12-13T01:55:21.178887681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:55:21.181034 containerd[2012]: time="2024-12-13T01:55:21.180920757Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:55:21.187725 containerd[2012]: time="2024-12-13T01:55:21.187601193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 
01:55:21.191587 containerd[2012]: time="2024-12-13T01:55:21.191028645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.861767ms" Dec 13 01:55:21.194339 containerd[2012]: time="2024-12-13T01:55:21.194270445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 622.283607ms" Dec 13 01:55:21.213167 kubelet[2503]: E1213 01:55:21.213101 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:21.355300 containerd[2012]: time="2024-12-13T01:55:21.354946714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:21.355300 containerd[2012]: time="2024-12-13T01:55:21.355042762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:21.355300 containerd[2012]: time="2024-12-13T01:55:21.355109638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.357006 containerd[2012]: time="2024-12-13T01:55:21.356877910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.362079 containerd[2012]: time="2024-12-13T01:55:21.361783738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:21.362079 containerd[2012]: time="2024-12-13T01:55:21.361887046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:21.362634 containerd[2012]: time="2024-12-13T01:55:21.362309050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.364538 containerd[2012]: time="2024-12-13T01:55:21.364118650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.384821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333975062.mount: Deactivated successfully. Dec 13 01:55:21.478189 systemd[1]: run-containerd-runc-k8s.io-6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69-runc.SXLOx6.mount: Deactivated successfully. Dec 13 01:55:21.496738 systemd[1]: Started cri-containerd-4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7.scope - libcontainer container 4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7. Dec 13 01:55:21.501348 systemd[1]: Started cri-containerd-6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69.scope - libcontainer container 6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69. 
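The run of reconciler_common.go entries above records the kubelet verifying every volume the cilium-6m6sn and kube-proxy-gqqf6 pods declare: host paths, the cilium ConfigMap, the clustermesh secret, and projected service-account tokens. Below is a minimal Python sketch (a hypothetical helper, not tooling referenced in this journal) that groups those entries by pod; it assumes the escaped-quote form shown above.

```python
import re

# Matches the kubelet's escaped-quote form used in this journal:
#   ... started for volume \"cilium-run\" ... pod="kube-system/cilium-6m6sn"
ATTACH = re.compile(
    r'VerifyControllerAttachedVolume started for volume \\"(?P<volume>[^"\\]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"'
)

def volumes_by_pod(lines):
    """Group volume-attach entries by the pod that owns them."""
    pods = {}
    for line in lines:
        for m in ATTACH.finditer(line):
            pods.setdefault(m.group("pod"), []).append(m.group("volume"))
    return pods

# Fed the entries above, this yields, for example,
#   "kube-system/kube-proxy-gqqf6" -> ["kube-api-access-7zm4n", "xtables-lock",
#                                      "kube-proxy", "lib-modules"]
```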
Dec 13 01:55:21.558858 containerd[2012]: time="2024-12-13T01:55:21.558704651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6m6sn,Uid:f911ea9a-0877-4aa6-9610-94ec110afb5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\"" Dec 13 01:55:21.566733 containerd[2012]: time="2024-12-13T01:55:21.566681603Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:55:21.576135 containerd[2012]: time="2024-12-13T01:55:21.576063203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gqqf6,Uid:bf81b659-d146-40f1-8766-60eb56b102cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69\"" Dec 13 01:55:22.214272 kubelet[2503]: E1213 01:55:22.214212 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:23.215016 kubelet[2503]: E1213 01:55:23.214937 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:24.215297 kubelet[2503]: E1213 01:55:24.215242 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:25.215801 kubelet[2503]: E1213 01:55:25.215728 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:26.218238 kubelet[2503]: E1213 01:55:26.218137 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:27.219179 kubelet[2503]: E1213 01:55:27.219102 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:28.219785 kubelet[2503]: E1213 01:55:28.219562 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:29.220761 kubelet[2503]: E1213 01:55:29.220536 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:30.221869 kubelet[2503]: E1213 01:55:30.221768 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:30.794132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652812398.mount: Deactivated successfully. Dec 13 01:55:30.998123 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
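The file_linux.go:61 error that repeats roughly once per second from here to the end of the section ("Unable to read config path ... /etc/kubernetes/manifests") comes from the kubelet's static-pod file source: the configured manifest directory does not exist on this worker node, so every sync logs the same ignorable message. When triaging a journal like this, one quick way to rank such repeated messages is the stdlib-only sketch below; the prefix-stripping regex is an assumption tailored to the klog format shown here.

```python
import re
from collections import Counter

# Drop the per-entry prefix, e.g.
#   "Dec 13 01:55:20.212223 kubelet[2503]: E1213 01:55:20.212152 2503 "
# so identical messages logged at different times collapse onto one key.
PREFIX = re.compile(r"^.*?\]:\s+[EWI]\d{4}\s+\S+\s+\d+\s+")

def noisiest(entries, top=5):
    """Count identical kubelet messages after removing timestamps and PIDs."""
    counts = Counter(PREFIX.sub("", entry).strip() for entry in entries)
    return counts.most_common(top)

# On this section the clear winner is:
#   file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring"
#   path="/etc/kubernetes/manifests"
```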
Dec 13 01:55:31.224799 kubelet[2503]: E1213 01:55:31.223532 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:32.224075 kubelet[2503]: E1213 01:55:32.223996 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:33.224349 kubelet[2503]: E1213 01:55:33.224301 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:33.304956 containerd[2012]: time="2024-12-13T01:55:33.304859397Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:33.306861 containerd[2012]: time="2024-12-13T01:55:33.306769557Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650910" Dec 13 01:55:33.308686 containerd[2012]: time="2024-12-13T01:55:33.308596377Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:33.312717 containerd[2012]: time="2024-12-13T01:55:33.312426549Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.745386758s" Dec 13 01:55:33.312717 containerd[2012]: time="2024-12-13T01:55:33.312518649Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:55:33.316471 containerd[2012]: time="2024-12-13T01:55:33.316137117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:55:33.318544 containerd[2012]: time="2024-12-13T01:55:33.318184209Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:55:33.348430 containerd[2012]: time="2024-12-13T01:55:33.347983149Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\"" Dec 13 01:55:33.349364 containerd[2012]: time="2024-12-13T01:55:33.349228989Z" level=info msg="StartContainer for \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\"" Dec 13 01:55:33.414728 systemd[1]: Started cri-containerd-6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2.scope - libcontainer container 6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2. 
Dec 13 01:55:33.465835 containerd[2012]: time="2024-12-13T01:55:33.465761998Z" level=info msg="StartContainer for \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\" returns successfully" Dec 13 01:55:33.486848 systemd[1]: cri-containerd-6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2.scope: Deactivated successfully. Dec 13 01:55:34.224952 kubelet[2503]: E1213 01:55:34.224894 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:34.335547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2-rootfs.mount: Deactivated successfully. Dec 13 01:55:35.225978 kubelet[2503]: E1213 01:55:35.225908 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:35.521691 containerd[2012]: time="2024-12-13T01:55:35.519813996Z" level=info msg="shim disconnected" id=6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2 namespace=k8s.io Dec 13 01:55:35.525128 containerd[2012]: time="2024-12-13T01:55:35.523194192Z" level=warning msg="cleaning up after shim disconnected" id=6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2 namespace=k8s.io Dec 13 01:55:35.525128 containerd[2012]: time="2024-12-13T01:55:35.523246332Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:36.226719 kubelet[2503]: E1213 01:55:36.226630 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:36.254007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831468179.mount: Deactivated successfully. Dec 13 01:55:36.453591 containerd[2012]: time="2024-12-13T01:55:36.452793061Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:55:36.496154 containerd[2012]: time="2024-12-13T01:55:36.495976297Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\"" Dec 13 01:55:36.499330 containerd[2012]: time="2024-12-13T01:55:36.498585001Z" level=info msg="StartContainer for \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\"" Dec 13 01:55:36.595434 systemd[1]: Started cri-containerd-102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172.scope - libcontainer container 102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172. Dec 13 01:55:36.678094 containerd[2012]: time="2024-12-13T01:55:36.677995202Z" level=info msg="StartContainer for \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\" returns successfully" Dec 13 01:55:36.711167 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:55:36.713325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:55:36.713876 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:55:36.725001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:55:36.725554 systemd[1]: cri-containerd-102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172.scope: Deactivated successfully. 
Dec 13 01:55:36.791728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:55:36.819239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172-rootfs.mount: Deactivated successfully. Dec 13 01:55:37.150723 containerd[2012]: time="2024-12-13T01:55:37.150191964Z" level=info msg="shim disconnected" id=102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172 namespace=k8s.io Dec 13 01:55:37.150723 containerd[2012]: time="2024-12-13T01:55:37.150324300Z" level=warning msg="cleaning up after shim disconnected" id=102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172 namespace=k8s.io Dec 13 01:55:37.150723 containerd[2012]: time="2024-12-13T01:55:37.150347640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:37.207409 containerd[2012]: time="2024-12-13T01:55:37.207283525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:37.209311 containerd[2012]: time="2024-12-13T01:55:37.209230105Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 01:55:37.211550 containerd[2012]: time="2024-12-13T01:55:37.211341493Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:37.218822 containerd[2012]: time="2024-12-13T01:55:37.218699857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:37.221193 containerd[2012]: time="2024-12-13T01:55:37.220932577Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 3.904734152s" Dec 13 01:55:37.221193 containerd[2012]: time="2024-12-13T01:55:37.221009329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 01:55:37.227451 kubelet[2503]: E1213 01:55:37.226883 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:37.228607 containerd[2012]: time="2024-12-13T01:55:37.227444785Z" level=info msg="CreateContainer within sandbox \"6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:55:37.263612 containerd[2012]: time="2024-12-13T01:55:37.263516833Z" level=info msg="CreateContainer within sandbox \"6ec80b5e022da479be435f8d71b8fd87d4b8f64fea832965d5c85edf46edfd69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42b0cba9e61f18079de7ae55a1814131e8cbd9d398ee0b7de51ff8c436667907\"" Dec 13 01:55:37.265960 containerd[2012]: time="2024-12-13T01:55:37.264501997Z" level=info msg="StartContainer for \"42b0cba9e61f18079de7ae55a1814131e8cbd9d398ee0b7de51ff8c436667907\"" Dec 13 01:55:37.317810 systemd[1]: Started cri-containerd-42b0cba9e61f18079de7ae55a1814131e8cbd9d398ee0b7de51ff8c436667907.scope - 
libcontainer container 42b0cba9e61f18079de7ae55a1814131e8cbd9d398ee0b7de51ff8c436667907. Dec 13 01:55:37.381125 containerd[2012]: time="2024-12-13T01:55:37.381044330Z" level=info msg="StartContainer for \"42b0cba9e61f18079de7ae55a1814131e8cbd9d398ee0b7de51ff8c436667907\" returns successfully" Dec 13 01:55:37.464357 containerd[2012]: time="2024-12-13T01:55:37.463613750Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:55:37.501907 kubelet[2503]: I1213 01:55:37.501751 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gqqf6" podStartSLOduration=3.857396124 podStartE2EDuration="19.501726014s" podCreationTimestamp="2024-12-13 01:55:18 +0000 UTC" firstStartedPulling="2024-12-13 01:55:21.579286427 +0000 UTC m=+3.935230005" lastFinishedPulling="2024-12-13 01:55:37.223616317 +0000 UTC m=+19.579559895" observedRunningTime="2024-12-13 01:55:37.468745634 +0000 UTC m=+19.824689248" watchObservedRunningTime="2024-12-13 01:55:37.501726014 +0000 UTC m=+19.857669604" Dec 13 01:55:37.504643 containerd[2012]: time="2024-12-13T01:55:37.504572102Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\"" Dec 13 01:55:37.505739 containerd[2012]: time="2024-12-13T01:55:37.505676678Z" level=info msg="StartContainer for \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\"" Dec 13 01:55:37.586047 systemd[1]: Started cri-containerd-dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4.scope - libcontainer container dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4. Dec 13 01:55:37.651246 containerd[2012]: time="2024-12-13T01:55:37.649553679Z" level=info msg="StartContainer for \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\" returns successfully" Dec 13 01:55:37.656545 systemd[1]: cri-containerd-dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4.scope: Deactivated successfully. Dec 13 01:55:37.725105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4-rootfs.mount: Deactivated successfully. 
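The pod_startup_latency_tracker entry just above for kube-proxy-gqqf6 is internally consistent: the E2E figure equals watchObservedRunningTime minus podCreationTimestamp, and the SLO figure is that E2E value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of that arithmetic, truncating the stamps to microseconds since strptime's %f accepts at most six digits:

```python
from datetime import datetime, timezone

def ts(stamp):
    """Parse 'YYYY-MM-DD HH:MM:SS[.nnnnnnnnn] +0000 UTC' stamps, truncated to microseconds."""
    stamp = stamp.replace(" +0000 UTC", "")
    if "." in stamp:
        head, frac = stamp.split(".")
        return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)

created  = ts("2024-12-13 01:55:18 +0000 UTC")            # podCreationTimestamp
pull_a   = ts("2024-12-13 01:55:21.579286427 +0000 UTC")  # firstStartedPulling
pull_b   = ts("2024-12-13 01:55:37.223616317 +0000 UTC")  # lastFinishedPulling
observed = ts("2024-12-13 01:55:37.501726014 +0000 UTC")  # watchObservedRunningTime

e2e = (observed - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
# E2E=19.501726s SLO=3.857396s, matching the 19.501726014s / 3.857396124s logged above
```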
Dec 13 01:55:37.823475 containerd[2012]: time="2024-12-13T01:55:37.823344616Z" level=info msg="shim disconnected" id=dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4 namespace=k8s.io Dec 13 01:55:37.825213 containerd[2012]: time="2024-12-13T01:55:37.824498668Z" level=warning msg="cleaning up after shim disconnected" id=dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4 namespace=k8s.io Dec 13 01:55:37.825213 containerd[2012]: time="2024-12-13T01:55:37.824552908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:38.211906 kubelet[2503]: E1213 01:55:38.211823 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:38.227319 kubelet[2503]: E1213 01:55:38.227230 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:38.472741 containerd[2012]: time="2024-12-13T01:55:38.472211691Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:55:38.504539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660767504.mount: Deactivated successfully. Dec 13 01:55:38.513772 containerd[2012]: time="2024-12-13T01:55:38.513695031Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\"" Dec 13 01:55:38.514928 containerd[2012]: time="2024-12-13T01:55:38.514841727Z" level=info msg="StartContainer for \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\"" Dec 13 01:55:38.585779 systemd[1]: Started cri-containerd-3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59.scope - libcontainer container 3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59. Dec 13 01:55:38.633769 systemd[1]: cri-containerd-3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59.scope: Deactivated successfully. Dec 13 01:55:38.637767 containerd[2012]: time="2024-12-13T01:55:38.637702720Z" level=info msg="StartContainer for \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\" returns successfully" Dec 13 01:55:38.677075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59-rootfs.mount: Deactivated successfully. 
Dec 13 01:55:38.689678 containerd[2012]: time="2024-12-13T01:55:38.689531428Z" level=info msg="shim disconnected" id=3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59 namespace=k8s.io Dec 13 01:55:38.690130 containerd[2012]: time="2024-12-13T01:55:38.689650696Z" level=warning msg="cleaning up after shim disconnected" id=3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59 namespace=k8s.io Dec 13 01:55:38.690130 containerd[2012]: time="2024-12-13T01:55:38.689708596Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:38.712640 containerd[2012]: time="2024-12-13T01:55:38.712518820Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:55:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:55:39.228189 kubelet[2503]: E1213 01:55:39.228028 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:39.486314 containerd[2012]: time="2024-12-13T01:55:39.485986288Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:55:39.520909 containerd[2012]: time="2024-12-13T01:55:39.520818748Z" level=info msg="CreateContainer within sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\"" Dec 13 01:55:39.522181 containerd[2012]: time="2024-12-13T01:55:39.522005968Z" level=info msg="StartContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\"" Dec 13 01:55:39.593726 systemd[1]: Started cri-containerd-932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5.scope - libcontainer container 932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5. 
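At this point containerd has created, inside the cilium-6m6sn sandbox (4ab3c6fc...), the init chain mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, each of which starts, exits and has its scope deactivated before the next is created, followed by the long-running cilium-agent container. The sketch below (a hypothetical helper, assuming one journal entry per input line as journalctl emits them) recovers that order from the CreateContainer messages for a single sandbox:

```python
import re

NAME = re.compile(r"ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\}")

def container_chain(entries, sandbox_id):
    """Container names, in creation order, requested within one sandbox."""
    chain = []
    for entry in entries:
        if "CreateContainer within sandbox" in entry and sandbox_id in entry:
            m = NAME.search(entry)
            if m and m.group("name") not in chain:
                chain.append(m.group("name"))
    return chain

# container_chain(journal, "4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7")
# -> ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#     'clean-cilium-state', 'cilium-agent']
```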
Dec 13 01:55:39.650347 containerd[2012]: time="2024-12-13T01:55:39.650264429Z" level=info msg="StartContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" returns successfully" Dec 13 01:55:39.909419 kubelet[2503]: I1213 01:55:39.909091 2503 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:55:40.228702 kubelet[2503]: E1213 01:55:40.228533 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:40.625639 kernel: Initializing XFRM netlink socket Dec 13 01:55:41.229762 kubelet[2503]: E1213 01:55:41.229690 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:41.640685 kubelet[2503]: I1213 01:55:41.640601 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6m6sn" podStartSLOduration=11.891597321 podStartE2EDuration="23.640577251s" podCreationTimestamp="2024-12-13 01:55:18 +0000 UTC" firstStartedPulling="2024-12-13 01:55:21.565502399 +0000 UTC m=+3.921445977" lastFinishedPulling="2024-12-13 01:55:33.314482341 +0000 UTC m=+15.670425907" observedRunningTime="2024-12-13 01:55:40.535483973 +0000 UTC m=+22.891427659" watchObservedRunningTime="2024-12-13 01:55:41.640577251 +0000 UTC m=+23.996520841" Dec 13 01:55:41.641100 kubelet[2503]: I1213 01:55:41.641048 2503 topology_manager.go:215] "Topology Admit Handler" podUID="487c58fe-b8c0-4546-9fed-b79b74712bd3" podNamespace="default" podName="nginx-deployment-85f456d6dd-7svgp" Dec 13 01:55:41.648028 kubelet[2503]: W1213 01:55:41.647851 2503 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.17.98" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.17.98' and this object Dec 13 01:55:41.648028 kubelet[2503]: E1213 01:55:41.647952 2503 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.17.98" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.17.98' and this object Dec 13 01:55:41.654150 systemd[1]: Created slice kubepods-besteffort-pod487c58fe_b8c0_4546_9fed_b79b74712bd3.slice - libcontainer container kubepods-besteffort-pod487c58fe_b8c0_4546_9fed_b79b74712bd3.slice. Dec 13 01:55:41.710751 kubelet[2503]: I1213 01:55:41.710678 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmhk\" (UniqueName: \"kubernetes.io/projected/487c58fe-b8c0-4546-9fed-b79b74712bd3-kube-api-access-djmhk\") pod \"nginx-deployment-85f456d6dd-7svgp\" (UID: \"487c58fe-b8c0-4546-9fed-b79b74712bd3\") " pod="default/nginx-deployment-85f456d6dd-7svgp" Dec 13 01:55:42.230462 kubelet[2503]: E1213 01:55:42.230351 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:42.475601 systemd-networkd[1920]: cilium_host: Link UP Dec 13 01:55:42.475998 systemd-networkd[1920]: cilium_net: Link UP Dec 13 01:55:42.478349 (udev-worker)[2943]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:42.479154 (udev-worker)[2944]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:42.479595 systemd-networkd[1920]: cilium_net: Gained carrier Dec 13 01:55:42.480112 systemd-networkd[1920]: cilium_host: Gained carrier Dec 13 01:55:42.642533 systemd-networkd[1920]: cilium_host: Gained IPv6LL Dec 13 01:55:42.701585 systemd-networkd[1920]: cilium_vxlan: Link UP Dec 13 01:55:42.701602 systemd-networkd[1920]: cilium_vxlan: Gained carrier Dec 13 01:55:42.822156 kubelet[2503]: E1213 01:55:42.822080 2503 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:55:42.822156 kubelet[2503]: E1213 01:55:42.822154 2503 projected.go:200] Error preparing data for projected volume kube-api-access-djmhk for pod default/nginx-deployment-85f456d6dd-7svgp: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:55:42.822475 kubelet[2503]: E1213 01:55:42.822272 2503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/487c58fe-b8c0-4546-9fed-b79b74712bd3-kube-api-access-djmhk podName:487c58fe-b8c0-4546-9fed-b79b74712bd3 nodeName:}" failed. No retries permitted until 2024-12-13 01:55:43.322238329 +0000 UTC m=+25.678181919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-djmhk" (UniqueName: "kubernetes.io/projected/487c58fe-b8c0-4546-9fed-b79b74712bd3-kube-api-access-djmhk") pod "nginx-deployment-85f456d6dd-7svgp" (UID: "487c58fe-b8c0-4546-9fed-b79b74712bd3") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:55:42.969622 systemd-networkd[1920]: cilium_net: Gained IPv6LL Dec 13 01:55:43.231543 kubelet[2503]: E1213 01:55:43.231283 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:43.257462 kernel: NET: Registered PF_ALG protocol family Dec 13 01:55:43.460252 containerd[2012]: time="2024-12-13T01:55:43.460164884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7svgp,Uid:487c58fe-b8c0-4546-9fed-b79b74712bd3,Namespace:default,Attempt:0,}" Dec 13 01:55:44.231685 kubelet[2503]: E1213 01:55:44.231610 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:44.673758 systemd-networkd[1920]: lxc_health: Link UP Dec 13 01:55:44.681898 systemd-networkd[1920]: lxc_health: Gained carrier Dec 13 01:55:44.682684 (udev-worker)[3212]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:44.707498 systemd-networkd[1920]: cilium_vxlan: Gained IPv6LL Dec 13 01:55:45.047601 systemd-networkd[1920]: lxcc1260ef506af: Link UP Dec 13 01:55:45.056022 kernel: eth0: renamed from tmp9e01e Dec 13 01:55:45.062305 systemd-networkd[1920]: lxcc1260ef506af: Gained carrier Dec 13 01:55:45.231873 kubelet[2503]: E1213 01:55:45.231810 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:46.007565 update_engine[1998]: I20241213 01:55:46.007454 1998 update_attempter.cc:509] Updating boot flags... 
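The systemd-networkd entries above show the Cilium datapath coming up: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, the lxc_health probe interface, and lxcc1260ef506af, which is most likely the host side of the nginx pod's veth pair given the tmp9e01e rename next to it. A small assumed-helper sketch that folds those events into a per-interface history:

```python
import re

LINK = re.compile(
    r"systemd-networkd\[\d+\]: (?P<ifname>[\w.]+): "
    r"(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)"
)

def link_history(lines):
    """Collect networkd link events per interface, in the order they were logged."""
    history = {}
    for line in lines:
        for m in LINK.finditer(line):
            history.setdefault(m.group("ifname"), []).append(m.group("event"))
    return history

# Up to this point each of cilium_host, cilium_net, cilium_vxlan, lxc_health and
# lxcc1260ef506af logs Link UP -> Gained carrier -> Gained IPv6LL; lxc_health later
# logs Link DOWN / Lost carrier when the cilium pod is torn down at the end of the section.
```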
Dec 13 01:55:46.154578 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3538) Dec 13 01:55:46.234851 kubelet[2503]: E1213 01:55:46.234705 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:46.563762 systemd-networkd[1920]: lxc_health: Gained IPv6LL Dec 13 01:55:46.629542 systemd-networkd[1920]: lxcc1260ef506af: Gained IPv6LL Dec 13 01:55:46.679439 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2943) Dec 13 01:55:47.084508 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2943) Dec 13 01:55:47.235528 kubelet[2503]: E1213 01:55:47.235475 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:48.238692 kubelet[2503]: E1213 01:55:48.238616 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:49.178334 ntpd[1989]: Listen normally on 8 cilium_host 192.168.1.232:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 8 cilium_host 192.168.1.232:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 9 cilium_net [fe80::c4c5:7aff:fe64:93d%3]:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 10 cilium_host [fe80::f8ce:e7ff:fe0d:8dc0%4]:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 11 cilium_vxlan [fe80::e8ea:a9ff:fe59:af61%5]:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 12 lxc_health [fe80::1cdc:e2ff:fe46:b32e%7]:123 Dec 13 01:55:49.180258 ntpd[1989]: 13 Dec 01:55:49 ntpd[1989]: Listen normally on 13 lxcc1260ef506af [fe80::30a5:d5ff:fea0:76ac%9]:123 Dec 13 01:55:49.179717 ntpd[1989]: Listen normally on 9 cilium_net [fe80::c4c5:7aff:fe64:93d%3]:123 Dec 13 01:55:49.179831 ntpd[1989]: Listen normally on 10 cilium_host [fe80::f8ce:e7ff:fe0d:8dc0%4]:123 Dec 13 01:55:49.179907 ntpd[1989]: Listen normally on 11 cilium_vxlan [fe80::e8ea:a9ff:fe59:af61%5]:123 Dec 13 01:55:49.179979 ntpd[1989]: Listen normally on 12 lxc_health [fe80::1cdc:e2ff:fe46:b32e%7]:123 Dec 13 01:55:49.180086 ntpd[1989]: Listen normally on 13 lxcc1260ef506af [fe80::30a5:d5ff:fea0:76ac%9]:123 Dec 13 01:55:49.239745 kubelet[2503]: E1213 01:55:49.239630 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:50.240486 kubelet[2503]: E1213 01:55:50.240410 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:51.241431 kubelet[2503]: E1213 01:55:51.240802 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:52.241957 kubelet[2503]: E1213 01:55:52.241832 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:53.242346 kubelet[2503]: E1213 01:55:53.242272 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:54.242676 kubelet[2503]: E1213 01:55:54.242601 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
01:55:54.740427 containerd[2012]: time="2024-12-13T01:55:54.740187464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:54.740427 containerd[2012]: time="2024-12-13T01:55:54.740319860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:54.741277 containerd[2012]: time="2024-12-13T01:55:54.740428016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:54.741277 containerd[2012]: time="2024-12-13T01:55:54.740796872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:54.786739 systemd[1]: Started cri-containerd-9e01e692f78e46ae2994ee36f2c7ecb0bc3447444d2d6dd3fb9aa7f1a2d1f2b3.scope - libcontainer container 9e01e692f78e46ae2994ee36f2c7ecb0bc3447444d2d6dd3fb9aa7f1a2d1f2b3. Dec 13 01:55:54.852360 containerd[2012]: time="2024-12-13T01:55:54.852186860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-7svgp,Uid:487c58fe-b8c0-4546-9fed-b79b74712bd3,Namespace:default,Attempt:0,} returns sandbox id \"9e01e692f78e46ae2994ee36f2c7ecb0bc3447444d2d6dd3fb9aa7f1a2d1f2b3\"" Dec 13 01:55:54.856410 containerd[2012]: time="2024-12-13T01:55:54.856271984Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:55:55.243258 kubelet[2503]: E1213 01:55:55.243057 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:56.243662 kubelet[2503]: E1213 01:55:56.243529 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:57.244755 kubelet[2503]: E1213 01:55:57.244632 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.212268 kubelet[2503]: E1213 01:55:58.212224 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.245371 kubelet[2503]: E1213 01:55:58.245319 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:58.280121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45545210.mount: Deactivated successfully. 
Dec 13 01:55:59.246119 kubelet[2503]: E1213 01:55:59.245813 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:59.726580 containerd[2012]: time="2024-12-13T01:55:59.726496321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:59.728397 containerd[2012]: time="2024-12-13T01:55:59.728230369Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 01:55:59.730273 containerd[2012]: time="2024-12-13T01:55:59.730176133Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:59.735233 containerd[2012]: time="2024-12-13T01:55:59.735122401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:59.737551 containerd[2012]: time="2024-12-13T01:55:59.737285077Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 4.880905261s" Dec 13 01:55:59.737551 containerd[2012]: time="2024-12-13T01:55:59.737344705Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:55:59.741553 containerd[2012]: time="2024-12-13T01:55:59.741474313Z" level=info msg="CreateContainer within sandbox \"9e01e692f78e46ae2994ee36f2c7ecb0bc3447444d2d6dd3fb9aa7f1a2d1f2b3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:55:59.762086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039996356.mount: Deactivated successfully. Dec 13 01:55:59.765973 containerd[2012]: time="2024-12-13T01:55:59.765918433Z" level=info msg="CreateContainer within sandbox \"9e01e692f78e46ae2994ee36f2c7ecb0bc3447444d2d6dd3fb9aa7f1a2d1f2b3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"53f31294139cc23b54a50ffb7d79de35e85ed8beab24a0286c96e19e6208a824\"" Dec 13 01:55:59.768290 containerd[2012]: time="2024-12-13T01:55:59.768236401Z" level=info msg="StartContainer for \"53f31294139cc23b54a50ffb7d79de35e85ed8beab24a0286c96e19e6208a824\"" Dec 13 01:55:59.825747 systemd[1]: Started cri-containerd-53f31294139cc23b54a50ffb7d79de35e85ed8beab24a0286c96e19e6208a824.scope - libcontainer container 53f31294139cc23b54a50ffb7d79de35e85ed8beab24a0286c96e19e6208a824. 
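The pull above brought in 67,696,817 bytes of ghcr.io/flatcar/nginx:latest in 4.880905261s, roughly 13.9 MB/s. Further down, the same image reference is resolved again for test-pod-1 in about 320 ms with only 61 bytes read, evidently because the layers are already in the local containerd store and only the manifest is re-checked. A one-line sanity check of the throughput, with the figures copied from the entry above:

```python
size_bytes, seconds = 67_696_817, 4.880905261
print(f"{size_bytes / seconds / 1e6:.2f} MB/s")  # ~13.87 MB/s
```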
Dec 13 01:55:59.885232 containerd[2012]: time="2024-12-13T01:55:59.884755789Z" level=info msg="StartContainer for \"53f31294139cc23b54a50ffb7d79de35e85ed8beab24a0286c96e19e6208a824\" returns successfully" Dec 13 01:56:00.246665 kubelet[2503]: E1213 01:56:00.246527 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:00.574265 kubelet[2503]: I1213 01:56:00.574176 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-7svgp" podStartSLOduration=14.690524144 podStartE2EDuration="19.574110553s" podCreationTimestamp="2024-12-13 01:55:41 +0000 UTC" firstStartedPulling="2024-12-13 01:55:54.855782756 +0000 UTC m=+37.211726346" lastFinishedPulling="2024-12-13 01:55:59.739369165 +0000 UTC m=+42.095312755" observedRunningTime="2024-12-13 01:56:00.573775417 +0000 UTC m=+42.929719523" watchObservedRunningTime="2024-12-13 01:56:00.574110553 +0000 UTC m=+42.930054131" Dec 13 01:56:01.246791 kubelet[2503]: E1213 01:56:01.246719 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:02.247265 kubelet[2503]: E1213 01:56:02.247183 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:03.248075 kubelet[2503]: E1213 01:56:03.247990 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:04.248707 kubelet[2503]: E1213 01:56:04.248630 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:05.249846 kubelet[2503]: E1213 01:56:05.249770 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:06.250514 kubelet[2503]: E1213 01:56:06.250449 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:07.250999 kubelet[2503]: E1213 01:56:07.250924 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:07.555748 kubelet[2503]: I1213 01:56:07.555685 2503 topology_manager.go:215] "Topology Admit Handler" podUID="ea64a569-ef54-4c13-992f-5027d464bf54" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:56:07.570568 systemd[1]: Created slice kubepods-besteffort-podea64a569_ef54_4c13_992f_5027d464bf54.slice - libcontainer container kubepods-besteffort-podea64a569_ef54_4c13_992f_5027d464bf54.slice. 
Dec 13 01:56:07.625687 kubelet[2503]: I1213 01:56:07.625604 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ea64a569-ef54-4c13-992f-5027d464bf54-data\") pod \"nfs-server-provisioner-0\" (UID: \"ea64a569-ef54-4c13-992f-5027d464bf54\") " pod="default/nfs-server-provisioner-0" Dec 13 01:56:07.625687 kubelet[2503]: I1213 01:56:07.625693 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzrj5\" (UniqueName: \"kubernetes.io/projected/ea64a569-ef54-4c13-992f-5027d464bf54-kube-api-access-zzrj5\") pod \"nfs-server-provisioner-0\" (UID: \"ea64a569-ef54-4c13-992f-5027d464bf54\") " pod="default/nfs-server-provisioner-0" Dec 13 01:56:07.877014 containerd[2012]: time="2024-12-13T01:56:07.876848457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea64a569-ef54-4c13-992f-5027d464bf54,Namespace:default,Attempt:0,}" Dec 13 01:56:07.936071 systemd-networkd[1920]: lxcc9de21bd98e1: Link UP Dec 13 01:56:07.941545 kernel: eth0: renamed from tmp928f0 Dec 13 01:56:07.948971 (udev-worker)[3959]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:07.951722 systemd-networkd[1920]: lxcc9de21bd98e1: Gained carrier Dec 13 01:56:08.252237 kubelet[2503]: E1213 01:56:08.252087 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:08.318797 containerd[2012]: time="2024-12-13T01:56:08.318652891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:08.319125 containerd[2012]: time="2024-12-13T01:56:08.319069987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:08.319295 containerd[2012]: time="2024-12-13T01:56:08.319241743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:08.319701 containerd[2012]: time="2024-12-13T01:56:08.319649311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:08.357717 systemd[1]: Started cri-containerd-928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e.scope - libcontainer container 928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e. Dec 13 01:56:08.430272 containerd[2012]: time="2024-12-13T01:56:08.430064336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea64a569-ef54-4c13-992f-5027d464bf54,Namespace:default,Attempt:0,} returns sandbox id \"928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e\"" Dec 13 01:56:08.433971 containerd[2012]: time="2024-12-13T01:56:08.433688768Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:56:08.744968 systemd[1]: run-containerd-runc-k8s.io-928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e-runc.R4VWo8.mount: Deactivated successfully. 
Dec 13 01:56:09.253830 kubelet[2503]: E1213 01:56:09.253696 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:09.665837 systemd-networkd[1920]: lxcc9de21bd98e1: Gained IPv6LL Dec 13 01:56:10.254607 kubelet[2503]: E1213 01:56:10.254461 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:11.255200 kubelet[2503]: E1213 01:56:11.255047 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:11.581679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575646256.mount: Deactivated successfully. Dec 13 01:56:12.178629 ntpd[1989]: Listen normally on 14 lxcc9de21bd98e1 [fe80::bcf6:b9ff:fe47:9c51%11]:123 Dec 13 01:56:12.179270 ntpd[1989]: 13 Dec 01:56:12 ntpd[1989]: Listen normally on 14 lxcc9de21bd98e1 [fe80::bcf6:b9ff:fe47:9c51%11]:123 Dec 13 01:56:12.256109 kubelet[2503]: E1213 01:56:12.255906 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:13.256578 kubelet[2503]: E1213 01:56:13.256502 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:14.258700 kubelet[2503]: E1213 01:56:14.258639 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:15.260441 kubelet[2503]: E1213 01:56:15.259617 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:15.317442 containerd[2012]: time="2024-12-13T01:56:15.317260766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:15.319745 containerd[2012]: time="2024-12-13T01:56:15.319607354Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Dec 13 01:56:15.321641 containerd[2012]: time="2024-12-13T01:56:15.321436754Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:15.327758 containerd[2012]: time="2024-12-13T01:56:15.327698198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:15.332054 containerd[2012]: time="2024-12-13T01:56:15.331820258Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.898065178s" Dec 13 01:56:15.332054 containerd[2012]: time="2024-12-13T01:56:15.331885838Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 01:56:15.336840 containerd[2012]: time="2024-12-13T01:56:15.336663578Z" level=info 
msg="CreateContainer within sandbox \"928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:56:15.364017 containerd[2012]: time="2024-12-13T01:56:15.363936950Z" level=info msg="CreateContainer within sandbox \"928f0afb12e83fd3bfeb1e10979bd20408effc28a9f86bd0ebd0cb07555ed94e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2\"" Dec 13 01:56:15.365052 containerd[2012]: time="2024-12-13T01:56:15.364994738Z" level=info msg="StartContainer for \"bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2\"" Dec 13 01:56:15.421422 systemd[1]: run-containerd-runc-k8s.io-bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2-runc.MKklVc.mount: Deactivated successfully. Dec 13 01:56:15.431972 systemd[1]: Started cri-containerd-bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2.scope - libcontainer container bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2. Dec 13 01:56:15.488971 containerd[2012]: time="2024-12-13T01:56:15.488585967Z" level=info msg="StartContainer for \"bc78273f0fde58348ddf3794824f30ce649f70ebb8b6efbf301c534e8bde29c2\" returns successfully" Dec 13 01:56:15.622984 kubelet[2503]: I1213 01:56:15.622509 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7220416649999999 podStartE2EDuration="8.622488111s" podCreationTimestamp="2024-12-13 01:56:07 +0000 UTC" firstStartedPulling="2024-12-13 01:56:08.433215704 +0000 UTC m=+50.789159282" lastFinishedPulling="2024-12-13 01:56:15.333662138 +0000 UTC m=+57.689605728" observedRunningTime="2024-12-13 01:56:15.622337439 +0000 UTC m=+57.978281053" watchObservedRunningTime="2024-12-13 01:56:15.622488111 +0000 UTC m=+57.978431737" Dec 13 01:56:16.259968 kubelet[2503]: E1213 01:56:16.259843 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:17.260109 kubelet[2503]: E1213 01:56:17.260043 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:18.212303 kubelet[2503]: E1213 01:56:18.212240 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:18.260674 kubelet[2503]: E1213 01:56:18.260578 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:19.261457 kubelet[2503]: E1213 01:56:19.261247 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:20.261762 kubelet[2503]: E1213 01:56:20.261694 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:21.262208 kubelet[2503]: E1213 01:56:21.262138 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:22.263216 kubelet[2503]: E1213 01:56:22.263117 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:23.264220 kubelet[2503]: E1213 01:56:23.264124 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 
13 01:56:24.264808 kubelet[2503]: E1213 01:56:24.264730 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:25.264997 kubelet[2503]: E1213 01:56:25.264918 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:25.594512 kubelet[2503]: I1213 01:56:25.594421 2503 topology_manager.go:215] "Topology Admit Handler" podUID="314aaf3c-8379-4aad-aecb-2a4d94c7da7f" podNamespace="default" podName="test-pod-1" Dec 13 01:56:25.607343 systemd[1]: Created slice kubepods-besteffort-pod314aaf3c_8379_4aad_aecb_2a4d94c7da7f.slice - libcontainer container kubepods-besteffort-pod314aaf3c_8379_4aad_aecb_2a4d94c7da7f.slice. Dec 13 01:56:25.648500 kubelet[2503]: I1213 01:56:25.648430 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmwhn\" (UniqueName: \"kubernetes.io/projected/314aaf3c-8379-4aad-aecb-2a4d94c7da7f-kube-api-access-vmwhn\") pod \"test-pod-1\" (UID: \"314aaf3c-8379-4aad-aecb-2a4d94c7da7f\") " pod="default/test-pod-1" Dec 13 01:56:25.648691 kubelet[2503]: I1213 01:56:25.648516 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5a949a02-80b0-4a6a-b8a2-89496eb1aceb\" (UniqueName: \"kubernetes.io/nfs/314aaf3c-8379-4aad-aecb-2a4d94c7da7f-pvc-5a949a02-80b0-4a6a-b8a2-89496eb1aceb\") pod \"test-pod-1\" (UID: \"314aaf3c-8379-4aad-aecb-2a4d94c7da7f\") " pod="default/test-pod-1" Dec 13 01:56:25.785488 kernel: FS-Cache: Loaded Dec 13 01:56:25.828392 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:56:25.828519 kernel: RPC: Registered udp transport module. Dec 13 01:56:25.828562 kernel: RPC: Registered tcp transport module. Dec 13 01:56:25.830371 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:56:25.830474 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:56:26.165589 kernel: NFS: Registering the id_resolver key type Dec 13 01:56:26.165691 kernel: Key type id_resolver registered Dec 13 01:56:26.165753 kernel: Key type id_legacy registered Dec 13 01:56:26.209763 nfsidmap[4144]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:26.216868 nfsidmap[4145]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Dec 13 01:56:26.266227 kubelet[2503]: E1213 01:56:26.266115 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:26.514982 containerd[2012]: time="2024-12-13T01:56:26.514788194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:314aaf3c-8379-4aad-aecb-2a4d94c7da7f,Namespace:default,Attempt:0,}" Dec 13 01:56:26.566049 (udev-worker)[4131]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:26.568929 systemd-networkd[1920]: lxc5ad05c6edf73: Link UP Dec 13 01:56:26.577439 kernel: eth0: renamed from tmp67acb Dec 13 01:56:26.585877 systemd-networkd[1920]: lxc5ad05c6edf73: Gained carrier Dec 13 01:56:26.589082 (udev-worker)[4135]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:26.900508 containerd[2012]: time="2024-12-13T01:56:26.900280743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:26.900928 containerd[2012]: time="2024-12-13T01:56:26.900699639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:26.900928 containerd[2012]: time="2024-12-13T01:56:26.900736263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:26.901548 containerd[2012]: time="2024-12-13T01:56:26.901337175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:26.941695 systemd[1]: Started cri-containerd-67acb0f0b6b05c5001852ad530e3f62f8dd7feff9fcad7abb5c0d2a17339d359.scope - libcontainer container 67acb0f0b6b05c5001852ad530e3f62f8dd7feff9fcad7abb5c0d2a17339d359. Dec 13 01:56:27.000604 containerd[2012]: time="2024-12-13T01:56:27.000483072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:314aaf3c-8379-4aad-aecb-2a4d94c7da7f,Namespace:default,Attempt:0,} returns sandbox id \"67acb0f0b6b05c5001852ad530e3f62f8dd7feff9fcad7abb5c0d2a17339d359\"" Dec 13 01:56:27.003362 containerd[2012]: time="2024-12-13T01:56:27.003308856Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:56:27.266596 kubelet[2503]: E1213 01:56:27.266423 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:27.316341 containerd[2012]: time="2024-12-13T01:56:27.316263410Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:27.317816 containerd[2012]: time="2024-12-13T01:56:27.317748878Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:56:27.323966 containerd[2012]: time="2024-12-13T01:56:27.323875082Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 320.475842ms" Dec 13 01:56:27.324395 containerd[2012]: time="2024-12-13T01:56:27.324173582Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:56:27.328348 containerd[2012]: time="2024-12-13T01:56:27.328064450Z" level=info msg="CreateContainer within sandbox \"67acb0f0b6b05c5001852ad530e3f62f8dd7feff9fcad7abb5c0d2a17339d359\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:56:27.355216 containerd[2012]: time="2024-12-13T01:56:27.355135526Z" level=info msg="CreateContainer within sandbox \"67acb0f0b6b05c5001852ad530e3f62f8dd7feff9fcad7abb5c0d2a17339d359\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a2040b6d36a37e95fafbceaecafb6c3b33c5cf2fc0cb1dd410e81ef8529dd27b\"" Dec 13 01:56:27.356081 containerd[2012]: time="2024-12-13T01:56:27.355978226Z" level=info msg="StartContainer for \"a2040b6d36a37e95fafbceaecafb6c3b33c5cf2fc0cb1dd410e81ef8529dd27b\"" Dec 13 01:56:27.405791 systemd[1]: Started cri-containerd-a2040b6d36a37e95fafbceaecafb6c3b33c5cf2fc0cb1dd410e81ef8529dd27b.scope - libcontainer container 
a2040b6d36a37e95fafbceaecafb6c3b33c5cf2fc0cb1dd410e81ef8529dd27b. Dec 13 01:56:27.460073 containerd[2012]: time="2024-12-13T01:56:27.459998282Z" level=info msg="StartContainer for \"a2040b6d36a37e95fafbceaecafb6c3b33c5cf2fc0cb1dd410e81ef8529dd27b\" returns successfully" Dec 13 01:56:27.649977 kubelet[2503]: I1213 01:56:27.649846 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.327083713 podStartE2EDuration="19.649817595s" podCreationTimestamp="2024-12-13 01:56:08 +0000 UTC" firstStartedPulling="2024-12-13 01:56:27.002768604 +0000 UTC m=+69.358712170" lastFinishedPulling="2024-12-13 01:56:27.325502462 +0000 UTC m=+69.681446052" observedRunningTime="2024-12-13 01:56:27.648875307 +0000 UTC m=+70.004818897" watchObservedRunningTime="2024-12-13 01:56:27.649817595 +0000 UTC m=+70.005761197" Dec 13 01:56:28.097712 systemd-networkd[1920]: lxc5ad05c6edf73: Gained IPv6LL Dec 13 01:56:28.267064 kubelet[2503]: E1213 01:56:28.266986 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:29.267195 kubelet[2503]: E1213 01:56:29.267124 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:30.178490 ntpd[1989]: Listen normally on 15 lxc5ad05c6edf73 [fe80::f46d:22ff:fe35:74f1%13]:123 Dec 13 01:56:30.179156 ntpd[1989]: 13 Dec 01:56:30 ntpd[1989]: Listen normally on 15 lxc5ad05c6edf73 [fe80::f46d:22ff:fe35:74f1%13]:123 Dec 13 01:56:30.268001 kubelet[2503]: E1213 01:56:30.267924 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:31.268098 kubelet[2503]: E1213 01:56:31.268032 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:32.268823 kubelet[2503]: E1213 01:56:32.268747 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:33.269592 kubelet[2503]: E1213 01:56:33.269515 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:34.270135 kubelet[2503]: E1213 01:56:34.270073 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:34.427217 containerd[2012]: time="2024-12-13T01:56:34.427123065Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:56:34.440647 containerd[2012]: time="2024-12-13T01:56:34.440461689Z" level=info msg="StopContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" with timeout 2 (s)" Dec 13 01:56:34.440980 containerd[2012]: time="2024-12-13T01:56:34.440936697Z" level=info msg="Stop container \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" with signal terminated" Dec 13 01:56:34.455108 systemd-networkd[1920]: lxc_health: Link DOWN Dec 13 01:56:34.455131 systemd-networkd[1920]: lxc_health: Lost carrier Dec 13 01:56:34.470103 systemd[1]: cri-containerd-932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5.scope: Deactivated successfully. 
Dec 13 01:56:34.471064 systemd[1]: cri-containerd-932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5.scope: Consumed 15.420s CPU time. Dec 13 01:56:34.508047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5-rootfs.mount: Deactivated successfully. Dec 13 01:56:35.191213 containerd[2012]: time="2024-12-13T01:56:35.190935669Z" level=info msg="shim disconnected" id=932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5 namespace=k8s.io Dec 13 01:56:35.191213 containerd[2012]: time="2024-12-13T01:56:35.191013849Z" level=warning msg="cleaning up after shim disconnected" id=932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5 namespace=k8s.io Dec 13 01:56:35.191213 containerd[2012]: time="2024-12-13T01:56:35.191034177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:35.217364 containerd[2012]: time="2024-12-13T01:56:35.216806049Z" level=info msg="StopContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" returns successfully" Dec 13 01:56:35.218235 containerd[2012]: time="2024-12-13T01:56:35.218194269Z" level=info msg="StopPodSandbox for \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\"" Dec 13 01:56:35.218706 containerd[2012]: time="2024-12-13T01:56:35.218368857Z" level=info msg="Container to stop \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:35.218706 containerd[2012]: time="2024-12-13T01:56:35.218497065Z" level=info msg="Container to stop \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:35.218706 containerd[2012]: time="2024-12-13T01:56:35.218523489Z" level=info msg="Container to stop \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:35.218706 containerd[2012]: time="2024-12-13T01:56:35.218546217Z" level=info msg="Container to stop \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:35.218706 containerd[2012]: time="2024-12-13T01:56:35.218568609Z" level=info msg="Container to stop \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:35.222597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7-shm.mount: Deactivated successfully. Dec 13 01:56:35.230984 systemd[1]: cri-containerd-4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7.scope: Deactivated successfully. Dec 13 01:56:35.264927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:35.270520 kubelet[2503]: E1213 01:56:35.270424 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:35.275141 containerd[2012]: time="2024-12-13T01:56:35.274683177Z" level=info msg="shim disconnected" id=4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7 namespace=k8s.io Dec 13 01:56:35.275141 containerd[2012]: time="2024-12-13T01:56:35.274764453Z" level=warning msg="cleaning up after shim disconnected" id=4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7 namespace=k8s.io Dec 13 01:56:35.275141 containerd[2012]: time="2024-12-13T01:56:35.274788993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:35.301184 containerd[2012]: time="2024-12-13T01:56:35.301051581Z" level=info msg="TearDown network for sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" successfully" Dec 13 01:56:35.301184 containerd[2012]: time="2024-12-13T01:56:35.301124145Z" level=info msg="StopPodSandbox for \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" returns successfully" Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.412776 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-hubble-tls\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.412837 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cni-path\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.412915 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-hostproc\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.412977 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-lib-modules\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.413018 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khw6j\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-kube-api-access-khw6j\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413447 kubelet[2503]: I1213 01:56:35.413051 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-run\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413091 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-config-path\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: 
\"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413125 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-kernel\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413158 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-bpf-maps\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413188 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-etc-cni-netd\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413225 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f911ea9a-0877-4aa6-9610-94ec110afb5a-clustermesh-secrets\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.413980 kubelet[2503]: I1213 01:56:35.413256 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-cgroup\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.414299 kubelet[2503]: I1213 01:56:35.413289 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-net\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.414299 kubelet[2503]: I1213 01:56:35.413322 2503 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-xtables-lock\") pod \"f911ea9a-0877-4aa6-9610-94ec110afb5a\" (UID: \"f911ea9a-0877-4aa6-9610-94ec110afb5a\") " Dec 13 01:56:35.414299 kubelet[2503]: I1213 01:56:35.413726 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415053 kubelet[2503]: I1213 01:56:35.414544 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cni-path" (OuterVolumeSpecName: "cni-path") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415053 kubelet[2503]: I1213 01:56:35.414607 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415053 kubelet[2503]: I1213 01:56:35.414611 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415053 kubelet[2503]: I1213 01:56:35.414643 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415053 kubelet[2503]: I1213 01:56:35.414679 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-hostproc" (OuterVolumeSpecName: "hostproc") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415405 kubelet[2503]: I1213 01:56:35.414720 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.415405 kubelet[2503]: I1213 01:56:35.414973 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.419247 kubelet[2503]: I1213 01:56:35.417682 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.419415 kubelet[2503]: I1213 01:56:35.419280 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:35.426692 kubelet[2503]: I1213 01:56:35.425944 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:56:35.428246 systemd[1]: var-lib-kubelet-pods-f911ea9a\x2d0877\x2d4aa6\x2d9610\x2d94ec110afb5a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:56:35.428522 systemd[1]: var-lib-kubelet-pods-f911ea9a\x2d0877\x2d4aa6\x2d9610\x2d94ec110afb5a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:56:35.428709 kubelet[2503]: I1213 01:56:35.427236 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f911ea9a-0877-4aa6-9610-94ec110afb5a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:56:35.429067 kubelet[2503]: I1213 01:56:35.429030 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:35.429686 kubelet[2503]: I1213 01:56:35.429602 2503 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-kube-api-access-khw6j" (OuterVolumeSpecName: "kube-api-access-khw6j") pod "f911ea9a-0877-4aa6-9610-94ec110afb5a" (UID: "f911ea9a-0877-4aa6-9610-94ec110afb5a"). InnerVolumeSpecName "kube-api-access-khw6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:35.435445 systemd[1]: var-lib-kubelet-pods-f911ea9a\x2d0877\x2d4aa6\x2d9610\x2d94ec110afb5a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkhw6j.mount: Deactivated successfully. 
Dec 13 01:56:35.514162 kubelet[2503]: I1213 01:56:35.514002 2503 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-kernel\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.514162 kubelet[2503]: I1213 01:56:35.514057 2503 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-bpf-maps\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.514162 kubelet[2503]: I1213 01:56:35.514080 2503 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-etc-cni-netd\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.514162 kubelet[2503]: I1213 01:56:35.514103 2503 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f911ea9a-0877-4aa6-9610-94ec110afb5a-clustermesh-secrets\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.514162 kubelet[2503]: I1213 01:56:35.514125 2503 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-cgroup\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514815 2503 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-host-proc-sys-net\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514853 2503 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-xtables-lock\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514907 2503 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-hubble-tls\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514927 2503 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cni-path\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514946 2503 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-hostproc\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514964 2503 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-lib-modules\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.514984 2503 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-khw6j\" (UniqueName: \"kubernetes.io/projected/f911ea9a-0877-4aa6-9610-94ec110afb5a-kube-api-access-khw6j\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.515128 kubelet[2503]: I1213 01:56:35.515004 2503 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-run\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 
01:56:35.515603 kubelet[2503]: I1213 01:56:35.515023 2503 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f911ea9a-0877-4aa6-9610-94ec110afb5a-cilium-config-path\") on node \"172.31.17.98\" DevicePath \"\"" Dec 13 01:56:35.662250 kubelet[2503]: I1213 01:56:35.661362 2503 scope.go:117] "RemoveContainer" containerID="932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5" Dec 13 01:56:35.670041 containerd[2012]: time="2024-12-13T01:56:35.669498575Z" level=info msg="RemoveContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\"" Dec 13 01:56:35.672987 systemd[1]: Removed slice kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice - libcontainer container kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice. Dec 13 01:56:35.673221 systemd[1]: kubepods-burstable-podf911ea9a_0877_4aa6_9610_94ec110afb5a.slice: Consumed 15.586s CPU time. Dec 13 01:56:35.679042 containerd[2012]: time="2024-12-13T01:56:35.678961751Z" level=info msg="RemoveContainer for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" returns successfully" Dec 13 01:56:35.679662 kubelet[2503]: I1213 01:56:35.679511 2503 scope.go:117] "RemoveContainer" containerID="3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59" Dec 13 01:56:35.682882 containerd[2012]: time="2024-12-13T01:56:35.681881603Z" level=info msg="RemoveContainer for \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\"" Dec 13 01:56:35.687609 containerd[2012]: time="2024-12-13T01:56:35.687559199Z" level=info msg="RemoveContainer for \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\" returns successfully" Dec 13 01:56:35.688055 kubelet[2503]: I1213 01:56:35.688010 2503 scope.go:117] "RemoveContainer" containerID="dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4" Dec 13 01:56:35.690104 containerd[2012]: time="2024-12-13T01:56:35.690041255Z" level=info msg="RemoveContainer for \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\"" Dec 13 01:56:35.696509 containerd[2012]: time="2024-12-13T01:56:35.696447971Z" level=info msg="RemoveContainer for \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\" returns successfully" Dec 13 01:56:35.696789 kubelet[2503]: I1213 01:56:35.696745 2503 scope.go:117] "RemoveContainer" containerID="102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172" Dec 13 01:56:35.698729 containerd[2012]: time="2024-12-13T01:56:35.698669999Z" level=info msg="RemoveContainer for \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\"" Dec 13 01:56:35.703174 containerd[2012]: time="2024-12-13T01:56:35.703116839Z" level=info msg="RemoveContainer for \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\" returns successfully" Dec 13 01:56:35.703603 kubelet[2503]: I1213 01:56:35.703567 2503 scope.go:117] "RemoveContainer" containerID="6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2" Dec 13 01:56:35.705960 containerd[2012]: time="2024-12-13T01:56:35.705910019Z" level=info msg="RemoveContainer for \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\"" Dec 13 01:56:35.710487 containerd[2012]: time="2024-12-13T01:56:35.710433887Z" level=info msg="RemoveContainer for \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\" returns successfully" Dec 13 01:56:35.710823 kubelet[2503]: I1213 01:56:35.710795 2503 scope.go:117] "RemoveContainer" 
containerID="932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5" Dec 13 01:56:35.711497 containerd[2012]: time="2024-12-13T01:56:35.711436187Z" level=error msg="ContainerStatus for \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\": not found" Dec 13 01:56:35.711717 kubelet[2503]: E1213 01:56:35.711671 2503 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\": not found" containerID="932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5" Dec 13 01:56:35.711854 kubelet[2503]: I1213 01:56:35.711728 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5"} err="failed to get container status \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"932d1e1f720e10fd886048e0f085b72156b746863e18d55625549cab1b9165a5\": not found" Dec 13 01:56:35.711941 kubelet[2503]: I1213 01:56:35.711854 2503 scope.go:117] "RemoveContainer" containerID="3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59" Dec 13 01:56:35.712345 containerd[2012]: time="2024-12-13T01:56:35.712244063Z" level=error msg="ContainerStatus for \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\": not found" Dec 13 01:56:35.712564 kubelet[2503]: E1213 01:56:35.712505 2503 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\": not found" containerID="3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59" Dec 13 01:56:35.712564 kubelet[2503]: I1213 01:56:35.712545 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59"} err="failed to get container status \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\": rpc error: code = NotFound desc = an error occurred when try to find container \"3aaa6b412364ce9ffe1d449df955fac8b3e9351ba1a65b3649e34b7513b1be59\": not found" Dec 13 01:56:35.712709 kubelet[2503]: I1213 01:56:35.712577 2503 scope.go:117] "RemoveContainer" containerID="dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4" Dec 13 01:56:35.713330 containerd[2012]: time="2024-12-13T01:56:35.713018267Z" level=error msg="ContainerStatus for \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\": not found" Dec 13 01:56:35.713530 kubelet[2503]: E1213 01:56:35.713285 2503 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\": 
not found" containerID="dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4" Dec 13 01:56:35.713530 kubelet[2503]: I1213 01:56:35.713322 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4"} err="failed to get container status \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfa29fa7a90ae1682c10517b5dceddcbc17ef7812e3f35e3aacc41b67fed61f4\": not found" Dec 13 01:56:35.713530 kubelet[2503]: I1213 01:56:35.713352 2503 scope.go:117] "RemoveContainer" containerID="102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172" Dec 13 01:56:35.713841 containerd[2012]: time="2024-12-13T01:56:35.713735483Z" level=error msg="ContainerStatus for \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\": not found" Dec 13 01:56:35.714429 kubelet[2503]: E1213 01:56:35.714070 2503 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\": not found" containerID="102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172" Dec 13 01:56:35.714429 kubelet[2503]: I1213 01:56:35.714194 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172"} err="failed to get container status \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\": rpc error: code = NotFound desc = an error occurred when try to find container \"102b89de9721390adaa95110dde428c5416b6ec1b064fe74bfd832b6e9382172\": not found" Dec 13 01:56:35.714429 kubelet[2503]: I1213 01:56:35.714247 2503 scope.go:117] "RemoveContainer" containerID="6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2" Dec 13 01:56:35.714984 kubelet[2503]: E1213 01:56:35.714884 2503 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\": not found" containerID="6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2" Dec 13 01:56:35.714984 kubelet[2503]: I1213 01:56:35.714927 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2"} err="failed to get container status \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\": not found" Dec 13 01:56:35.715095 containerd[2012]: time="2024-12-13T01:56:35.714581999Z" level=error msg="ContainerStatus for \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6816f8ae038015ae04f46bfb53664f3f5a49eddfcb52d22b34868b5d3f5411d2\": not found" Dec 13 01:56:36.270931 kubelet[2503]: E1213 01:56:36.270860 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:36.368303 kubelet[2503]: I1213 01:56:36.368238 2503 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" path="/var/lib/kubelet/pods/f911ea9a-0877-4aa6-9610-94ec110afb5a/volumes" Dec 13 01:56:37.178526 ntpd[1989]: Deleting interface #12 lxc_health, fe80::1cdc:e2ff:fe46:b32e%7#123, interface stats: received=0, sent=0, dropped=0, active_time=48 secs Dec 13 01:56:37.179127 ntpd[1989]: 13 Dec 01:56:37 ntpd[1989]: Deleting interface #12 lxc_health, fe80::1cdc:e2ff:fe46:b32e%7#123, interface stats: received=0, sent=0, dropped=0, active_time=48 secs Dec 13 01:56:37.271746 kubelet[2503]: E1213 01:56:37.271681 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:38.212135 kubelet[2503]: E1213 01:56:38.212050 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:38.272856 kubelet[2503]: E1213 01:56:38.272796 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:38.412363 kubelet[2503]: E1213 01:56:38.412299 2503 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:56:38.454247 kubelet[2503]: I1213 01:56:38.454185 2503 topology_manager.go:215] "Topology Admit Handler" podUID="8d4f1f32-3653-46af-b752-d4624baa86b6" podNamespace="kube-system" podName="cilium-operator-599987898-lz2m9" Dec 13 01:56:38.454415 kubelet[2503]: E1213 01:56:38.454258 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="apply-sysctl-overwrites" Dec 13 01:56:38.454415 kubelet[2503]: E1213 01:56:38.454279 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="clean-cilium-state" Dec 13 01:56:38.454415 kubelet[2503]: E1213 01:56:38.454295 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="mount-bpf-fs" Dec 13 01:56:38.454415 kubelet[2503]: E1213 01:56:38.454309 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="cilium-agent" Dec 13 01:56:38.454415 kubelet[2503]: E1213 01:56:38.454326 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="mount-cgroup" Dec 13 01:56:38.454415 kubelet[2503]: I1213 01:56:38.454360 2503 memory_manager.go:354] "RemoveStaleState removing state" podUID="f911ea9a-0877-4aa6-9610-94ec110afb5a" containerName="cilium-agent" Dec 13 01:56:38.466247 systemd[1]: Created slice kubepods-besteffort-pod8d4f1f32_3653_46af_b752_d4624baa86b6.slice - libcontainer container kubepods-besteffort-pod8d4f1f32_3653_46af_b752_d4624baa86b6.slice. 
Dec 13 01:56:38.469669 kubelet[2503]: W1213 01:56:38.469529 2503 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.98' and this object Dec 13 01:56:38.469669 kubelet[2503]: E1213 01:56:38.469631 2503 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.17.98" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.17.98' and this object Dec 13 01:56:38.519723 kubelet[2503]: I1213 01:56:38.519400 2503 topology_manager.go:215] "Topology Admit Handler" podUID="81f38bed-b0ca-4930-95c6-18b02f23361a" podNamespace="kube-system" podName="cilium-q5cf7" Dec 13 01:56:38.530831 kubelet[2503]: I1213 01:56:38.530743 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whzgj\" (UniqueName: \"kubernetes.io/projected/8d4f1f32-3653-46af-b752-d4624baa86b6-kube-api-access-whzgj\") pod \"cilium-operator-599987898-lz2m9\" (UID: \"8d4f1f32-3653-46af-b752-d4624baa86b6\") " pod="kube-system/cilium-operator-599987898-lz2m9" Dec 13 01:56:38.531030 kubelet[2503]: I1213 01:56:38.530838 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d4f1f32-3653-46af-b752-d4624baa86b6-cilium-config-path\") pod \"cilium-operator-599987898-lz2m9\" (UID: \"8d4f1f32-3653-46af-b752-d4624baa86b6\") " pod="kube-system/cilium-operator-599987898-lz2m9" Dec 13 01:56:38.532532 systemd[1]: Created slice kubepods-burstable-pod81f38bed_b0ca_4930_95c6_18b02f23361a.slice - libcontainer container kubepods-burstable-pod81f38bed_b0ca_4930_95c6_18b02f23361a.slice. 
Dec 13 01:56:38.631804 kubelet[2503]: I1213 01:56:38.631694 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/81f38bed-b0ca-4930-95c6-18b02f23361a-cilium-ipsec-secrets\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.632796 kubelet[2503]: I1213 01:56:38.632368 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-host-proc-sys-net\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.632796 kubelet[2503]: I1213 01:56:38.632529 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-bpf-maps\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.632796 kubelet[2503]: I1213 01:56:38.632597 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-cilium-cgroup\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.632796 kubelet[2503]: I1213 01:56:38.632647 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/81f38bed-b0ca-4930-95c6-18b02f23361a-clustermesh-secrets\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.632796 kubelet[2503]: I1213 01:56:38.632709 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-host-proc-sys-kernel\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.632799 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-hostproc\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.632860 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-etc-cni-netd\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.632904 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-lib-modules\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.632967 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-xtables-lock\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.633028 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/81f38bed-b0ca-4930-95c6-18b02f23361a-hubble-tls\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633227 kubelet[2503]: I1213 01:56:38.633132 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-cilium-run\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633646 kubelet[2503]: I1213 01:56:38.633173 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/81f38bed-b0ca-4930-95c6-18b02f23361a-cni-path\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633646 kubelet[2503]: I1213 01:56:38.633233 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/81f38bed-b0ca-4930-95c6-18b02f23361a-cilium-config-path\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:38.633646 kubelet[2503]: I1213 01:56:38.633294 2503 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmq2p\" (UniqueName: \"kubernetes.io/projected/81f38bed-b0ca-4930-95c6-18b02f23361a-kube-api-access-qmq2p\") pod \"cilium-q5cf7\" (UID: \"81f38bed-b0ca-4930-95c6-18b02f23361a\") " pod="kube-system/cilium-q5cf7" Dec 13 01:56:39.273314 kubelet[2503]: E1213 01:56:39.273238 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:39.674295 containerd[2012]: time="2024-12-13T01:56:39.674179803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lz2m9,Uid:8d4f1f32-3653-46af-b752-d4624baa86b6,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:39.720175 containerd[2012]: time="2024-12-13T01:56:39.719610675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:39.720175 containerd[2012]: time="2024-12-13T01:56:39.719758623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:39.720175 containerd[2012]: time="2024-12-13T01:56:39.719789883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.720175 containerd[2012]: time="2024-12-13T01:56:39.719942643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.748657 containerd[2012]: time="2024-12-13T01:56:39.748130031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5cf7,Uid:81f38bed-b0ca-4930-95c6-18b02f23361a,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:39.760955 systemd[1]: Started cri-containerd-57b0dfd26467278333b1a20d52c9ddc3a0f6401881b35c47d220fd5d247a9ec4.scope - libcontainer container 57b0dfd26467278333b1a20d52c9ddc3a0f6401881b35c47d220fd5d247a9ec4. Dec 13 01:56:39.801690 containerd[2012]: time="2024-12-13T01:56:39.800988088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:39.801690 containerd[2012]: time="2024-12-13T01:56:39.801088312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:39.801690 containerd[2012]: time="2024-12-13T01:56:39.801124552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.801690 containerd[2012]: time="2024-12-13T01:56:39.801297556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:39.838704 kubelet[2503]: I1213 01:56:39.837191 2503 setters.go:580] "Node became not ready" node="172.31.17.98" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:56:39Z","lastTransitionTime":"2024-12-13T01:56:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:56:39.852851 systemd[1]: Started cri-containerd-493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01.scope - libcontainer container 493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01. 
Dec 13 01:56:39.875407 containerd[2012]: time="2024-12-13T01:56:39.873786340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lz2m9,Uid:8d4f1f32-3653-46af-b752-d4624baa86b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b0dfd26467278333b1a20d52c9ddc3a0f6401881b35c47d220fd5d247a9ec4\"" Dec 13 01:56:39.886051 containerd[2012]: time="2024-12-13T01:56:39.885888088Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:56:39.913021 containerd[2012]: time="2024-12-13T01:56:39.912884596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q5cf7,Uid:81f38bed-b0ca-4930-95c6-18b02f23361a,Namespace:kube-system,Attempt:0,} returns sandbox id \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\"" Dec 13 01:56:39.918496 containerd[2012]: time="2024-12-13T01:56:39.918426232Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:56:39.941698 containerd[2012]: time="2024-12-13T01:56:39.940726732Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2\"" Dec 13 01:56:39.942724 containerd[2012]: time="2024-12-13T01:56:39.942040156Z" level=info msg="StartContainer for \"685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2\"" Dec 13 01:56:39.985729 systemd[1]: Started cri-containerd-685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2.scope - libcontainer container 685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2. Dec 13 01:56:40.043549 containerd[2012]: time="2024-12-13T01:56:40.043311145Z" level=info msg="StartContainer for \"685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2\" returns successfully" Dec 13 01:56:40.056773 systemd[1]: cri-containerd-685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2.scope: Deactivated successfully. 
Dec 13 01:56:40.116032 containerd[2012]: time="2024-12-13T01:56:40.115763161Z" level=info msg="shim disconnected" id=685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2 namespace=k8s.io Dec 13 01:56:40.116032 containerd[2012]: time="2024-12-13T01:56:40.115905445Z" level=warning msg="cleaning up after shim disconnected" id=685324a584d405f430d394dadd338569c60dc64fa3c164eda079645ec74c11a2 namespace=k8s.io Dec 13 01:56:40.116032 containerd[2012]: time="2024-12-13T01:56:40.115930945Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:40.274246 kubelet[2503]: E1213 01:56:40.274072 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:40.681090 containerd[2012]: time="2024-12-13T01:56:40.681016384Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:56:40.704365 containerd[2012]: time="2024-12-13T01:56:40.704285656Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0\"" Dec 13 01:56:40.705157 containerd[2012]: time="2024-12-13T01:56:40.705090676Z" level=info msg="StartContainer for \"7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0\"" Dec 13 01:56:40.760035 systemd[1]: Started cri-containerd-7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0.scope - libcontainer container 7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0. Dec 13 01:56:40.806362 containerd[2012]: time="2024-12-13T01:56:40.806298413Z" level=info msg="StartContainer for \"7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0\" returns successfully" Dec 13 01:56:40.818023 systemd[1]: cri-containerd-7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0.scope: Deactivated successfully. Dec 13 01:56:40.867836 containerd[2012]: time="2024-12-13T01:56:40.867653237Z" level=info msg="shim disconnected" id=7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0 namespace=k8s.io Dec 13 01:56:40.867836 containerd[2012]: time="2024-12-13T01:56:40.867764825Z" level=warning msg="cleaning up after shim disconnected" id=7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0 namespace=k8s.io Dec 13 01:56:40.867836 containerd[2012]: time="2024-12-13T01:56:40.867791069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:40.894802 containerd[2012]: time="2024-12-13T01:56:40.894720449Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:56:41.275478 kubelet[2503]: E1213 01:56:41.275415 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:41.691078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7219553b99bfcff38cb7ed20c1f1181d892d74be1de8f17140af4a94271849c0-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:41.698508 containerd[2012]: time="2024-12-13T01:56:41.698453825Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:56:41.732783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485923322.mount: Deactivated successfully.
Dec 13 01:56:41.737564 containerd[2012]: time="2024-12-13T01:56:41.737500997Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d\""
Dec 13 01:56:41.739477 containerd[2012]: time="2024-12-13T01:56:41.739340537Z" level=info msg="StartContainer for \"9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d\""
Dec 13 01:56:41.822937 systemd[1]: Started cri-containerd-9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d.scope - libcontainer container 9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d.
Dec 13 01:56:41.906535 containerd[2012]: time="2024-12-13T01:56:41.905901906Z" level=info msg="StartContainer for \"9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d\" returns successfully"
Dec 13 01:56:41.911657 systemd[1]: cri-containerd-9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d.scope: Deactivated successfully.
Dec 13 01:56:42.201194 containerd[2012]: time="2024-12-13T01:56:42.200893083Z" level=info msg="shim disconnected" id=9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d namespace=k8s.io
Dec 13 01:56:42.201194 containerd[2012]: time="2024-12-13T01:56:42.200984139Z" level=warning msg="cleaning up after shim disconnected" id=9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d namespace=k8s.io
Dec 13 01:56:42.201194 containerd[2012]: time="2024-12-13T01:56:42.201006831Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:42.226966 containerd[2012]: time="2024-12-13T01:56:42.226890100Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:56:42.229521 containerd[2012]: time="2024-12-13T01:56:42.229156060Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:42.231804 containerd[2012]: time="2024-12-13T01:56:42.231721060Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137738"
Dec 13 01:56:42.233313 containerd[2012]: time="2024-12-13T01:56:42.233221912Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:42.236771 containerd[2012]: time="2024-12-13T01:56:42.236700640Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.350747356s"
Dec 13 01:56:42.236911 containerd[2012]: time="2024-12-13T01:56:42.236767996Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 13 01:56:42.241183 containerd[2012]: time="2024-12-13T01:56:42.241126588Z" level=info msg="CreateContainer within sandbox \"57b0dfd26467278333b1a20d52c9ddc3a0f6401881b35c47d220fd5d247a9ec4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 01:56:42.264665 containerd[2012]: time="2024-12-13T01:56:42.264582688Z" level=info msg="CreateContainer within sandbox \"57b0dfd26467278333b1a20d52c9ddc3a0f6401881b35c47d220fd5d247a9ec4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3d363c532b4fe646d957a2c3cb2e02acf92ad723a60a82209c239c23d3d03277\""
Dec 13 01:56:42.265699 containerd[2012]: time="2024-12-13T01:56:42.265650640Z" level=info msg="StartContainer for \"3d363c532b4fe646d957a2c3cb2e02acf92ad723a60a82209c239c23d3d03277\""
Dec 13 01:56:42.276549 kubelet[2503]: E1213 01:56:42.276228 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:42.307750 systemd[1]: Started cri-containerd-3d363c532b4fe646d957a2c3cb2e02acf92ad723a60a82209c239c23d3d03277.scope - libcontainer container 3d363c532b4fe646d957a2c3cb2e02acf92ad723a60a82209c239c23d3d03277.
Dec 13 01:56:42.350046 containerd[2012]: time="2024-12-13T01:56:42.349939984Z" level=info msg="StartContainer for \"3d363c532b4fe646d957a2c3cb2e02acf92ad723a60a82209c239c23d3d03277\" returns successfully"
Dec 13 01:56:42.695464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a82c4bbd4477d98b0358df94062837ca2e3b8b6b3cf58566d06bb08ccfa676d-rootfs.mount: Deactivated successfully.
Dec 13 01:56:42.705992 containerd[2012]: time="2024-12-13T01:56:42.705670758Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:56:42.736772 containerd[2012]: time="2024-12-13T01:56:42.736703322Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350\""
Dec 13 01:56:42.738479 containerd[2012]: time="2024-12-13T01:56:42.738095274Z" level=info msg="StartContainer for \"45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350\""
Dec 13 01:56:42.795709 systemd[1]: Started cri-containerd-45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350.scope - libcontainer container 45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350.
Dec 13 01:56:42.843091 systemd[1]: cri-containerd-45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350.scope: Deactivated successfully.
Dec 13 01:56:42.846109 containerd[2012]: time="2024-12-13T01:56:42.845987059Z" level=info msg="StartContainer for \"45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350\" returns successfully"
Dec 13 01:56:42.890769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350-rootfs.mount: Deactivated successfully.
Dec 13 01:56:42.903656 containerd[2012]: time="2024-12-13T01:56:42.903515683Z" level=info msg="shim disconnected" id=45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350 namespace=k8s.io
Dec 13 01:56:42.903656 containerd[2012]: time="2024-12-13T01:56:42.903601867Z" level=warning msg="cleaning up after shim disconnected" id=45b502ec9bba9b6a94f6ebdd83bf7d2a7a554debb507bcdcac6ddf639261d350 namespace=k8s.io
Dec 13 01:56:42.903656 containerd[2012]: time="2024-12-13T01:56:42.903625039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:43.276546 kubelet[2503]: E1213 01:56:43.276487 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:43.414306 kubelet[2503]: E1213 01:56:43.414252 2503 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:56:43.717926 containerd[2012]: time="2024-12-13T01:56:43.717864307Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:56:43.750253 containerd[2012]: time="2024-12-13T01:56:43.750190411Z" level=info msg="CreateContainer within sandbox \"493b96b61a20ab29edefe20ff7f99d1b00b75e21024df59d0c80ab5e7e2ebd01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0689865f10a6be3c31bcf02094dd3ff69698b81f39caf396556a83da4aa76f1\""
Dec 13 01:56:43.750989 containerd[2012]: time="2024-12-13T01:56:43.750925123Z" level=info msg="StartContainer for \"e0689865f10a6be3c31bcf02094dd3ff69698b81f39caf396556a83da4aa76f1\""
Dec 13 01:56:43.757871 kubelet[2503]: I1213 01:56:43.757790 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-lz2m9" podStartSLOduration=3.404185359 podStartE2EDuration="5.757769107s" podCreationTimestamp="2024-12-13 01:56:38 +0000 UTC" firstStartedPulling="2024-12-13 01:56:39.88464478 +0000 UTC m=+82.240588358" lastFinishedPulling="2024-12-13 01:56:42.23822854 +0000 UTC m=+84.594172106" observedRunningTime="2024-12-13 01:56:42.852672979 +0000 UTC m=+85.208616569" watchObservedRunningTime="2024-12-13 01:56:43.757769107 +0000 UTC m=+86.113712685"
Dec 13 01:56:43.810701 systemd[1]: Started cri-containerd-e0689865f10a6be3c31bcf02094dd3ff69698b81f39caf396556a83da4aa76f1.scope - libcontainer container e0689865f10a6be3c31bcf02094dd3ff69698b81f39caf396556a83da4aa76f1.
Dec 13 01:56:43.863051 containerd[2012]: time="2024-12-13T01:56:43.862975472Z" level=info msg="StartContainer for \"e0689865f10a6be3c31bcf02094dd3ff69698b81f39caf396556a83da4aa76f1\" returns successfully"
Dec 13 01:56:44.276861 kubelet[2503]: E1213 01:56:44.276781 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:44.632511 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:56:45.277825 kubelet[2503]: E1213 01:56:45.277749 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:46.278493 kubelet[2503]: E1213 01:56:46.278426 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:47.279698 kubelet[2503]: E1213 01:56:47.279555 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:48.280503 kubelet[2503]: E1213 01:56:48.280366 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:48.755998 systemd-networkd[1920]: lxc_health: Link UP
Dec 13 01:56:48.776620 (udev-worker)[5272]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:48.783010 systemd-networkd[1920]: lxc_health: Gained carrier
Dec 13 01:56:49.281575 kubelet[2503]: E1213 01:56:49.281505 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:49.787354 kubelet[2503]: I1213 01:56:49.787245 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q5cf7" podStartSLOduration=11.787222297 podStartE2EDuration="11.787222297s" podCreationTimestamp="2024-12-13 01:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:44.74914484 +0000 UTC m=+87.105088442" watchObservedRunningTime="2024-12-13 01:56:49.787222297 +0000 UTC m=+92.143165875"
Dec 13 01:56:49.986561 systemd-networkd[1920]: lxc_health: Gained IPv6LL
Dec 13 01:56:50.282673 kubelet[2503]: E1213 01:56:50.282518 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:51.283198 kubelet[2503]: E1213 01:56:51.283130 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:52.179170 ntpd[1989]: Listen normally on 16 lxc_health [fe80::2451:79ff:fe90:a22f%15]:123
Dec 13 01:56:52.181220 ntpd[1989]: 13 Dec 01:56:52 ntpd[1989]: Listen normally on 16 lxc_health [fe80::2451:79ff:fe90:a22f%15]:123
Dec 13 01:56:52.284139 kubelet[2503]: E1213 01:56:52.284063 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:53.284833 kubelet[2503]: E1213 01:56:53.284763 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:54.285420 kubelet[2503]: E1213 01:56:54.285317 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:55.286619 kubelet[2503]: E1213 01:56:55.286508 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:56.286902 kubelet[2503]: E1213 01:56:56.286837 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:57.287895 kubelet[2503]: E1213 01:56:57.287794 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:58.212178 kubelet[2503]: E1213 01:56:58.212105 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:58.288994 kubelet[2503]: E1213 01:56:58.288905 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:59.290142 kubelet[2503]: E1213 01:56:59.290070 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:00.290875 kubelet[2503]: E1213 01:57:00.290804 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:01.291049 kubelet[2503]: E1213 01:57:01.290967 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:02.292098 kubelet[2503]: E1213 01:57:02.292036 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:03.292543 kubelet[2503]: E1213 01:57:03.292461 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:04.293685 kubelet[2503]: E1213 01:57:04.293601 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:05.294181 kubelet[2503]: E1213 01:57:05.294112 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:06.294967 kubelet[2503]: E1213 01:57:06.294896 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:07.295117 kubelet[2503]: E1213 01:57:07.295035 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:08.296010 kubelet[2503]: E1213 01:57:08.295939 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:09.296228 kubelet[2503]: E1213 01:57:09.296130 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:10.296874 kubelet[2503]: E1213 01:57:10.296770 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:10.498320 kubelet[2503]: E1213 01:57:10.498023 2503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:11.297016 kubelet[2503]: E1213 01:57:11.296935 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:12.297169 kubelet[2503]: E1213 01:57:12.297097 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:13.297863 kubelet[2503]: E1213 01:57:13.297800 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:14.298648 kubelet[2503]: E1213 01:57:14.298569 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:15.299212 kubelet[2503]: E1213 01:57:15.299136 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.299642 kubelet[2503]: E1213 01:57:16.299553 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:17.300603 kubelet[2503]: E1213 01:57:17.300534 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:18.212449 kubelet[2503]: E1213 01:57:18.212351 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:18.269861 containerd[2012]: time="2024-12-13T01:57:18.269790279Z" level=info msg="StopPodSandbox for \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\""
Dec 13 01:57:18.270595 containerd[2012]: time="2024-12-13T01:57:18.269937927Z" level=info msg="TearDown network for sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" successfully"
Dec 13 01:57:18.270595 containerd[2012]: time="2024-12-13T01:57:18.269963511Z" level=info msg="StopPodSandbox for \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" returns successfully"
Dec 13 01:57:18.271551 containerd[2012]: time="2024-12-13T01:57:18.271488507Z" level=info msg="RemovePodSandbox for \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\""
Dec 13 01:57:18.271666 containerd[2012]: time="2024-12-13T01:57:18.271572015Z" level=info msg="Forcibly stopping sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\""
Dec 13 01:57:18.271726 containerd[2012]: time="2024-12-13T01:57:18.271700535Z" level=info msg="TearDown network for sandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" successfully"
Dec 13 01:57:18.278876 containerd[2012]: time="2024-12-13T01:57:18.278775843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:57:18.279137 containerd[2012]: time="2024-12-13T01:57:18.278919591Z" level=info msg="RemovePodSandbox \"4ab3c6fc6698f25f1d134c79013c1897bffc2e96d67e8fce706b9a95471430a7\" returns successfully"
Dec 13 01:57:18.300954 kubelet[2503]: E1213 01:57:18.300871 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:19.302032 kubelet[2503]: E1213 01:57:19.301969 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:20.302702 kubelet[2503]: E1213 01:57:20.302632 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:20.498701 kubelet[2503]: E1213 01:57:20.498481 2503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:21.303519 kubelet[2503]: E1213 01:57:21.303450 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:22.304299 kubelet[2503]: E1213 01:57:22.304228 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:23.305293 kubelet[2503]: E1213 01:57:23.305230 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:24.306161 kubelet[2503]: E1213 01:57:24.306093 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:25.306451 kubelet[2503]: E1213 01:57:25.306286 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:26.307155 kubelet[2503]: E1213 01:57:26.307087 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:27.308410 kubelet[2503]: E1213 01:57:27.308303 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:28.309198 kubelet[2503]: E1213 01:57:28.309063 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:29.309344 kubelet[2503]: E1213 01:57:29.309268 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:30.310022 kubelet[2503]: E1213 01:57:30.309954 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:30.499693 kubelet[2503]: E1213 01:57:30.499574 2503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:31.311091 kubelet[2503]: E1213 01:57:31.311029 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:32.311543 kubelet[2503]: E1213 01:57:32.311443 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:33.312330 kubelet[2503]: E1213 01:57:33.312259 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:34.312764 kubelet[2503]: E1213 01:57:34.312703 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:35.313623 kubelet[2503]: E1213 01:57:35.313549 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:36.314028 kubelet[2503]: E1213 01:57:36.313965 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:37.314785 kubelet[2503]: E1213 01:57:37.314721 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:38.211755 kubelet[2503]: E1213 01:57:38.211692 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:38.315640 kubelet[2503]: E1213 01:57:38.315590 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:39.005585 kubelet[2503]: E1213 01:57:39.005502 2503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": unexpected EOF"
Dec 13 01:57:39.006433 kubelet[2503]: E1213 01:57:39.006097 2503 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": dial tcp 172.31.22.122:6443: connect: connection refused"
Dec 13 01:57:39.006433 kubelet[2503]: I1213 01:57:39.006155 2503 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 01:57:39.006890 kubelet[2503]: E1213 01:57:39.006844 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": dial tcp 172.31.22.122:6443: connect: connection refused" interval="200ms"
Dec 13 01:57:39.208674 kubelet[2503]: E1213 01:57:39.208606 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": dial tcp 172.31.22.122:6443: connect: connection refused" interval="400ms"
Dec 13 01:57:39.316185 kubelet[2503]: E1213 01:57:39.316129 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:39.610246 kubelet[2503]: E1213 01:57:39.610076 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": dial tcp 172.31.22.122:6443: connect: connection refused" interval="800ms"
Dec 13 01:57:40.004595 kubelet[2503]: E1213 01:57:40.003779 2503 desired_state_of_world_populator.go:318] "Error processing volume" err="error processing PVC default/test-dynamic-volume-claim: failed to fetch PVC from API server: Get \"https://172.31.22.122:6443/api/v1/namespaces/default/persistentvolumeclaims/test-dynamic-volume-claim\": dial tcp 172.31.22.122:6443: connect: connection refused - error from a previous attempt: unexpected EOF" pod="default/test-pod-1" volumeName="config"
Dec 13 01:57:40.316931 kubelet[2503]: E1213 01:57:40.316859 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:41.317289 kubelet[2503]: E1213 01:57:41.317226 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:42.317933 kubelet[2503]: E1213 01:57:42.317862 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:43.318925 kubelet[2503]: E1213 01:57:43.318864 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:44.319606 kubelet[2503]: E1213 01:57:44.319546 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:45.320510 kubelet[2503]: E1213 01:57:45.320441 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:46.321204 kubelet[2503]: E1213 01:57:46.321130 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:47.321569 kubelet[2503]: E1213 01:57:47.321506 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:48.322722 kubelet[2503]: E1213 01:57:48.322660 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:49.323803 kubelet[2503]: E1213 01:57:49.323736 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:50.324897 kubelet[2503]: E1213 01:57:50.324823 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:50.411294 kubelet[2503]: E1213 01:57:50.411220 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Dec 13 01:57:51.325907 kubelet[2503]: E1213 01:57:51.325841 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:51.546447 kubelet[2503]: E1213 01:57:51.546300 2503 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.98\": Get \"https://172.31.22.122:6443/api/v1/nodes/172.31.17.98?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:52.327089 kubelet[2503]: E1213 01:57:52.327016 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:53.327819 kubelet[2503]: E1213 01:57:53.327753 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:54.328424 kubelet[2503]: E1213 01:57:54.328335 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:55.329616 kubelet[2503]: E1213 01:57:55.329516 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:56.329750 kubelet[2503]: E1213 01:57:56.329687 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:57.330726 kubelet[2503]: E1213 01:57:57.330655 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:58.211897 kubelet[2503]: E1213 01:57:58.211827 2503 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:58.330825 kubelet[2503]: E1213 01:57:58.330760 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:59.331933 kubelet[2503]: E1213 01:57:59.331866 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:00.332791 kubelet[2503]: E1213 01:58:00.332673 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:01.333454 kubelet[2503]: E1213 01:58:01.333315 2503 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:58:01.546953 kubelet[2503]: E1213 01:58:01.546889 2503 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.17.98\": Get \"https://172.31.22.122:6443/api/v1/nodes/172.31.17.98?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:58:02.012325 kubelet[2503]: E1213 01:58:02.012260 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.17.98?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"