Dec 13 01:54:48.222503 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:54:48.222550 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:54:48.222575 kernel: KASLR disabled due to lack of seed
Dec 13 01:54:48.222592 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:54:48.222608 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:54:48.222624 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:54:48.222641 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:54:48.222657 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:54:48.222673 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:54:48.222689 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:54:48.222709 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:54:48.222724 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:54:48.222740 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:54:48.222756 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:54:48.222775 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:54:48.222795 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:54:48.222813 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:54:48.222829 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:54:48.222846 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:54:48.222862 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:54:48.222879 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:54:48.222895 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:48.222912 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:54:48.222928 kernel: Zone ranges:
Dec 13 01:54:48.222944 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:54:48.222960 kernel:   DMA32    empty
Dec 13 01:54:48.222980 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:54:48.222997 kernel: Movable zone start for each node
Dec 13 01:54:48.223014 kernel: Early memory node ranges
Dec 13 01:54:48.223030 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:54:48.223046 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:54:48.223063 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:54:48.223079 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:54:48.223095 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:54:48.223112 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:54:48.223128 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:54:48.223145 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:54:48.223161 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:48.223182 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:54:48.223199 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:54:48.223222 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:54:48.223240 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:54:48.223257 kernel: psci: Trusted OS migration not required
Dec 13 01:54:48.223278 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:54:48.223348 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:54:48.223371 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:54:48.223405 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:54:48.223429 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:54:48.223447 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:54:48.223464 kernel: CPU features: detected: Spectre-v2
Dec 13 01:54:48.223482 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:54:48.223499 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:54:48.223516 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:54:48.223534 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:54:48.223558 kernel: alternatives: applying boot alternatives
Dec 13 01:54:48.223578 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:48.223598 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:54:48.223615 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:54:48.223633 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:48.223650 kernel: Fallback order for Node 0: 0
Dec 13 01:54:48.223667 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Dec 13 01:54:48.223684 kernel: Policy zone: Normal
Dec 13 01:54:48.223701 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:54:48.223719 kernel: software IO TLB: area num 2.
Dec 13 01:54:48.223736 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:54:48.223758 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:54:48.223776 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:54:48.223793 kernel: trace event string verifier disabled
Dec 13 01:54:48.223811 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:54:48.223829 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:54:48.223847 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:54:48.223864 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec 13 01:54:48.223882 kernel: 	Tracing variant of Tasks RCU enabled.
Dec 13 01:54:48.223899 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:54:48.223916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:54:48.223933 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:54:48.223954 kernel: GICv3: 96 SPIs implemented
Dec 13 01:54:48.223972 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:54:48.223989 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:54:48.224006 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:54:48.224023 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:54:48.224040 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:54:48.224057 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:54:48.224075 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:54:48.224092 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:54:48.224109 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:54:48.224126 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:54:48.224143 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:54:48.224165 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:54:48.224184 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:54:48.224201 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:54:48.224219 kernel: Console: colour dummy device 80x25
Dec 13 01:54:48.224237 kernel: printk: console [tty1] enabled
Dec 13 01:54:48.224254 kernel: ACPI: Core revision 20230628
Dec 13 01:54:48.224272 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:54:48.224290 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:54:48.226400 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:54:48.226420 kernel: landlock: Up and running.
Dec 13 01:54:48.226448 kernel: SELinux:  Initializing.
Dec 13 01:54:48.226466 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:48.226484 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:48.226502 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:48.226520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:48.226538 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:54:48.226557 kernel: rcu: 	Max phase no-delay instances is 400.
Dec 13 01:54:48.226574 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:54:48.226596 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:54:48.226614 kernel: Remapping and enabling EFI services.
Dec 13 01:54:48.226690 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:54:48.226710 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:54:48.226730 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:54:48.226748 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:54:48.226766 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:54:48.226783 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:54:48.226801 kernel: SMP: Total of 2 processors activated.
Dec 13 01:54:48.226818 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:54:48.226842 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:54:48.226860 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:54:48.226889 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:54:48.226911 kernel: alternatives: applying system-wide alternatives
Dec 13 01:54:48.226930 kernel: devtmpfs: initialized
Dec 13 01:54:48.226948 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:54:48.226966 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:54:48.226984 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:54:48.227003 kernel: SMBIOS 3.0.0 present.
Dec 13 01:54:48.227026 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:54:48.227044 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:54:48.227062 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:54:48.227081 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:54:48.227100 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:54:48.227118 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:54:48.227137 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Dec 13 01:54:48.227160 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:54:48.227179 kernel: cpuidle: using governor menu
Dec 13 01:54:48.227198 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:54:48.227217 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:54:48.227235 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:54:48.227253 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:54:48.227271 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:54:48.227289 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:54:48.229574 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:54:48.229605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:54:48.229624 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:54:48.229643 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:54:48.229663 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:54:48.229681 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:54:48.229700 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:54:48.229718 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:54:48.229737 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:54:48.229755 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:54:48.229778 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:54:48.229796 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:54:48.229815 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:54:48.229833 kernel: ACPI: Interpreter enabled
Dec 13 01:54:48.229851 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:54:48.229869 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:54:48.229887 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:54:48.230193 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:54:48.230495 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:54:48.230699 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:54:48.230908 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:54:48.231119 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:54:48.231149 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:54:48.231170 kernel: acpiphp: Slot [1] registered
Dec 13 01:54:48.231191 kernel: acpiphp: Slot [2] registered
Dec 13 01:54:48.231210 kernel: acpiphp: Slot [3] registered
Dec 13 01:54:48.231240 kernel: acpiphp: Slot [4] registered
Dec 13 01:54:48.231261 kernel: acpiphp: Slot [5] registered
Dec 13 01:54:48.231280 kernel: acpiphp: Slot [6] registered
Dec 13 01:54:48.231746 kernel: acpiphp: Slot [7] registered
Dec 13 01:54:48.231773 kernel: acpiphp: Slot [8] registered
Dec 13 01:54:48.231792 kernel: acpiphp: Slot [9] registered
Dec 13 01:54:48.231810 kernel: acpiphp: Slot [10] registered
Dec 13 01:54:48.231829 kernel: acpiphp: Slot [11] registered
Dec 13 01:54:48.231847 kernel: acpiphp: Slot [12] registered
Dec 13 01:54:48.231866 kernel: acpiphp: Slot [13] registered
Dec 13 01:54:48.231894 kernel: acpiphp: Slot [14] registered
Dec 13 01:54:48.232229 kernel: acpiphp: Slot [15] registered
Dec 13 01:54:48.232248 kernel: acpiphp: Slot [16] registered
Dec 13 01:54:48.232267 kernel: acpiphp: Slot [17] registered
Dec 13 01:54:48.232285 kernel: acpiphp: Slot [18] registered
Dec 13 01:54:48.233150 kernel: acpiphp: Slot [19] registered
Dec 13 01:54:48.233170 kernel: acpiphp: Slot [20] registered
Dec 13 01:54:48.233188 kernel: acpiphp: Slot [21] registered
Dec 13 01:54:48.233207 kernel: acpiphp: Slot [22] registered
Dec 13 01:54:48.233231 kernel: acpiphp: Slot [23] registered
Dec 13 01:54:48.233250 kernel: acpiphp: Slot [24] registered
Dec 13 01:54:48.233268 kernel: acpiphp: Slot [25] registered
Dec 13 01:54:48.233286 kernel: acpiphp: Slot [26] registered
Dec 13 01:54:48.233473 kernel: acpiphp: Slot [27] registered
Dec 13 01:54:48.233592 kernel: acpiphp: Slot [28] registered
Dec 13 01:54:48.233611 kernel: acpiphp: Slot [29] registered
Dec 13 01:54:48.233630 kernel: acpiphp: Slot [30] registered
Dec 13 01:54:48.233649 kernel: acpiphp: Slot [31] registered
Dec 13 01:54:48.233669 kernel: PCI host bridge to bus 0000:00
Dec 13 01:54:48.234079 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:54:48.234282 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:54:48.234521 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:48.234725 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:54:48.234966 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:54:48.235205 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:54:48.237440 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:54:48.237696 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:54:48.237914 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:54:48.238139 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:48.238444 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:54:48.238661 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:54:48.238867 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:48.239083 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:54:48.239289 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:48.239550 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:48.239761 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:54:48.239984 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:54:48.240194 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:54:48.242742 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:54:48.242977 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:54:48.243171 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:54:48.243436 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:48.243466 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:54:48.243491 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:54:48.243511 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:54:48.243531 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:54:48.243550 kernel: iommu: Default domain type: Translated
Dec 13 01:54:48.243578 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:54:48.243597 kernel: efivars: Registered efivars operations
Dec 13 01:54:48.243615 kernel: vgaarb: loaded
Dec 13 01:54:48.243634 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:54:48.243654 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:54:48.243675 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:54:48.243694 kernel: pnp: PnP ACPI init
Dec 13 01:54:48.243961 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:54:48.244011 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:54:48.244032 kernel: NET: Registered PF_INET protocol family
Dec 13 01:54:48.244052 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:54:48.244075 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:54:48.244096 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:54:48.244117 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:54:48.244136 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:54:48.244157 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:54:48.244177 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:48.244203 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:48.244223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:54:48.244241 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:54:48.244260 kernel: kvm [1]: HYP mode not available
Dec 13 01:54:48.244278 kernel: Initialise system trusted keyrings
Dec 13 01:54:48.244381 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:54:48.244404 kernel: Key type asymmetric registered
Dec 13 01:54:48.244423 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:54:48.244442 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:54:48.244470 kernel: io scheduler mq-deadline registered
Dec 13 01:54:48.244490 kernel: io scheduler kyber registered
Dec 13 01:54:48.244509 kernel: io scheduler bfq registered
Dec 13 01:54:48.244759 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:54:48.244787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:54:48.244806 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:54:48.244825 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:54:48.244844 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:54:48.244869 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:54:48.244889 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:54:48.245099 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:54:48.245126 kernel: printk: console [ttyS0] disabled
Dec 13 01:54:48.245146 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:54:48.245165 kernel: printk: console [ttyS0] enabled
Dec 13 01:54:48.245183 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:54:48.245202 kernel: thunder_xcv, ver 1.0
Dec 13 01:54:48.245220 kernel: thunder_bgx, ver 1.0
Dec 13 01:54:48.245239 kernel: nicpf, ver 1.0
Dec 13 01:54:48.245262 kernel: nicvf, ver 1.0
Dec 13 01:54:48.245689 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:54:48.245910 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:47 UTC (1734054887)
Dec 13 01:54:48.245944 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:54:48.245965 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:54:48.245985 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:54:48.246005 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:54:48.246046 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:54:48.246066 kernel: Segment Routing with IPv6
Dec 13 01:54:48.246086 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:54:48.246106 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:54:48.246127 kernel: Key type dns_resolver registered
Dec 13 01:54:48.246147 kernel: registered taskstats version 1
Dec 13 01:54:48.246166 kernel: Loading compiled-in X.509 certificates
Dec 13 01:54:48.246188 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:54:48.246208 kernel: Key type .fscrypt registered
Dec 13 01:54:48.246227 kernel: Key type fscrypt-provisioning registered
Dec 13 01:54:48.246253 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:54:48.246271 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:54:48.246289 kernel: ima: No architecture policies found
Dec 13 01:54:48.247686 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:54:48.247705 kernel: clk: Disabling unused clocks
Dec 13 01:54:48.247724 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:54:48.247743 kernel: Run /init as init process
Dec 13 01:54:48.247761 kernel:   with arguments:
Dec 13 01:54:48.247779 kernel:     /init
Dec 13 01:54:48.247806 kernel:   with environment:
Dec 13 01:54:48.247824 kernel:     HOME=/
Dec 13 01:54:48.247843 kernel:     TERM=linux
Dec 13 01:54:48.247861 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:54:48.247886 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:48.247911 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:48.247933 systemd[1]: Detected architecture arm64.
Dec 13 01:54:48.248443 systemd[1]: Running in initrd.
Dec 13 01:54:48.248468 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:54:48.248489 systemd[1]: Hostname set to .
Dec 13 01:54:48.248511 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:48.248532 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:54:48.248553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:48.248576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:48.248599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:54:48.248627 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:48.248648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:54:48.248671 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:54:48.248695 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:54:48.248717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:54:48.248737 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:48.248758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:48.248783 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:54:48.248803 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:48.248823 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:48.248844 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:54:48.248864 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:48.248885 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:48.248905 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:48.248927 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:48.248947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:48.248972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:48.248993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:48.249013 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:54:48.249034 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:54:48.249054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:48.249075 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:54:48.249095 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:54:48.249116 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:48.249140 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:48.249161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:48.249182 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:48.249202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:48.249222 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:54:48.249244 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:54:48.250664 systemd-journald[250]: Collecting audit messages is disabled.
Dec 13 01:54:48.250770 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:48.250793 kernel: Bridge firewalling registered
Dec 13 01:54:48.250824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:48.250845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:48.250866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:48.250888 systemd-journald[250]: Journal started
Dec 13 01:54:48.250925 systemd-journald[250]: Runtime Journal (/run/log/journal/ec299904057d2fdccb80b683dd16fc4a) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:48.175501 systemd-modules-load[251]: Inserted module 'overlay'
Dec 13 01:54:48.214159 systemd-modules-load[251]: Inserted module 'br_netfilter'
Dec 13 01:54:48.265783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:48.265847 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:48.272619 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:48.290745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:48.305613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:48.309465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:48.334372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:48.337970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:48.363827 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:54:48.373108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:48.391775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:48.400829 dracut-cmdline[286]: dracut-dracut-053
Dec 13 01:54:48.410637 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:48.474198 systemd-resolved[292]: Positive Trust Anchors:
Dec 13 01:54:48.474241 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:48.474325 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:48.592336 kernel: SCSI subsystem initialized
Dec 13 01:54:48.602325 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:54:48.613336 kernel: iscsi: registered transport (tcp)
Dec 13 01:54:48.635333 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:54:48.635421 kernel: QLogic iSCSI HBA Driver
Dec 13 01:54:48.710332 kernel: random: crng init done
Dec 13 01:54:48.710684 systemd-resolved[292]: Defaulting to hostname 'linux'.
Dec 13 01:54:48.714379 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:48.718592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:48.741915 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:48.752569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:54:48.797562 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:54:48.797662 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:54:48.797689 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:54:48.865363 kernel: raid6: neonx8 gen() 6700 MB/s
Dec 13 01:54:48.882330 kernel: raid6: neonx4 gen() 6533 MB/s
Dec 13 01:54:48.899328 kernel: raid6: neonx2 gen() 5449 MB/s
Dec 13 01:54:48.916329 kernel: raid6: neonx1 gen() 3958 MB/s
Dec 13 01:54:48.933329 kernel: raid6: int64x8 gen() 3791 MB/s
Dec 13 01:54:48.950329 kernel: raid6: int64x4 gen() 3716 MB/s
Dec 13 01:54:48.967327 kernel: raid6: int64x2 gen() 3596 MB/s
Dec 13 01:54:48.985069 kernel: raid6: int64x1 gen() 2772 MB/s
Dec 13 01:54:48.985105 kernel: raid6: using algorithm neonx8 gen() 6700 MB/s
Dec 13 01:54:49.003048 kernel: raid6: .... xor() 4866 MB/s, rmw enabled
Dec 13 01:54:49.003111 kernel: raid6: using neon recovery algorithm
Dec 13 01:54:49.011518 kernel: xor: measuring software checksum speed
Dec 13 01:54:49.011586 kernel: 8regs : 10972 MB/sec
Dec 13 01:54:49.012593 kernel: 32regs : 11947 MB/sec
Dec 13 01:54:49.013753 kernel: arm64_neon : 9518 MB/sec
Dec 13 01:54:49.013785 kernel: xor: using function: 32regs (11947 MB/sec)
Dec 13 01:54:49.099354 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:54:49.120563 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:49.130652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:49.175629 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Dec 13 01:54:49.184901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:49.204721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:54:49.235914 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Dec 13 01:54:49.300995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:49.309702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:49.437970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:49.450339 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:54:49.491185 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:49.496767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:49.500140 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:49.506008 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:49.525860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:54:49.562395 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:49.645007 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:54:49.645092 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:54:49.676510 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:54:49.676773 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:54:49.677004 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:63:2c:28:bd:6b
Dec 13 01:54:49.658523 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:49.658791 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:49.664384 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:49.670899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:49.671206 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:49.676239 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:49.697821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:49.706809 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:49.724072 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:54:49.725957 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:54:49.737338 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:54:49.744353 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:54:49.744458 kernel: GPT:9289727 != 16777215
Dec 13 01:54:49.744485 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:54:49.745386 kernel: GPT:9289727 != 16777215
Dec 13 01:54:49.745420 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:54:49.747339 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:49.750804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:49.762641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:49.801094 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:49.860373 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (513)
Dec 13 01:54:49.892412 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (535)
Dec 13 01:54:49.943039 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:54:49.980989 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:54:50.009874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:50.025573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:50.027964 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:50.042598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:54:50.067225 disk-uuid[659]: Primary Header is updated.
Dec 13 01:54:50.067225 disk-uuid[659]: Secondary Entries is updated.
Dec 13 01:54:50.067225 disk-uuid[659]: Secondary Header is updated.
Dec 13 01:54:50.076371 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:50.083338 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:50.092374 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:50.092445 kernel: block device autoloading is deprecated and will be removed.
Dec 13 01:54:51.091473 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:51.092071 disk-uuid[660]: The operation has completed successfully.
Dec 13 01:54:51.300588 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:54:51.300906 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:54:51.338694 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:54:51.350432 sh[1004]: Success
Dec 13 01:54:51.372355 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:54:51.483268 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:54:51.491816 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:54:51.501380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:54:51.538745 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:54:51.538813 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:51.541324 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:54:51.541360 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:54:51.541752 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:54:51.593341 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:54:51.607274 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:54:51.607799 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:54:51.621788 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:54:51.628855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:54:51.655280 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:51.655400 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:51.655431 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:51.673355 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:51.693892 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:54:51.697381 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:51.712435 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:54:51.725767 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:54:51.836765 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:54:51.864752 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:51.910001 systemd-networkd[1196]: lo: Link UP
Dec 13 01:54:51.910024 systemd-networkd[1196]: lo: Gained carrier
Dec 13 01:54:51.914862 systemd-networkd[1196]: Enumeration completed
Dec 13 01:54:51.915022 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:51.920188 systemd[1]: Reached target network.target - Network.
Dec 13 01:54:51.922442 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:51.922448 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:54:51.926848 systemd-networkd[1196]: eth0: Link UP
Dec 13 01:54:51.926860 systemd-networkd[1196]: eth0: Gained carrier
Dec 13 01:54:51.927110 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:51.960447 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.20.234/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:54:52.121374 ignition[1115]: Ignition 2.19.0
Dec 13 01:54:52.121998 ignition[1115]: Stage: fetch-offline
Dec 13 01:54:52.122714 ignition[1115]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:52.122742 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:52.123401 ignition[1115]: Ignition finished successfully
Dec 13 01:54:52.133454 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:52.145691 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:54:52.181450 ignition[1206]: Ignition 2.19.0
Dec 13 01:54:52.181480 ignition[1206]: Stage: fetch
Dec 13 01:54:52.183083 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:52.183110 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:52.183381 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:52.202769 ignition[1206]: PUT result: OK
Dec 13 01:54:52.207931 ignition[1206]: parsed url from cmdline: ""
Dec 13 01:54:52.207949 ignition[1206]: no config URL provided
Dec 13 01:54:52.207965 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:54:52.207991 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:54:52.208033 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:52.209910 ignition[1206]: PUT result: OK
Dec 13 01:54:52.209993 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:54:52.214210 ignition[1206]: GET result: OK
Dec 13 01:54:52.215159 ignition[1206]: parsing config with SHA512: 3f5ccb34e7c7ad4b235bf539122a6992d49399be9aef4763726cda70ff77520e95a3b66e7c9be4230d0c37ae23095e6b1e833bffacdfe8deead128a150df5e97
Dec 13 01:54:52.225366 unknown[1206]: fetched base config from "system"
Dec 13 01:54:52.225657 unknown[1206]: fetched base config from "system"
Dec 13 01:54:52.226250 ignition[1206]: fetch: fetch complete
Dec 13 01:54:52.225671 unknown[1206]: fetched user config from "aws"
Dec 13 01:54:52.226261 ignition[1206]: fetch: fetch passed
Dec 13 01:54:52.226400 ignition[1206]: Ignition finished successfully
Dec 13 01:54:52.236694 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:54:52.246661 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:54:52.287972 ignition[1213]: Ignition 2.19.0
Dec 13 01:54:52.288000 ignition[1213]: Stage: kargs
Dec 13 01:54:52.288993 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:52.289020 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:52.289190 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:52.291339 ignition[1213]: PUT result: OK
Dec 13 01:54:52.301129 ignition[1213]: kargs: kargs passed
Dec 13 01:54:52.301251 ignition[1213]: Ignition finished successfully
Dec 13 01:54:52.308374 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:52.324235 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:54:52.348626 ignition[1219]: Ignition 2.19.0
Dec 13 01:54:52.348655 ignition[1219]: Stage: disks
Dec 13 01:54:52.350065 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:52.350091 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:52.350271 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:52.354458 ignition[1219]: PUT result: OK
Dec 13 01:54:52.362727 ignition[1219]: disks: disks passed
Dec 13 01:54:52.363075 ignition[1219]: Ignition finished successfully
Dec 13 01:54:52.373481 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:54:52.378203 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:52.382273 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:54:52.384560 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:52.386421 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:54:52.388677 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:54:52.409823 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:54:52.458090 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:54:52.465850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:54:52.487762 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:54:52.580351 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:54:52.582426 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:54:52.586182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:54:52.606609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:54:52.613582 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:54:52.615989 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:54:52.616072 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:54:52.616121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:54:52.637496 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1246)
Dec 13 01:54:52.641619 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:52.641689 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:52.642845 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:52.650142 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:54:52.659345 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:52.660723 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:54:52.670883 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:54:52.944671 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:54:52.954287 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:54:52.963620 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:54:52.981849 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:54:53.284619 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:54:53.294626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:54:53.302046 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:53.334185 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:54:53.336726 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:53.366605 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:54:53.381973 ignition[1359]: INFO : Ignition 2.19.0
Dec 13 01:54:53.383889 ignition[1359]: INFO : Stage: mount
Dec 13 01:54:53.385503 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:53.385503 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:53.385503 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:53.392005 ignition[1359]: INFO : PUT result: OK
Dec 13 01:54:53.396065 ignition[1359]: INFO : mount: mount passed
Dec 13 01:54:53.398088 ignition[1359]: INFO : Ignition finished successfully
Dec 13 01:54:53.401913 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:54:53.412505 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:54:53.457430 systemd-networkd[1196]: eth0: Gained IPv6LL
Dec 13 01:54:53.591783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:54:53.614329 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1370)
Dec 13 01:54:53.614422 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:54:53.617532 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:54:53.617573 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:54:53.623329 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:54:53.627242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:54:53.665017 ignition[1386]: INFO : Ignition 2.19.0
Dec 13 01:54:53.667919 ignition[1386]: INFO : Stage: files
Dec 13 01:54:53.667919 ignition[1386]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:53.667919 ignition[1386]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:53.667919 ignition[1386]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:53.676556 ignition[1386]: INFO : PUT result: OK
Dec 13 01:54:53.681249 ignition[1386]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:54:53.703999 ignition[1386]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:54:53.703999 ignition[1386]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:54:53.710947 ignition[1386]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:54:53.713902 ignition[1386]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:54:53.716722 ignition[1386]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:54:53.716413 unknown[1386]: wrote ssh authorized keys file for user: core
Dec 13 01:54:53.724371 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:54:53.724371 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:54:53.724371 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:53.733972 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:53.756888 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:54:54.202017 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Dec 13 01:54:54.563200 ignition[1386]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:54:54.563200 ignition[1386]: INFO : files: op(8): [started] processing unit "containerd.service"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: op(8): [finished] processing unit "containerd.service"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:54:54.569943 ignition[1386]: INFO : files: files passed
Dec 13 01:54:54.569943 ignition[1386]: INFO : Ignition finished successfully
Dec 13 01:54:54.588083 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:54:54.607708 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:54:54.618237 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:54:54.625183 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:54:54.625429 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:54:54.660563 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:54.660563 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:54.667089 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:54:54.671815 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:54:54.674645 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:54:54.690581 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:54:54.744075 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:54:54.746089 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:54:54.749952 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:54:54.754612 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:54:54.758234 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:54:54.766841 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:54:54.796561 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:54:54.807603 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:54:54.838444 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:54.839037 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:54.840266 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:54:54.841022 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:54:54.841557 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:54:54.842536 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:54:54.843129 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:54:54.844020 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:54:54.844894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:54:54.845500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:54:54.846098 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:54:54.846962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:54.847577 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:54:54.848062 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:54:54.848466 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:54:54.848943 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:54:54.849227 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:54.850436 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:54.850818 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:54.851318 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:54:54.868501 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:54.882323 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:54:54.882661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:54.914503 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:54:54.918713 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:54:54.921191 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:54:54.921432 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:54:54.937753 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:54:54.945720 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:54:54.950213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:54.973487 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:54:54.978998 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:54:54.981404 ignition[1439]: INFO : Ignition 2.19.0
Dec 13 01:54:54.981404 ignition[1439]: INFO : Stage: umount
Dec 13 01:54:54.981404 ignition[1439]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:54:54.981404 ignition[1439]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:54:54.981404 ignition[1439]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:54:54.981404 ignition[1439]: INFO : PUT result: OK
Dec 13 01:54:54.980646 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:54.986066 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:54:54.986404 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:55.008151 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:54:55.011771 ignition[1439]: INFO : umount: umount passed
Dec 13 01:54:55.011771 ignition[1439]: INFO : Ignition finished successfully
Dec 13 01:54:55.012237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:54:55.022440 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:54:55.026361 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:54:55.036694 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:54:55.036897 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:54:55.042099 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:54:55.042547 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:54:55.048549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:54:55.048734 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:54:55.056489 systemd[1]: Stopped target network.target - Network.
Dec 13 01:54:55.059208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:54:55.059376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:54:55.059534 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:54:55.060201 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:54:55.074172 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:55.076477 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:54:55.079429 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:54:55.082225 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:54:55.082324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:55.089003 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:54:55.089085 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:55.091441 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:54:55.091530 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:54:55.093465 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:54:55.093568 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:55.096043 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:54:55.104655 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:55.106100 systemd-networkd[1196]: eth0: DHCPv6 lease lost
Dec 13 01:54:55.111812 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:54:55.124835 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:54:55.125380 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:55.132835 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:54:55.133123 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:54:55.142632 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:54:55.142743 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:54:55.159906 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:54:55.165579 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:54:55.166775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:54:55.172522 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:54:55.172636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:54:55.175698 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:54:55.175850 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:54:55.184421 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:54:55.184525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:54:55.188471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:54:55.219992 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:54:55.223963 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:54:55.231871 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:54:55.232659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:54:55.238802 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:54:55.239838 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:54:55.247664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:54:55.247787 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:54:55.251287 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 01:54:55.251398 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:54:55.259953 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:54:55.260052 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:54:55.262967 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:54:55.263054 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:54:55.269631 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:54:55.269822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:54:55.280639 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:54:55.280737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:54:55.295567 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:54:55.298199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:54:55.298339 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:54:55.300757 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:54:55.300850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:54:55.314605 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:54:55.314824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:54:55.320567 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:54:55.343720 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:54:55.372402 systemd[1]: Switching root. Dec 13 01:54:55.407280 systemd-journald[250]: Journal stopped Dec 13 01:54:57.546889 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:54:57.547019 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:54:57.547063 kernel: SELinux: policy capability open_perms=1
Dec 13 01:54:57.547095 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:54:57.547127 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:54:57.547158 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:54:57.547197 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:54:57.547234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:54:57.547267 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:54:57.548414 kernel: audit: type=1403 audit(1734054895.881:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:54:57.548477 systemd[1]: Successfully loaded SELinux policy in 70.903ms.
Dec 13 01:54:57.548529 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.005ms.
Dec 13 01:54:57.548569 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:57.548608 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:57.548643 systemd[1]: Detected architecture arm64.
Dec 13 01:54:57.548675 systemd[1]: Detected first boot.
Dec 13 01:54:57.548707 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:57.548740 zram_generator::config[1498]: No configuration found.
Dec 13 01:54:57.548777 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:54:57.548810 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:54:57.548843 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:54:57.548887 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:54:57.548923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:54:57.548956 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:54:57.548988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:54:57.549020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:54:57.549052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:54:57.549089 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:54:57.549118 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:54:57.549148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:57.549178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:57.549212 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:54:57.549244 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:54:57.549276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:54:57.549775 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:57.549821 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:54:57.549864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:57.549897 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:54:57.549927 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:57.549960 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:57.549996 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:57.550029 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:57.550059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:54:57.550089 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:54:57.550118 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:57.550150 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:57.550186 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:57.550216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:57.550251 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:57.550283 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:54:57.550363 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:54:57.550396 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:54:57.550428 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:54:57.550460 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:54:57.550490 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:54:57.550520 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:54:57.550549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:54:57.550584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:54:57.550618 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:57.550647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:54:57.550683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:54:57.550712 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:54:57.550742 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:54:57.550772 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:54:57.550805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:54:57.550835 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:54:57.550870 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:54:57.550908 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:54:57.550940 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:57.550970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:57.551003 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:54:57.551033 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:54:57.551062 kernel: fuse: init (API version 7.39)
Dec 13 01:54:57.551092 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:57.551127 kernel: ACPI: bus type drm_connector registered
Dec 13 01:54:57.551214 systemd-journald[1602]: Collecting audit messages is disabled.
Dec 13 01:54:57.551266 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:54:57.551355 systemd-journald[1602]: Journal started
Dec 13 01:54:57.551437 systemd-journald[1602]: Runtime Journal (/run/log/journal/ec299904057d2fdccb80b683dd16fc4a) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:57.560410 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:54:57.563975 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:57.574348 kernel: loop: module loaded
Dec 13 01:54:57.569317 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:54:57.574719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:54:57.577225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:54:57.581872 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:54:57.585273 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:57.592468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:54:57.598773 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:54:57.599147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:54:57.603648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:54:57.604018 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:54:57.607342 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:54:57.607719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:54:57.610886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:54:57.611273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:54:57.614472 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:54:57.614861 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:54:57.617900 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:54:57.620885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:54:57.624231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:57.629238 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:54:57.634316 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:54:57.664136 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:54:57.675868 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:54:57.692626 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:54:57.695486 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:54:57.708640 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:54:57.732613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:54:57.735728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:54:57.740596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:54:57.743615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:54:57.761589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:57.776624 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:57.786937 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:54:57.790681 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:54:57.800352 systemd-journald[1602]: Time spent on flushing to /var/log/journal/ec299904057d2fdccb80b683dd16fc4a is 85.919ms for 880 entries.
Dec 13 01:54:57.800352 systemd-journald[1602]: System Journal (/var/log/journal/ec299904057d2fdccb80b683dd16fc4a) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:54:57.898832 systemd-journald[1602]: Received client request to flush runtime journal.
Dec 13 01:54:57.823073 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:54:57.827955 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:54:57.868718 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:57.886711 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:54:57.909903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:57.917056 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:54:57.924707 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Dec 13 01:54:57.924732 systemd-tmpfiles[1651]: ACLs are not supported, ignoring.
Dec 13 01:54:57.939214 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:57.948671 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:54:57.970716 udevadm[1660]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 01:54:58.030336 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:54:58.047729 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:58.080197 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Dec 13 01:54:58.080800 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Dec 13 01:54:58.091227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:58.821968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:54:58.832607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:58.902114 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Dec 13 01:54:58.938253 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:58.957145 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:54:58.996642 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:54:59.077019 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:54:59.092156 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:59.130355 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1686)
Dec 13 01:54:59.158410 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1686)
Dec 13 01:54:59.175051 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:54:59.347017 systemd-networkd[1685]: lo: Link UP
Dec 13 01:54:59.347931 systemd-networkd[1685]: lo: Gained carrier
Dec 13 01:54:59.351434 systemd-networkd[1685]: Enumeration completed
Dec 13 01:54:59.353031 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:54:59.355112 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:59.355221 systemd-networkd[1685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:54:59.360517 systemd-networkd[1685]: eth0: Link UP
Dec 13 01:54:59.360882 systemd-networkd[1685]: eth0: Gained carrier
Dec 13 01:54:59.360913 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:54:59.365710 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:54:59.388511 systemd-networkd[1685]: eth0: DHCPv4 address 172.31.20.234/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:54:59.432706 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1682)
Dec 13 01:54:59.520024 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:59.666010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:59.669768 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:54:59.681728 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:54:59.691134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:59.710403 lvm[1806]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:59.750158 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:54:59.754141 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:59.763750 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:54:59.786759 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:54:59.824737 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:54:59.828209 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:54:59.831676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:54:59.831837 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:54:59.833878 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:54:59.837755 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:54:59.848612 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:54:59.854664 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:54:59.857558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:54:59.862101 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:54:59.875776 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:54:59.893714 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:54:59.904549 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:54:59.937674 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:54:59.946495 kernel: loop0: detected capacity change from 0 to 114432
Dec 13 01:54:59.956443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:54:59.964471 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:55:00.041387 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:55:00.067375 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 01:55:00.134366 kernel: loop2: detected capacity change from 0 to 114328
Dec 13 01:55:00.223359 kernel: loop3: detected capacity change from 0 to 52536
Dec 13 01:55:00.351650 kernel: loop4: detected capacity change from 0 to 114432
Dec 13 01:55:00.371325 kernel: loop5: detected capacity change from 0 to 194512
Dec 13 01:55:00.402353 kernel: loop6: detected capacity change from 0 to 114328
Dec 13 01:55:00.417838 kernel: loop7: detected capacity change from 0 to 52536
Dec 13 01:55:00.428784 (sd-merge)[1833]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:55:00.429818 (sd-merge)[1833]: Merged extensions into '/usr'.
Dec 13 01:55:00.438353 systemd[1]: Reloading requested from client PID 1820 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:55:00.438387 systemd[1]: Reloading...
Dec 13 01:55:00.561448 systemd-networkd[1685]: eth0: Gained IPv6LL
Dec 13 01:55:00.595528 ldconfig[1816]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:55:00.597351 zram_generator::config[1864]: No configuration found.
Dec 13 01:55:00.899950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:01.058589 systemd[1]: Reloading finished in 619 ms.
Dec 13 01:55:01.085910 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:55:01.089709 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:55:01.094062 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:55:01.110735 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:55:01.126707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:55:01.146834 systemd[1]: Reloading requested from client PID 1922 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:55:01.146885 systemd[1]: Reloading...
Dec 13 01:55:01.185663 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:55:01.186501 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:55:01.188660 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:55:01.189406 systemd-tmpfiles[1923]: ACLs are not supported, ignoring.
Dec 13 01:55:01.189641 systemd-tmpfiles[1923]: ACLs are not supported, ignoring.
Dec 13 01:55:01.197480 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:55:01.197501 systemd-tmpfiles[1923]: Skipping /boot
Dec 13 01:55:01.222024 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:55:01.222222 systemd-tmpfiles[1923]: Skipping /boot
Dec 13 01:55:01.316335 zram_generator::config[1955]: No configuration found.
Dec 13 01:55:01.597287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:01.758851 systemd[1]: Reloading finished in 611 ms.
Dec 13 01:55:01.795623 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:55:01.814645 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:01.828027 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:55:01.837962 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:55:01.853757 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:55:01.872814 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:55:01.907259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:01.912881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:55:01.921206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:55:01.941568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:55:01.945568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:01.980766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:01.981270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:01.996703 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:55:02.008332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:02.018436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:55:02.021716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:02.022240 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:55:02.044024 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:55:02.082228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:55:02.082797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:55:02.088745 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:55:02.094897 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:55:02.100239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:55:02.100840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:55:02.106066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:55:02.111089 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:55:02.129409 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:55:02.129749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:55:02.141062 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:55:02.144247 augenrules[2048]: No rules
Dec 13 01:55:02.146997 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:55:02.149857 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:02.155869 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:55:02.156738 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:55:02.192047 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:55:02.212113 systemd-resolved[2015]: Positive Trust Anchors:
Dec 13 01:55:02.212147 systemd-resolved[2015]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:55:02.212212 systemd-resolved[2015]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:55:02.223195 systemd-resolved[2015]: Defaulting to hostname 'linux'.
Dec 13 01:55:02.227410 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:55:02.230025 systemd[1]: Reached target network.target - Network.
Dec 13 01:55:02.231853 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:55:02.234102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:55:02.236825 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:55:02.239285 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:55:02.242379 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:55:02.245643 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:55:02.249369 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:55:02.252012 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:55:02.254526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:55:02.254590 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:55:02.257136 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:55:02.262096 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:55:02.270063 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:55:02.275826 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:55:02.280834 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:55:02.283205 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:55:02.285278 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:55:02.287562 systemd[1]: System is tainted: cgroupsv1
Dec 13 01:55:02.287674 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:55:02.287731 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:55:02.303601 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:55:02.310667 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:55:02.322720 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:55:02.336248 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:55:02.346713 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:55:02.350560 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:55:02.360273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:02.368852 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:55:02.396634 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:55:02.416253 jq[2069]: false
Dec 13 01:55:02.426569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:55:02.455703 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:55:02.464701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:55:02.473393 dbus-daemon[2068]: [system] SELinux support is enabled
Dec 13 01:55:02.486827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:55:02.515548 dbus-daemon[2068]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1685 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found loop4
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found loop5
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found loop6
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found loop7
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p1
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p2
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p3
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found usr
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p4
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p6
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p7
Dec 13 01:55:02.533389 extend-filesystems[2070]: Found nvme0n1p9
Dec 13 01:55:02.533389 extend-filesystems[2070]: Checking size of /dev/nvme0n1p9
Dec 13 01:55:02.537001 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:55:02.543981 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:55:02.555827 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:55:02.580783 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: ---------------------------------------------------- Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: corporation. Support and training for ntp-4 are Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: available at https://www.nwtime.org/support Dec 13 01:55:02.603093 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: ---------------------------------------------------- Dec 13 01:55:02.599585 ntpd[2075]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:55:02.599639 ntpd[2075]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:55:02.599660 ntpd[2075]: ---------------------------------------------------- Dec 13 01:55:02.599679 ntpd[2075]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:55:02.599700 ntpd[2075]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:55:02.599719 ntpd[2075]: corporation. 
Support and training for ntp-4 are Dec 13 01:55:02.599739 ntpd[2075]: available at https://www.nwtime.org/support Dec 13 01:55:02.599756 ntpd[2075]: ---------------------------------------------------- Dec 13 01:55:02.607434 ntpd[2075]: proto: precision = 0.108 usec (-23) Dec 13 01:55:02.609482 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: proto: precision = 0.108 usec (-23) Dec 13 01:55:02.609482 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: basedate set to 2024-11-30 Dec 13 01:55:02.609482 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: gps base set to 2024-12-01 (week 2343) Dec 13 01:55:02.608073 ntpd[2075]: basedate set to 2024-11-30 Dec 13 01:55:02.608115 ntpd[2075]: gps base set to 2024-12-01 (week 2343) Dec 13 01:55:02.616455 ntpd[2075]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen normally on 3 eth0 172.31.20.234:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen normally on 4 lo [::1]:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listen normally on 5 eth0 [fe80::463:2cff:fe28:bd6b%2]:123 Dec 13 01:55:02.617614 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: Listening on routing socket on fd #22 for interface updates Dec 13 01:55:02.616563 ntpd[2075]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:55:02.616874 ntpd[2075]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:55:02.616943 ntpd[2075]: Listen normally on 3 eth0 172.31.20.234:123 Dec 13 01:55:02.617011 ntpd[2075]: Listen normally on 4 lo [::1]:123 Dec 13 01:55:02.617087 ntpd[2075]: Listen normally on 5 eth0 [fe80::463:2cff:fe28:bd6b%2]:123 Dec 13 01:55:02.617151 ntpd[2075]: Listening on routing socket on fd 
#22 for interface updates Dec 13 01:55:02.625663 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:55:02.634419 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.635159 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.635159 ntpd[2075]: 13 Dec 01:55:02 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.634501 ntpd[2075]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:55:02.650165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:55:02.650749 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:55:02.691908 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:55:02.702620 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetch successful Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetch successful Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetch successful Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 
01:55:02.714 INFO Fetch successful Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetch failed with 404: resource not found Dec 13 01:55:02.714642 coreos-metadata[2066]: Dec 13 01:55:02.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:55:02.734867 extend-filesystems[2070]: Resized partition /dev/nvme0n1p9 Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetch successful Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetch successful Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetch successful Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetch successful Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:55:02.737333 coreos-metadata[2066]: Dec 13 01:55:02.731 INFO Fetch successful Dec 13 01:55:02.739838 jq[2094]: true Dec 13 01:55:02.783612 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:55:02.773157 (ntainerd)[2108]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:55:02.784343 extend-filesystems[2116]: resize2fs 1.47.1 (20-May-2024) Dec 13 
01:55:02.814009 update_engine[2091]: I20241213 01:55:02.750530 2091 main.cc:92] Flatcar Update Engine starting Dec 13 01:55:02.814009 update_engine[2091]: I20241213 01:55:02.755112 2091 update_check_scheduler.cc:74] Next update check in 7m35s Dec 13 01:55:02.778536 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:55:02.844390 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:55:02.845195 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:55:02.890368 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:55:02.916229 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:55:02.922289 dbus-daemon[2068]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:55:02.918174 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:55:02.921334 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:55:02.921393 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:55:02.924939 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:55:02.930146 jq[2118]: true Dec 13 01:55:02.944666 extend-filesystems[2116]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:55:02.944666 extend-filesystems[2116]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:55:02.944666 extend-filesystems[2116]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:55:02.934161 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 01:55:02.985048 extend-filesystems[2070]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:55:02.942356 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:55:02.946342 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:55:02.946986 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:55:03.085859 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:55:03.104160 systemd-logind[2084]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:55:03.104243 systemd-logind[2084]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:55:03.104792 systemd-logind[2084]: New seat seat0. Dec 13 01:55:03.109812 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:55:03.146677 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:55:03.158721 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:55:03.182363 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:55:03.199927 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:55:03.344329 bash[2186]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:03.357736 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:55:03.391006 systemd[1]: Starting sshkeys.service... Dec 13 01:55:03.462796 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: Initializing new seelog logger Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.499526 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.499235 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:55:03.512442 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO Proxy environment variables: Dec 13 01:55:03.513505 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.513505 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.514507 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.523681 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:03.525324 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:55:03.525727 amazon-ssm-agent[2173]: 2024/12/13 01:55:03 processing appconfig overrides Dec 13 01:55:03.567829 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2154) Dec 13 01:55:03.584491 locksmithd[2137]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:55:03.616019 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO https_proxy: Dec 13 01:55:03.704321 containerd[2108]: time="2024-12-13T01:55:03.701787168Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:55:03.718856 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO http_proxy: Dec 13 01:55:03.735957 coreos-metadata[2198]: Dec 13 01:55:03.734 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:55:03.735957 coreos-metadata[2198]: Dec 13 01:55:03.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:55:03.746324 coreos-metadata[2198]: Dec 13 01:55:03.737 INFO Fetch successful Dec 13 01:55:03.746324 coreos-metadata[2198]: Dec 13 01:55:03.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:55:03.746324 coreos-metadata[2198]: Dec 13 01:55:03.743 INFO Fetch successful Dec 13 01:55:03.749499 unknown[2198]: wrote ssh authorized keys file for user: core Dec 13 01:55:03.828323 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO no_proxy: Dec 13 01:55:03.868338 update-ssh-keys[2236]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:03.872658 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:55:03.892290 systemd[1]: Finished sshkeys.service. 
Dec 13 01:55:03.928409 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:55:03.929006 dbus-daemon[2068]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:55:03.930472 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:55:03.935798 dbus-daemon[2068]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2166 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:55:03.960471 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:55:03.991388 containerd[2108]: time="2024-12-13T01:55:03.989773381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.006607 containerd[2108]: time="2024-12-13T01:55:04.006526905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:04.006607 containerd[2108]: time="2024-12-13T01:55:04.006598929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:55:04.006811 containerd[2108]: time="2024-12-13T01:55:04.006639357Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:55:04.007070 containerd[2108]: time="2024-12-13T01:55:04.007026861Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:55:04.007133 containerd[2108]: time="2024-12-13T01:55:04.007079793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007219269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007260585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007737309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007772865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007804293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:04.007910 containerd[2108]: time="2024-12-13T01:55:04.007829565Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.008208 containerd[2108]: time="2024-12-13T01:55:04.007995393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.015354 containerd[2108]: time="2024-12-13T01:55:04.011717482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:55:04.015354 containerd[2108]: time="2024-12-13T01:55:04.014960698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:55:04.015354 containerd[2108]: time="2024-12-13T01:55:04.015037870Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:55:04.015618 containerd[2108]: time="2024-12-13T01:55:04.015413638Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:55:04.015618 containerd[2108]: time="2024-12-13T01:55:04.015575458Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:55:04.028359 containerd[2108]: time="2024-12-13T01:55:04.028089358Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:55:04.028359 containerd[2108]: time="2024-12-13T01:55:04.028213522Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:55:04.028359 containerd[2108]: time="2024-12-13T01:55:04.028256758Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:55:04.028626 containerd[2108]: time="2024-12-13T01:55:04.028364446Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:55:04.028626 containerd[2108]: time="2024-12-13T01:55:04.028499314Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.028832854Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029476630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029705554Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029742298Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029775046Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029806486Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029837170Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029868298Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029900050Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029934370Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029965810Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.029996782Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.030027922Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:55:04.030352 containerd[2108]: time="2024-12-13T01:55:04.030081106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.031025 containerd[2108]: time="2024-12-13T01:55:04.030115846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.031025 containerd[2108]: time="2024-12-13T01:55:04.030146986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.031025 containerd[2108]: time="2024-12-13T01:55:04.030180778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.031025 containerd[2108]: time="2024-12-13T01:55:04.030212794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.031025 containerd[2108]: time="2024-12-13T01:55:04.030246850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.030277030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039407590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039483082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039538726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039572842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039603202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039648010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039688054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039736258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039765430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039792214Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039899374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039936922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:55:04.041540 containerd[2108]: time="2024-12-13T01:55:04.039965746Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 01:55:04.042318 containerd[2108]: time="2024-12-13T01:55:04.039994642Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:55:04.042318 containerd[2108]: time="2024-12-13T01:55:04.040018678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.042318 containerd[2108]: time="2024-12-13T01:55:04.040048762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:55:04.042318 containerd[2108]: time="2024-12-13T01:55:04.040072438Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:55:04.042318 containerd[2108]: time="2024-12-13T01:55:04.040097830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:55:04.042576 containerd[2108]: time="2024-12-13T01:55:04.040688842Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:04.042576 containerd[2108]: time="2024-12-13T01:55:04.040822834Z" level=info msg="Connect containerd service" Dec 13 01:55:04.042576 containerd[2108]: time="2024-12-13T01:55:04.040929778Z" level=info msg="using legacy CRI server" Dec 13 01:55:04.042576 containerd[2108]: time="2024-12-13T01:55:04.040950970Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:04.042576 containerd[2108]: 
time="2024-12-13T01:55:04.041124274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:04.048564 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:55:04.059352 containerd[2108]: time="2024-12-13T01:55:04.057980674Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:04.071903 containerd[2108]: time="2024-12-13T01:55:04.069262042Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:04.072052 containerd[2108]: time="2024-12-13T01:55:04.071931682Z" level=info msg="Start recovering state" Dec 13 01:55:04.072136 containerd[2108]: time="2024-12-13T01:55:04.072098110Z" level=info msg="Start event monitor" Dec 13 01:55:04.072192 containerd[2108]: time="2024-12-13T01:55:04.072138010Z" level=info msg="Start snapshots syncer" Dec 13 01:55:04.072192 containerd[2108]: time="2024-12-13T01:55:04.072166174Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:04.073690 containerd[2108]: time="2024-12-13T01:55:04.072202234Z" level=info msg="Start streaming server" Dec 13 01:55:04.073690 containerd[2108]: time="2024-12-13T01:55:04.073134658Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:04.073690 containerd[2108]: time="2024-12-13T01:55:04.073242838Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:04.080935 polkitd[2256]: Started polkitd version 121 Dec 13 01:55:04.082237 systemd[1]: Started containerd.service - containerd container runtime. 
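The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is the expected state before any CNI network is installed on the node. For reference, a minimal bridge network config dropped into /etc/cni/net.d (e.g. as 10-mynet.conf) would satisfy the CRI plugin's loader — this is a sketch only; the network name, bridge name, and subnet below are hypothetical and not taken from this log:

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

In practice a CNI provider (or kubeadm addon) writes this file; the "cni config syncer" started a few entries earlier picks it up without a containerd restart.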
Dec 13 01:55:04.088038 containerd[2108]: time="2024-12-13T01:55:04.081472678Z" level=info msg="containerd successfully booted in 0.390173s" Dec 13 01:55:04.104574 polkitd[2256]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:55:04.104734 polkitd[2256]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:55:04.107835 polkitd[2256]: Finished loading, compiling and executing 2 rules Dec 13 01:55:04.109609 dbus-daemon[2068]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:55:04.109901 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:55:04.114637 polkitd[2256]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:55:04.145999 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO Agent will take identity from EC2 Dec 13 01:55:04.160009 systemd-hostnamed[2166]: Hostname set to (transient) Dec 13 01:55:04.160045 systemd-resolved[2015]: System hostname changed to 'ip-172-31-20-234'. Dec 13 01:55:04.247496 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.347743 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.447498 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:04.546777 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:55:04.649315 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:55:04.747390 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:55:04.847484 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Dec 13 01:55:04.947915 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [Registrar] Starting registrar module Dec 13 01:55:05.048903 amazon-ssm-agent[2173]: 2024-12-13 01:55:03 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:55:05.086254 sshd_keygen[2123]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:55:05.146833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:05.167918 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:05.213598 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:05.214123 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:05.234857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:05.253742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:05.265770 (kubelet)[2342]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:05.303526 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:05.321194 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:55:05.343795 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:55:05.347402 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:55:05.354004 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:55:05.360616 systemd[1]: Startup finished in 9.299s (kernel) + 9.547s (userspace) = 18.847s. Dec 13 01:55:05.519377 amazon-ssm-agent[2173]: 2024-12-13 01:55:05 INFO [EC2Identity] EC2 registration was successful. 
Dec 13 01:55:05.549619 amazon-ssm-agent[2173]: 2024-12-13 01:55:05 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:55:05.550564 amazon-ssm-agent[2173]: 2024-12-13 01:55:05 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:55:05.550564 amazon-ssm-agent[2173]: 2024-12-13 01:55:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:55:05.620664 amazon-ssm-agent[2173]: 2024-12-13 01:55:05 INFO [CredentialRefresher] Next credential rotation will be in 31.408309793433332 minutes Dec 13 01:55:06.357429 kubelet[2342]: E1213 01:55:06.357100 2342 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:06.361946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:06.362633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:06.577662 amazon-ssm-agent[2173]: 2024-12-13 01:55:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:55:06.679798 amazon-ssm-agent[2173]: 2024-12-13 01:55:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2359) started Dec 13 01:55:06.780795 amazon-ssm-agent[2173]: 2024-12-13 01:55:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:55:09.141836 systemd-resolved[2015]: Clock change detected. Flushing caches. Dec 13 01:55:09.820983 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
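The kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is normal on a node that has not yet been joined to a cluster: that file is usually generated by kubeadm during init/join. As a hedged sketch of what eventually lands there, a minimal KubeletConfiguration has this shape — field values below are illustrative assumptions, not values from this host:

```yaml
# Sketch of /var/lib/kubelet/config.yaml (normally written by kubeadm)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# assumption: driver must match the container runtime; this log's containerd
# config shows SystemdCgroup:false, i.e. the cgroupfs driver
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
```

Once the file exists, the later restart of kubelet.service in this log proceeds past this error.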
Dec 13 01:55:09.828329 systemd[1]: Started sshd@0-172.31.20.234:22-139.178.68.195:50570.service - OpenSSH per-connection server daemon (139.178.68.195:50570). Dec 13 01:55:10.015098 sshd[2368]: Accepted publickey for core from 139.178.68.195 port 50570 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:10.018653 sshd[2368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:10.035087 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:10.044303 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:10.050306 systemd-logind[2084]: New session 1 of user core. Dec 13 01:55:10.083119 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:10.095568 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:10.112162 (systemd)[2374]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:10.331586 systemd[2374]: Queued start job for default target default.target. Dec 13 01:55:10.333354 systemd[2374]: Created slice app.slice - User Application Slice. Dec 13 01:55:10.333418 systemd[2374]: Reached target paths.target - Paths. Dec 13 01:55:10.333451 systemd[2374]: Reached target timers.target - Timers. Dec 13 01:55:10.342877 systemd[2374]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:10.359141 systemd[2374]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:10.359298 systemd[2374]: Reached target sockets.target - Sockets. Dec 13 01:55:10.359333 systemd[2374]: Reached target basic.target - Basic System. Dec 13 01:55:10.359467 systemd[2374]: Reached target default.target - Main User Target. Dec 13 01:55:10.359540 systemd[2374]: Startup finished in 235ms. Dec 13 01:55:10.360130 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 13 01:55:10.366373 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:55:10.519473 systemd[1]: Started sshd@1-172.31.20.234:22-139.178.68.195:50580.service - OpenSSH per-connection server daemon (139.178.68.195:50580). Dec 13 01:55:10.697596 sshd[2386]: Accepted publickey for core from 139.178.68.195 port 50580 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:10.700681 sshd[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:10.711202 systemd-logind[2084]: New session 2 of user core. Dec 13 01:55:10.721559 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:10.855027 sshd[2386]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:10.862442 systemd[1]: sshd@1-172.31.20.234:22-139.178.68.195:50580.service: Deactivated successfully. Dec 13 01:55:10.863989 systemd-logind[2084]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:55:10.870293 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:55:10.871843 systemd-logind[2084]: Removed session 2. Dec 13 01:55:10.885307 systemd[1]: Started sshd@2-172.31.20.234:22-139.178.68.195:50592.service - OpenSSH per-connection server daemon (139.178.68.195:50592). Dec 13 01:55:11.064170 sshd[2394]: Accepted publickey for core from 139.178.68.195 port 50592 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:11.066318 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:11.075433 systemd-logind[2084]: New session 3 of user core. Dec 13 01:55:11.082049 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:11.208030 sshd[2394]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:11.216296 systemd[1]: sshd@2-172.31.20.234:22-139.178.68.195:50592.service: Deactivated successfully. Dec 13 01:55:11.222446 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 13 01:55:11.224025 systemd-logind[2084]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:55:11.225910 systemd-logind[2084]: Removed session 3. Dec 13 01:55:11.237362 systemd[1]: Started sshd@3-172.31.20.234:22-139.178.68.195:50606.service - OpenSSH per-connection server daemon (139.178.68.195:50606). Dec 13 01:55:11.421415 sshd[2402]: Accepted publickey for core from 139.178.68.195 port 50606 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:11.424981 sshd[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:11.435612 systemd-logind[2084]: New session 4 of user core. Dec 13 01:55:11.444372 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:55:11.581085 sshd[2402]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:11.590411 systemd[1]: sshd@3-172.31.20.234:22-139.178.68.195:50606.service: Deactivated successfully. Dec 13 01:55:11.592141 systemd-logind[2084]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:11.597904 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:11.599442 systemd-logind[2084]: Removed session 4. Dec 13 01:55:11.608242 systemd[1]: Started sshd@4-172.31.20.234:22-139.178.68.195:50620.service - OpenSSH per-connection server daemon (139.178.68.195:50620). Dec 13 01:55:11.778003 sshd[2410]: Accepted publickey for core from 139.178.68.195 port 50620 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:11.780153 sshd[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:11.789233 systemd-logind[2084]: New session 5 of user core. Dec 13 01:55:11.798458 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:55:11.921210 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:55:11.922150 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:11.943213 sudo[2414]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:11.966542 sshd[2410]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:11.971874 systemd[1]: sshd@4-172.31.20.234:22-139.178.68.195:50620.service: Deactivated successfully. Dec 13 01:55:11.978959 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:11.981220 systemd-logind[2084]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:55:11.983417 systemd-logind[2084]: Removed session 5. Dec 13 01:55:11.997232 systemd[1]: Started sshd@5-172.31.20.234:22-139.178.68.195:50636.service - OpenSSH per-connection server daemon (139.178.68.195:50636). Dec 13 01:55:12.175573 sshd[2419]: Accepted publickey for core from 139.178.68.195 port 50636 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:12.178535 sshd[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:12.186241 systemd-logind[2084]: New session 6 of user core. Dec 13 01:55:12.195294 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 01:55:12.304317 sudo[2424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:55:12.305516 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:12.311984 sudo[2424]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:12.322310 sudo[2423]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:55:12.322982 sudo[2423]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:12.350189 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:12.353846 auditctl[2427]: No rules Dec 13 01:55:12.354658 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:55:12.355216 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:12.369826 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:55:12.410758 augenrules[2446]: No rules Dec 13 01:55:12.414322 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:55:12.416660 sudo[2423]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:12.444050 sshd[2419]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:12.452968 systemd[1]: sshd@5-172.31.20.234:22-139.178.68.195:50636.service: Deactivated successfully. Dec 13 01:55:12.459221 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:55:12.460863 systemd-logind[2084]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:55:12.462815 systemd-logind[2084]: Removed session 6. Dec 13 01:55:12.473342 systemd[1]: Started sshd@6-172.31.20.234:22-139.178.68.195:50640.service - OpenSSH per-connection server daemon (139.178.68.195:50640). 
Dec 13 01:55:12.652379 sshd[2455]: Accepted publickey for core from 139.178.68.195 port 50640 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:12.655235 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:12.663586 systemd-logind[2084]: New session 7 of user core. Dec 13 01:55:12.677316 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:55:12.785083 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:12.785985 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:13.794196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:13.808179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:13.845915 systemd[1]: Reloading requested from client PID 2498 ('systemctl') (unit session-7.scope)... Dec 13 01:55:13.846494 systemd[1]: Reloading... Dec 13 01:55:14.079809 zram_generator::config[2538]: No configuration found. Dec 13 01:55:14.356897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:14.529772 systemd[1]: Reloading finished in 682 ms. Dec 13 01:55:14.608595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:55:14.609028 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:55:14.609640 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:14.625304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:14.923164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:55:14.941418 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:15.026896 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:15.027734 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:15.027734 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:15.027734 kubelet[2610]: I1213 01:55:15.027484 2610 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:16.445940 kubelet[2610]: I1213 01:55:16.445845 2610 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:55:16.445940 kubelet[2610]: I1213 01:55:16.445927 2610 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:16.446601 kubelet[2610]: I1213 01:55:16.446523 2610 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:55:16.489191 kubelet[2610]: I1213 01:55:16.489104 2610 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:16.511404 kubelet[2610]: I1213 01:55:16.511362 2610 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
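The deprecation warnings above ask for the kubelet's command-line flags to be moved into the config file. Per those messages, the config-file equivalents (in the same KubeletConfiguration file referenced earlier in this log) would look roughly like this — a sketch, with the paths taken from entries elsewhere in this log:

```yaml
# KubeletConfiguration equivalents for the deprecated flags logged above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config-file replacement; as the log
# notes, newer kubelets take the sandbox image from the CRI runtime instead
```

This matches the Flexvolume plugin directory the kubelet recreates a few entries later, and the containerd socket it registers against.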
defaulting to /" Dec 13 01:55:16.513317 kubelet[2610]: I1213 01:55:16.512312 2610 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:16.513317 kubelet[2610]: I1213 01:55:16.512868 2610 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:55:16.513317 kubelet[2610]: I1213 01:55:16.512927 2610 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:16.513317 kubelet[2610]: I1213 01:55:16.512949 2610 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:55:16.515530 kubelet[2610]: 
I1213 01:55:16.515474 2610 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:16.520617 kubelet[2610]: I1213 01:55:16.520545 2610 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:55:16.520842 kubelet[2610]: I1213 01:55:16.520817 2610 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:16.520975 kubelet[2610]: I1213 01:55:16.520954 2610 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:55:16.521094 kubelet[2610]: I1213 01:55:16.521074 2610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:16.521363 kubelet[2610]: E1213 01:55:16.521249 2610 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:16.521363 kubelet[2610]: E1213 01:55:16.521338 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:55:16.525121 kubelet[2610]: I1213 01:55:16.525061 2610 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:16.525709 kubelet[2610]: I1213 01:55:16.525618 2610 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:16.526999 kubelet[2610]: W1213 01:55:16.526915 2610 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:55:16.528737 kubelet[2610]: I1213 01:55:16.528379 2610 server.go:1256] "Started kubelet" Dec 13 01:55:16.532274 kubelet[2610]: I1213 01:55:16.532213 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:16.540766 kubelet[2610]: I1213 01:55:16.539915 2610 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:16.542566 kubelet[2610]: I1213 01:55:16.542526 2610 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:55:16.547559 kubelet[2610]: I1213 01:55:16.542842 2610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:16.548817 kubelet[2610]: I1213 01:55:16.548620 2610 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:55:16.550735 kubelet[2610]: I1213 01:55:16.549384 2610 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:55:16.550735 kubelet[2610]: I1213 01:55:16.549504 2610 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:55:16.550735 kubelet[2610]: I1213 01:55:16.549606 2610 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:16.555371 kubelet[2610]: I1213 01:55:16.554498 2610 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:16.556575 kubelet[2610]: I1213 01:55:16.556320 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:16.556763 kubelet[2610]: E1213 01:55:16.556062 2610 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:16.566728 kubelet[2610]: E1213 01:55:16.563009 2610 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.234.181099c258b0c38c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.234,UID:172.31.20.234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.20.234,},FirstTimestamp:2024-12-13 01:55:16.528325516 +0000 UTC m=+1.579082817,LastTimestamp:2024-12-13 01:55:16.528325516 +0000 UTC m=+1.579082817,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.234,}" Dec 13 01:55:16.566728 kubelet[2610]: W1213 01:55:16.563383 2610 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.31.20.234" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:55:16.566728 kubelet[2610]: E1213 01:55:16.563425 2610 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.20.234" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:55:16.566728 kubelet[2610]: W1213 01:55:16.563505 2610 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:55:16.566728 kubelet[2610]: E1213 01:55:16.563529 2610 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: 
Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:55:16.567164 kubelet[2610]: W1213 01:55:16.564866 2610 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 01:55:16.567164 kubelet[2610]: E1213 01:55:16.564916 2610 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 01:55:16.567164 kubelet[2610]: I1213 01:55:16.566262 2610 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:55:16.602753 kubelet[2610]: E1213 01:55:16.601369 2610 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.20.234.181099c25a578bf4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.20.234,UID:172.31.20.234,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.20.234,},FirstTimestamp:2024-12-13 01:55:16.556033012 +0000 UTC m=+1.606790469,LastTimestamp:2024-12-13 01:55:16.556033012 +0000 UTC m=+1.606790469,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.20.234,}"
Dec 13 01:55:16.602753 kubelet[2610]: E1213 01:55:16.601554 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.20.234\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 01:55:16.642718 kubelet[2610]: I1213 01:55:16.642633 2610 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:55:16.642718 kubelet[2610]: I1213 01:55:16.642685 2610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:55:16.642935 kubelet[2610]: I1213 01:55:16.642749 2610 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:55:16.648380 kubelet[2610]: I1213 01:55:16.648256 2610 policy_none.go:49] "None policy: Start"
Dec 13 01:55:16.650110 kubelet[2610]: I1213 01:55:16.650063 2610 kubelet_node_status.go:73] "Attempting to register node" node="172.31.20.234"
Dec 13 01:55:16.650670 kubelet[2610]: I1213 01:55:16.650228 2610 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:55:16.650816 kubelet[2610]: I1213 01:55:16.650782 2610 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:55:16.666313 kubelet[2610]: I1213 01:55:16.666257 2610 kubelet_node_status.go:76] "Successfully registered node" node="172.31.20.234"
Dec 13 01:55:16.683293 kubelet[2610]: I1213 01:55:16.683255 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:55:16.686257 kubelet[2610]: I1213 01:55:16.685535 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:55:16.686257 kubelet[2610]: I1213 01:55:16.685575 2610 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:55:16.686257 kubelet[2610]: I1213 01:55:16.685604 2610 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:55:16.686257 kubelet[2610]: E1213 01:55:16.685837 2610 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:55:16.700968 kubelet[2610]: I1213 01:55:16.700788 2610 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:55:16.701313 kubelet[2610]: E1213 01:55:16.701267 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:16.701763 kubelet[2610]: I1213 01:55:16.701714 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:55:16.709520 kubelet[2610]: E1213 01:55:16.709454 2610 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.20.234\" not found"
Dec 13 01:55:16.801520 kubelet[2610]: E1213 01:55:16.801452 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:16.902422 kubelet[2610]: E1213 01:55:16.902348 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.003215 kubelet[2610]: E1213 01:55:17.003152 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.103833 kubelet[2610]: E1213 01:55:17.103766 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.204537 kubelet[2610]: E1213 01:55:17.204477 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.305463 kubelet[2610]: E1213 01:55:17.305273 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.406157 kubelet[2610]: E1213 01:55:17.406096 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.452610 kubelet[2610]: I1213 01:55:17.452500 2610 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 01:55:17.453512 kubelet[2610]: W1213 01:55:17.452782 2610 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 01:55:17.463429 sudo[2459]: pam_unix(sudo:session): session closed for user root
Dec 13 01:55:17.488564 sshd[2455]: pam_unix(sshd:session): session closed for user core
Dec 13 01:55:17.495429 systemd[1]: sshd@6-172.31.20.234:22-139.178.68.195:50640.service: Deactivated successfully.
Dec 13 01:55:17.506987 kubelet[2610]: E1213 01:55:17.506920 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.507803 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:55:17.510024 systemd-logind[2084]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:55:17.512529 systemd-logind[2084]: Removed session 7.
Dec 13 01:55:17.522355 kubelet[2610]: E1213 01:55:17.522309 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:17.608170 kubelet[2610]: E1213 01:55:17.608014 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.708380 kubelet[2610]: E1213 01:55:17.708279 2610 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.20.234\" not found"
Dec 13 01:55:17.811180 kubelet[2610]: I1213 01:55:17.810966 2610 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 01:55:17.812064 containerd[2108]: time="2024-12-13T01:55:17.811969015Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:55:17.814660 kubelet[2610]: I1213 01:55:17.812592 2610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 01:55:18.522826 kubelet[2610]: E1213 01:55:18.522767 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:18.522826 kubelet[2610]: I1213 01:55:18.522773 2610 apiserver.go:52] "Watching apiserver"
Dec 13 01:55:18.529055 kubelet[2610]: I1213 01:55:18.528984 2610 topology_manager.go:215] "Topology Admit Handler" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" podNamespace="kube-system" podName="cilium-9ntxj"
Dec 13 01:55:18.529200 kubelet[2610]: I1213 01:55:18.529160 2610 topology_manager.go:215] "Topology Admit Handler" podUID="8d1a2de1-12fc-49d0-82ff-768ed85d775d" podNamespace="kube-system" podName="kube-proxy-k5qtp"
Dec 13 01:55:18.551739 kubelet[2610]: I1213 01:55:18.550966 2610 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:55:18.560773 kubelet[2610]: I1213 01:55:18.560722 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjjxs\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-kube-api-access-bjjxs\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.560945 kubelet[2610]: I1213 01:55:18.560802 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d1a2de1-12fc-49d0-82ff-768ed85d775d-xtables-lock\") pod \"kube-proxy-k5qtp\" (UID: \"8d1a2de1-12fc-49d0-82ff-768ed85d775d\") " pod="kube-system/kube-proxy-k5qtp"
Dec 13 01:55:18.560945 kubelet[2610]: I1213 01:55:18.560855 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d1a2de1-12fc-49d0-82ff-768ed85d775d-lib-modules\") pod \"kube-proxy-k5qtp\" (UID: \"8d1a2de1-12fc-49d0-82ff-768ed85d775d\") " pod="kube-system/kube-proxy-k5qtp"
Dec 13 01:55:18.560945 kubelet[2610]: I1213 01:55:18.560902 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-hostproc\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.560945 kubelet[2610]: I1213 01:55:18.560944 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cni-path\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561181 kubelet[2610]: I1213 01:55:18.560988 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-net\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561181 kubelet[2610]: I1213 01:55:18.561031 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d1a2de1-12fc-49d0-82ff-768ed85d775d-kube-proxy\") pod \"kube-proxy-k5qtp\" (UID: \"8d1a2de1-12fc-49d0-82ff-768ed85d775d\") " pod="kube-system/kube-proxy-k5qtp"
Dec 13 01:55:18.561181 kubelet[2610]: I1213 01:55:18.561073 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-cgroup\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561181 kubelet[2610]: I1213 01:55:18.561121 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-config-path\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561181 kubelet[2610]: I1213 01:55:18.561175 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-lib-modules\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561410 kubelet[2610]: I1213 01:55:18.561219 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-kernel\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561410 kubelet[2610]: I1213 01:55:18.561260 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-hubble-tls\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561410 kubelet[2610]: I1213 01:55:18.561305 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2r6\" (UniqueName: \"kubernetes.io/projected/8d1a2de1-12fc-49d0-82ff-768ed85d775d-kube-api-access-jc2r6\") pod \"kube-proxy-k5qtp\" (UID: \"8d1a2de1-12fc-49d0-82ff-768ed85d775d\") " pod="kube-system/kube-proxy-k5qtp"
Dec 13 01:55:18.561410 kubelet[2610]: I1213 01:55:18.561346 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-bpf-maps\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561410 kubelet[2610]: I1213 01:55:18.561390 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-etc-cni-netd\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561640 kubelet[2610]: I1213 01:55:18.561435 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79394c86-dd5f-463b-9c69-5e7c029486c4-clustermesh-secrets\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561640 kubelet[2610]: I1213 01:55:18.561476 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-run\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.561640 kubelet[2610]: I1213 01:55:18.561521 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-xtables-lock\") pod \"cilium-9ntxj\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " pod="kube-system/cilium-9ntxj"
Dec 13 01:55:18.837180 containerd[2108]: time="2024-12-13T01:55:18.836924144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5qtp,Uid:8d1a2de1-12fc-49d0-82ff-768ed85d775d,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:18.843666 containerd[2108]: time="2024-12-13T01:55:18.843387980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ntxj,Uid:79394c86-dd5f-463b-9c69-5e7c029486c4,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:19.469281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187833183.mount: Deactivated successfully.
Dec 13 01:55:19.482057 containerd[2108]: time="2024-12-13T01:55:19.481965955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:19.490771 containerd[2108]: time="2024-12-13T01:55:19.490666711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:19.492792 containerd[2108]: time="2024-12-13T01:55:19.492729271Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Dec 13 01:55:19.495445 containerd[2108]: time="2024-12-13T01:55:19.495378727Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:19.498488 containerd[2108]: time="2024-12-13T01:55:19.498431515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:55:19.503095 containerd[2108]: time="2024-12-13T01:55:19.503033839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:19.504639 containerd[2108]: time="2024-12-13T01:55:19.504253843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.623835ms"
Dec 13 01:55:19.509168 containerd[2108]: time="2024-12-13T01:55:19.508932055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 670.444347ms"
Dec 13 01:55:19.523045 kubelet[2610]: E1213 01:55:19.522974 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:19.714383 containerd[2108]: time="2024-12-13T01:55:19.713512172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:19.714383 containerd[2108]: time="2024-12-13T01:55:19.713681336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:19.714383 containerd[2108]: time="2024-12-13T01:55:19.713764316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:19.714383 containerd[2108]: time="2024-12-13T01:55:19.714006224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:19.728853 containerd[2108]: time="2024-12-13T01:55:19.723791648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:19.728853 containerd[2108]: time="2024-12-13T01:55:19.723921584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:19.728853 containerd[2108]: time="2024-12-13T01:55:19.723960428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:19.728853 containerd[2108]: time="2024-12-13T01:55:19.724127480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:19.844369 systemd[1]: run-containerd-runc-k8s.io-684e89f45f4bf565dabb12c71350e109fb83a8d411496b246ca57ba99e205ffe-runc.dHpj0t.mount: Deactivated successfully.
Dec 13 01:55:19.903824 containerd[2108]: time="2024-12-13T01:55:19.903735693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9ntxj,Uid:79394c86-dd5f-463b-9c69-5e7c029486c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\""
Dec 13 01:55:19.912615 containerd[2108]: time="2024-12-13T01:55:19.911935677Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 01:55:19.923748 containerd[2108]: time="2024-12-13T01:55:19.923549493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5qtp,Uid:8d1a2de1-12fc-49d0-82ff-768ed85d775d,Namespace:kube-system,Attempt:0,} returns sandbox id \"684e89f45f4bf565dabb12c71350e109fb83a8d411496b246ca57ba99e205ffe\""
Dec 13 01:55:20.523583 kubelet[2610]: E1213 01:55:20.523491 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:21.524808 kubelet[2610]: E1213 01:55:21.524673 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:22.525415 kubelet[2610]: E1213 01:55:22.525329 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:23.525594 kubelet[2610]: E1213 01:55:23.525510 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:24.526047 kubelet[2610]: E1213 01:55:24.525991 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:25.526354 kubelet[2610]: E1213 01:55:25.526289 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:26.527234 kubelet[2610]: E1213 01:55:26.527167 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:27.527931 kubelet[2610]: E1213 01:55:27.527873 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:28.529149 kubelet[2610]: E1213 01:55:28.528964 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:29.530056 kubelet[2610]: E1213 01:55:29.529961 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:30.252635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937214447.mount: Deactivated successfully.
Dec 13 01:55:30.531339 kubelet[2610]: E1213 01:55:30.531104 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:31.531665 kubelet[2610]: E1213 01:55:31.531614 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:32.531945 kubelet[2610]: E1213 01:55:32.531833 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:32.781011 containerd[2108]: time="2024-12-13T01:55:32.780903033Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:32.784027 containerd[2108]: time="2024-12-13T01:55:32.783238917Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650962"
Dec 13 01:55:32.788733 containerd[2108]: time="2024-12-13T01:55:32.787739133Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:32.793942 containerd[2108]: time="2024-12-13T01:55:32.793847265Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.881800276s"
Dec 13 01:55:32.793942 containerd[2108]: time="2024-12-13T01:55:32.793945737Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Dec 13 01:55:32.796101 containerd[2108]: time="2024-12-13T01:55:32.795298869Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 01:55:32.800190 containerd[2108]: time="2024-12-13T01:55:32.800095377Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:55:32.835806 containerd[2108]: time="2024-12-13T01:55:32.835739925Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\""
Dec 13 01:55:32.839746 containerd[2108]: time="2024-12-13T01:55:32.838323945Z" level=info msg="StartContainer for \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\""
Dec 13 01:55:32.950976 containerd[2108]: time="2024-12-13T01:55:32.950919202Z" level=info msg="StartContainer for \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\" returns successfully"
Dec 13 01:55:33.532269 kubelet[2610]: E1213 01:55:33.532201 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:33.737453 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:55:33.821622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93-rootfs.mount: Deactivated successfully.
Dec 13 01:55:34.532903 kubelet[2610]: E1213 01:55:34.532799 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:34.665289 containerd[2108]: time="2024-12-13T01:55:34.665169214Z" level=info msg="shim disconnected" id=f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93 namespace=k8s.io
Dec 13 01:55:34.665289 containerd[2108]: time="2024-12-13T01:55:34.665250610Z" level=warning msg="cleaning up after shim disconnected" id=f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93 namespace=k8s.io
Dec 13 01:55:34.665289 containerd[2108]: time="2024-12-13T01:55:34.665274034Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:34.771251 containerd[2108]: time="2024-12-13T01:55:34.770920691Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:55:34.798768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269214092.mount: Deactivated successfully.
Dec 13 01:55:34.812900 containerd[2108]: time="2024-12-13T01:55:34.812806175Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\""
Dec 13 01:55:34.814206 containerd[2108]: time="2024-12-13T01:55:34.814148999Z" level=info msg="StartContainer for \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\""
Dec 13 01:55:34.992022 containerd[2108]: time="2024-12-13T01:55:34.991959996Z" level=info msg="StartContainer for \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\" returns successfully"
Dec 13 01:55:35.013017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:55:35.015364 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:55:35.015517 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:55:35.025935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:55:35.083513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:55:35.113133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f-rootfs.mount: Deactivated successfully.
Dec 13 01:55:35.147531 containerd[2108]: time="2024-12-13T01:55:35.147004089Z" level=info msg="shim disconnected" id=a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f namespace=k8s.io
Dec 13 01:55:35.147531 containerd[2108]: time="2024-12-13T01:55:35.147108897Z" level=warning msg="cleaning up after shim disconnected" id=a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f namespace=k8s.io
Dec 13 01:55:35.147531 containerd[2108]: time="2024-12-13T01:55:35.147152745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:35.182734 containerd[2108]: time="2024-12-13T01:55:35.181906965Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:55:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:55:35.533641 kubelet[2610]: E1213 01:55:35.533578 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:35.621369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994420017.mount: Deactivated successfully.
Dec 13 01:55:35.790091 containerd[2108]: time="2024-12-13T01:55:35.789941424Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:55:35.838762 containerd[2108]: time="2024-12-13T01:55:35.838461504Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\""
Dec 13 01:55:35.839687 containerd[2108]: time="2024-12-13T01:55:35.839613756Z" level=info msg="StartContainer for \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\""
Dec 13 01:55:36.037344 containerd[2108]: time="2024-12-13T01:55:36.037265361Z" level=info msg="StartContainer for \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\" returns successfully"
Dec 13 01:55:36.085453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f-rootfs.mount: Deactivated successfully.
Dec 13 01:55:36.185284 containerd[2108]: time="2024-12-13T01:55:36.184973050Z" level=info msg="shim disconnected" id=405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f namespace=k8s.io
Dec 13 01:55:36.185284 containerd[2108]: time="2024-12-13T01:55:36.185056702Z" level=warning msg="cleaning up after shim disconnected" id=405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f namespace=k8s.io
Dec 13 01:55:36.185284 containerd[2108]: time="2024-12-13T01:55:36.185080138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:36.396618 containerd[2108]: time="2024-12-13T01:55:36.396237263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:36.398468 containerd[2108]: time="2024-12-13T01:55:36.398407931Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977"
Dec 13 01:55:36.400844 containerd[2108]: time="2024-12-13T01:55:36.400781891Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:36.405894 containerd[2108]: time="2024-12-13T01:55:36.405792599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:36.408390 containerd[2108]: time="2024-12-13T01:55:36.407307623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 3.611931546s"
Dec 13 01:55:36.408390 containerd[2108]: time="2024-12-13T01:55:36.407378519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 01:55:36.410857 containerd[2108]: time="2024-12-13T01:55:36.410794847Z" level=info msg="CreateContainer within sandbox \"684e89f45f4bf565dabb12c71350e109fb83a8d411496b246ca57ba99e205ffe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:55:36.444791 containerd[2108]: time="2024-12-13T01:55:36.444668387Z" level=info msg="CreateContainer within sandbox \"684e89f45f4bf565dabb12c71350e109fb83a8d411496b246ca57ba99e205ffe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4c46f12d7c03eafa3ebe173572124fda2f79f62098aafd46c27d2089ff248c70\""
Dec 13 01:55:36.446078 containerd[2108]: time="2024-12-13T01:55:36.446016731Z" level=info msg="StartContainer for \"4c46f12d7c03eafa3ebe173572124fda2f79f62098aafd46c27d2089ff248c70\""
Dec 13 01:55:36.521427 kubelet[2610]: E1213 01:55:36.521358 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:36.534309 kubelet[2610]: E1213 01:55:36.533913 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:36.558362 containerd[2108]: time="2024-12-13T01:55:36.558078528Z" level=info msg="StartContainer for \"4c46f12d7c03eafa3ebe173572124fda2f79f62098aafd46c27d2089ff248c70\" returns successfully"
Dec 13 01:55:36.805119 containerd[2108]: time="2024-12-13T01:55:36.804488941Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:55:36.805848 kubelet[2610]: I1213 01:55:36.803905 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k5qtp" podStartSLOduration=4.322269211 podStartE2EDuration="20.803838769s" podCreationTimestamp="2024-12-13 01:55:16 +0000 UTC" firstStartedPulling="2024-12-13 01:55:19.926188593 +0000 UTC m=+4.976945858" lastFinishedPulling="2024-12-13 01:55:36.407758151 +0000 UTC m=+21.458515416" observedRunningTime="2024-12-13 01:55:36.802909369 +0000 UTC m=+21.853666670" watchObservedRunningTime="2024-12-13 01:55:36.803838769 +0000 UTC m=+21.854596070"
Dec 13 01:55:36.845934 containerd[2108]: time="2024-12-13T01:55:36.845865181Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\""
Dec 13 01:55:36.848130 containerd[2108]: time="2024-12-13T01:55:36.848057605Z" level=info msg="StartContainer for \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\""
Dec 13 01:55:37.060204 containerd[2108]: time="2024-12-13T01:55:37.059030794Z" level=info msg="StartContainer for \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\" returns successfully"
Dec 13 01:55:37.115920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563-rootfs.mount: Deactivated successfully.
Dec 13 01:55:37.181039 containerd[2108]: time="2024-12-13T01:55:37.180255371Z" level=info msg="shim disconnected" id=9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563 namespace=k8s.io
Dec 13 01:55:37.181039 containerd[2108]: time="2024-12-13T01:55:37.180449231Z" level=warning msg="cleaning up after shim disconnected" id=9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563 namespace=k8s.io
Dec 13 01:55:37.181039 containerd[2108]: time="2024-12-13T01:55:37.180472403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:37.534727 kubelet[2610]: E1213 01:55:37.534621 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:37.810660 containerd[2108]: time="2024-12-13T01:55:37.810087626Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:55:37.838187 containerd[2108]: time="2024-12-13T01:55:37.838103858Z" level=info msg="CreateContainer within sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\""
Dec 13 01:55:37.842454 containerd[2108]: time="2024-12-13T01:55:37.841127942Z" level=info msg="StartContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\""
Dec 13 01:55:37.955291 containerd[2108]: time="2024-12-13T01:55:37.955037367Z" level=info msg="StartContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" returns successfully"
Dec 13 01:55:38.117112 kubelet[2610]: I1213 01:55:38.115349 2610 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:55:38.535897 kubelet[2610]: E1213 01:55:38.535816 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:38.804977 kernel: Initializing XFRM netlink socket
Dec 13 01:55:39.536375 kubelet[2610]: E1213 01:55:39.536258 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:40.537137 kubelet[2610]: E1213 01:55:40.537063 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:40.643350 systemd-networkd[1685]: cilium_host: Link UP
Dec 13 01:55:40.645901 systemd-networkd[1685]: cilium_net: Link UP
Dec 13 01:55:40.646312 systemd-networkd[1685]: cilium_net: Gained carrier
Dec 13 01:55:40.646644 systemd-networkd[1685]: cilium_host: Gained carrier
Dec 13 01:55:40.652923 (udev-worker)[3087]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:40.653934 (udev-worker)[3312]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:40.654807 systemd-networkd[1685]: cilium_host: Gained IPv6LL
Dec 13 01:55:40.691730 kubelet[2610]: I1213 01:55:40.687838 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-9ntxj" podStartSLOduration=11.799638016 podStartE2EDuration="24.687778804s" podCreationTimestamp="2024-12-13 01:55:16 +0000 UTC" firstStartedPulling="2024-12-13 01:55:19.906626373 +0000 UTC m=+4.957383638" lastFinishedPulling="2024-12-13 01:55:32.794767161 +0000 UTC m=+17.845524426" observedRunningTime="2024-12-13 01:55:38.842003331 +0000 UTC m=+23.892760608" watchObservedRunningTime="2024-12-13 01:55:40.687778804 +0000 UTC m=+25.738536069"
Dec 13 01:55:40.699013 kubelet[2610]: I1213 01:55:40.698959 2610 topology_manager.go:215] "Topology Admit Handler" podUID="5fe2871c-d0c3-4b6c-a57b-1f68d8443750" podNamespace="default" podName="nginx-deployment-6d5f899847-zlwzl"
Dec 13 01:55:40.812111 kubelet[2610]: I1213 01:55:40.811946 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46752\" (UniqueName: \"kubernetes.io/projected/5fe2871c-d0c3-4b6c-a57b-1f68d8443750-kube-api-access-46752\") pod \"nginx-deployment-6d5f899847-zlwzl\" (UID: \"5fe2871c-d0c3-4b6c-a57b-1f68d8443750\") " pod="default/nginx-deployment-6d5f899847-zlwzl"
Dec 13 01:55:40.839098 systemd-networkd[1685]: cilium_vxlan: Link UP
Dec 13 01:55:40.839121 systemd-networkd[1685]: cilium_vxlan: Gained carrier
Dec 13 01:55:40.910039 systemd-networkd[1685]: cilium_net: Gained IPv6LL
Dec 13 01:55:41.018379 containerd[2108]: time="2024-12-13T01:55:41.018280742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-zlwzl,Uid:5fe2871c-d0c3-4b6c-a57b-1f68d8443750,Namespace:default,Attempt:0,}"
Dec 13 01:55:41.353367 kernel: NET: Registered PF_ALG protocol family
Dec 13 01:55:41.537628 kubelet[2610]: E1213 01:55:41.537544 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:42.149938 systemd-networkd[1685]: cilium_vxlan: Gained IPv6LL
Dec 13 01:55:42.537833 kubelet[2610]: E1213 01:55:42.537771 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:42.645242 systemd-networkd[1685]: lxc_health: Link UP
Dec 13 01:55:42.651584 systemd-networkd[1685]: lxc_health: Gained carrier
Dec 13 01:55:42.653427 (udev-worker)[3323]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:43.105501 systemd-networkd[1685]: lxc4ff53bdcf1d2: Link UP
Dec 13 01:55:43.115783 kernel: eth0: renamed from tmp3afdd
Dec 13 01:55:43.121030 systemd-networkd[1685]: lxc4ff53bdcf1d2: Gained carrier
Dec 13 01:55:43.538573 kubelet[2610]: E1213 01:55:43.538500 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:44.006155 systemd-networkd[1685]: lxc_health: Gained IPv6LL
Dec 13 01:55:44.326296 systemd-networkd[1685]: lxc4ff53bdcf1d2: Gained IPv6LL
Dec 13 01:55:44.540724 kubelet[2610]: E1213 01:55:44.539141 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:45.539686 kubelet[2610]: E1213 01:55:45.539602 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:46.540161 kubelet[2610]: E1213 01:55:46.540044 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:47.066841 update_engine[2091]: I20241213 01:55:47.066742 2091 update_attempter.cc:509] Updating boot flags...
Dec 13 01:55:47.150077 ntpd[2075]: Listen normally on 6 cilium_host 192.168.1.108:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 6 cilium_host 192.168.1.108:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 7 cilium_net [fe80::48b2:bff:fe8a:d511%3]:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 8 cilium_host [fe80::f0b7:94ff:feb1:3f7%4]:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 9 cilium_vxlan [fe80::b83d:9fff:fe52:fd24%5]:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 10 lxc_health [fe80::7c5d:fdff:fea3:735%7]:123
Dec 13 01:55:47.151107 ntpd[2075]: 13 Dec 01:55:47 ntpd[2075]: Listen normally on 11 lxc4ff53bdcf1d2 [fe80::5851:d0ff:fe3d:4503%9]:123
Dec 13 01:55:47.150239 ntpd[2075]: Listen normally on 7 cilium_net [fe80::48b2:bff:fe8a:d511%3]:123
Dec 13 01:55:47.150321 ntpd[2075]: Listen normally on 8 cilium_host [fe80::f0b7:94ff:feb1:3f7%4]:123
Dec 13 01:55:47.150388 ntpd[2075]: Listen normally on 9 cilium_vxlan [fe80::b83d:9fff:fe52:fd24%5]:123
Dec 13 01:55:47.150455 ntpd[2075]: Listen normally on 10 lxc_health [fe80::7c5d:fdff:fea3:735%7]:123
Dec 13 01:55:47.150522 ntpd[2075]: Listen normally on 11 lxc4ff53bdcf1d2 [fe80::5851:d0ff:fe3d:4503%9]:123
Dec 13 01:55:47.222052 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3687)
Dec 13 01:55:47.544985 kubelet[2610]: E1213 01:55:47.544854 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:47.808964 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3688)
Dec 13 01:55:48.403759 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3688)
Dec 13 01:55:48.549729 kubelet[2610]: E1213 01:55:48.545105 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:49.545733 kubelet[2610]: E1213 01:55:49.545659 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:50.546979 kubelet[2610]: E1213 01:55:50.546903 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:51.548330 kubelet[2610]: E1213 01:55:51.548205 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:52.148827 containerd[2108]: time="2024-12-13T01:55:52.148075549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:52.148827 containerd[2108]: time="2024-12-13T01:55:52.148241653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:52.148827 containerd[2108]: time="2024-12-13T01:55:52.148282033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:52.148827 containerd[2108]: time="2024-12-13T01:55:52.148520833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:52.266094 containerd[2108]: time="2024-12-13T01:55:52.265993130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-zlwzl,Uid:5fe2871c-d0c3-4b6c-a57b-1f68d8443750,Namespace:default,Attempt:0,} returns sandbox id \"3afdda6a525c00b65af820adb727b3115c5f501ab2b4ffa3084f1913ef5ce768\""
Dec 13 01:55:52.271850 containerd[2108]: time="2024-12-13T01:55:52.271329758Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:55:52.548971 kubelet[2610]: E1213 01:55:52.548882 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:53.549299 kubelet[2610]: E1213 01:55:53.549175 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:54.551131 kubelet[2610]: E1213 01:55:54.550994 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:55.426419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832484299.mount: Deactivated successfully.
Dec 13 01:55:55.553199 kubelet[2610]: E1213 01:55:55.553079 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:56.522147 kubelet[2610]: E1213 01:55:56.522005 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:56.553917 kubelet[2610]: E1213 01:55:56.553603 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:56.889102 containerd[2108]: time="2024-12-13T01:55:56.888919077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:56.892382 containerd[2108]: time="2024-12-13T01:55:56.892288809Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939"
Dec 13 01:55:56.895234 containerd[2108]: time="2024-12-13T01:55:56.895059585Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:56.902899 containerd[2108]: time="2024-12-13T01:55:56.902767809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:56.904821 containerd[2108]: time="2024-12-13T01:55:56.904757121Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 4.633352459s"
Dec 13 01:55:56.905058 containerd[2108]: time="2024-12-13T01:55:56.904820541Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 01:55:56.908667 containerd[2108]: time="2024-12-13T01:55:56.908595165Z" level=info msg="CreateContainer within sandbox \"3afdda6a525c00b65af820adb727b3115c5f501ab2b4ffa3084f1913ef5ce768\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 01:55:56.937098 containerd[2108]: time="2024-12-13T01:55:56.936927201Z" level=info msg="CreateContainer within sandbox \"3afdda6a525c00b65af820adb727b3115c5f501ab2b4ffa3084f1913ef5ce768\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3f81ecf9e1c2527476c8c3a9340c305cd6519bcd313df6da1918ac2d1ba9477b\""
Dec 13 01:55:56.938205 containerd[2108]: time="2024-12-13T01:55:56.938103417Z" level=info msg="StartContainer for \"3f81ecf9e1c2527476c8c3a9340c305cd6519bcd313df6da1918ac2d1ba9477b\""
Dec 13 01:55:57.046288 containerd[2108]: time="2024-12-13T01:55:57.046170953Z" level=info msg="StartContainer for \"3f81ecf9e1c2527476c8c3a9340c305cd6519bcd313df6da1918ac2d1ba9477b\" returns successfully"
Dec 13 01:55:57.554559 kubelet[2610]: E1213 01:55:57.554485 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:58.555417 kubelet[2610]: E1213 01:55:58.555323 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:55:59.556120 kubelet[2610]: E1213 01:55:59.556061 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:00.556298 kubelet[2610]: E1213 01:56:00.556235 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:01.556941 kubelet[2610]: E1213 01:56:01.556858 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:02.557630 kubelet[2610]: E1213 01:56:02.557519 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:03.558367 kubelet[2610]: E1213 01:56:03.558297 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:04.558880 kubelet[2610]: E1213 01:56:04.558786 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:05.559808 kubelet[2610]: E1213 01:56:05.559734 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:06.449624 kubelet[2610]: I1213 01:56:06.449524 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-zlwzl" podStartSLOduration=21.814326049 podStartE2EDuration="26.449460976s" podCreationTimestamp="2024-12-13 01:55:40 +0000 UTC" firstStartedPulling="2024-12-13 01:55:52.270235922 +0000 UTC m=+37.320993187" lastFinishedPulling="2024-12-13 01:55:56.905370849 +0000 UTC m=+41.956128114" observedRunningTime="2024-12-13 01:55:57.901125418 +0000 UTC m=+42.951882707" watchObservedRunningTime="2024-12-13 01:56:06.449460976 +0000 UTC m=+51.500218241"
Dec 13 01:56:06.449986 kubelet[2610]: I1213 01:56:06.449790 2610 topology_manager.go:215] "Topology Admit Handler" podUID="4e10301f-d42d-4b9c-914a-a1948beb2820" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 01:56:06.560877 kubelet[2610]: E1213 01:56:06.560803 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:06.591246 kubelet[2610]: I1213 01:56:06.591189 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q7lm\" (UniqueName: \"kubernetes.io/projected/4e10301f-d42d-4b9c-914a-a1948beb2820-kube-api-access-6q7lm\") pod \"nfs-server-provisioner-0\" (UID: \"4e10301f-d42d-4b9c-914a-a1948beb2820\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:56:06.591447 kubelet[2610]: I1213 01:56:06.591347 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4e10301f-d42d-4b9c-914a-a1948beb2820-data\") pod \"nfs-server-provisioner-0\" (UID: \"4e10301f-d42d-4b9c-914a-a1948beb2820\") " pod="default/nfs-server-provisioner-0"
Dec 13 01:56:06.756386 containerd[2108]: time="2024-12-13T01:56:06.756307542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4e10301f-d42d-4b9c-914a-a1948beb2820,Namespace:default,Attempt:0,}"
Dec 13 01:56:06.822226 (udev-worker)[4073]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:06.822392 (udev-worker)[4074]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:06.841505 systemd-networkd[1685]: lxc69a6a8a03459: Link UP
Dec 13 01:56:06.850733 kernel: eth0: renamed from tmpccf34
Dec 13 01:56:06.863597 systemd-networkd[1685]: lxc69a6a8a03459: Gained carrier
Dec 13 01:56:07.206932 containerd[2108]: time="2024-12-13T01:56:07.205925608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:07.206932 containerd[2108]: time="2024-12-13T01:56:07.206050756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:07.206932 containerd[2108]: time="2024-12-13T01:56:07.206078332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:07.206932 containerd[2108]: time="2024-12-13T01:56:07.206261776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:07.309471 containerd[2108]: time="2024-12-13T01:56:07.309403936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4e10301f-d42d-4b9c-914a-a1948beb2820,Namespace:default,Attempt:0,} returns sandbox id \"ccf34a9aac2ffa618ad2c635233d4066cd16646c6911bc008bcfe78de3e31abf\""
Dec 13 01:56:07.313279 containerd[2108]: time="2024-12-13T01:56:07.313147600Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 01:56:07.561498 kubelet[2610]: E1213 01:56:07.561406 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:08.007573 systemd-networkd[1685]: lxc69a6a8a03459: Gained IPv6LL
Dec 13 01:56:08.562066 kubelet[2610]: E1213 01:56:08.561972 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:09.563056 kubelet[2610]: E1213 01:56:09.562644 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:09.864413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2314397350.mount: Deactivated successfully.
Dec 13 01:56:10.140887 ntpd[2075]: Listen normally on 12 lxc69a6a8a03459 [fe80::dcaa:a0ff:fed0:a40e%11]:123
Dec 13 01:56:10.143496 ntpd[2075]: 13 Dec 01:56:10 ntpd[2075]: Listen normally on 12 lxc69a6a8a03459 [fe80::dcaa:a0ff:fed0:a40e%11]:123
Dec 13 01:56:10.563889 kubelet[2610]: E1213 01:56:10.563750 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:11.564741 kubelet[2610]: E1213 01:56:11.564665 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:12.565302 kubelet[2610]: E1213 01:56:12.565218 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:12.841182 containerd[2108]: time="2024-12-13T01:56:12.840784140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:12.843054 containerd[2108]: time="2024-12-13T01:56:12.842982456Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623"
Dec 13 01:56:12.845585 containerd[2108]: time="2024-12-13T01:56:12.845513460Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:12.851339 containerd[2108]: time="2024-12-13T01:56:12.851253996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:12.853542 containerd[2108]: time="2024-12-13T01:56:12.853344120Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.540074432s"
Dec 13 01:56:12.853542 containerd[2108]: time="2024-12-13T01:56:12.853402992Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Dec 13 01:56:12.857467 containerd[2108]: time="2024-12-13T01:56:12.857389380Z" level=info msg="CreateContainer within sandbox \"ccf34a9aac2ffa618ad2c635233d4066cd16646c6911bc008bcfe78de3e31abf\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 01:56:12.885061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603067102.mount: Deactivated successfully.
Dec 13 01:56:12.889241 containerd[2108]: time="2024-12-13T01:56:12.889154460Z" level=info msg="CreateContainer within sandbox \"ccf34a9aac2ffa618ad2c635233d4066cd16646c6911bc008bcfe78de3e31abf\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3ce2a47cc09e3b31ba899b075b561d4e14b43428e7569b55ca5f2eae6a110b3d\""
Dec 13 01:56:12.890584 containerd[2108]: time="2024-12-13T01:56:12.890483280Z" level=info msg="StartContainer for \"3ce2a47cc09e3b31ba899b075b561d4e14b43428e7569b55ca5f2eae6a110b3d\""
Dec 13 01:56:12.992809 containerd[2108]: time="2024-12-13T01:56:12.992658613Z" level=info msg="StartContainer for \"3ce2a47cc09e3b31ba899b075b561d4e14b43428e7569b55ca5f2eae6a110b3d\" returns successfully"
Dec 13 01:56:13.566267 kubelet[2610]: E1213 01:56:13.566177 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:13.983217 kubelet[2610]: I1213 01:56:13.983140 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.440945322 podStartE2EDuration="7.983060222s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="2024-12-13 01:56:07.311837152 +0000 UTC m=+52.362594417" lastFinishedPulling="2024-12-13 01:56:12.853952052 +0000 UTC m=+57.904709317" observedRunningTime="2024-12-13 01:56:13.981468026 +0000 UTC m=+59.032225303" watchObservedRunningTime="2024-12-13 01:56:13.983060222 +0000 UTC m=+59.033817511"
Dec 13 01:56:14.566911 kubelet[2610]: E1213 01:56:14.566834 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:15.567711 kubelet[2610]: E1213 01:56:15.567628 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:16.521841 kubelet[2610]: E1213 01:56:16.521776 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:16.568366 kubelet[2610]: E1213 01:56:16.568299 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:17.568860 kubelet[2610]: E1213 01:56:17.568784 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:18.569416 kubelet[2610]: E1213 01:56:18.569346 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:19.569913 kubelet[2610]: E1213 01:56:19.569831 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:20.570623 kubelet[2610]: E1213 01:56:20.570526 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:21.570793 kubelet[2610]: E1213 01:56:21.570685 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:22.571270 kubelet[2610]: E1213 01:56:22.571172 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:22.717506 kubelet[2610]: I1213 01:56:22.717407 2610 topology_manager.go:215] "Topology Admit Handler" podUID="630991b9-512c-4554-80f0-a80f19ec35b8" podNamespace="default" podName="test-pod-1"
Dec 13 01:56:22.798668 kubelet[2610]: I1213 01:56:22.798590 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9fcd1f33-255e-4ac7-9d89-7368cb7c9864\" (UniqueName: \"kubernetes.io/nfs/630991b9-512c-4554-80f0-a80f19ec35b8-pvc-9fcd1f33-255e-4ac7-9d89-7368cb7c9864\") pod \"test-pod-1\" (UID: \"630991b9-512c-4554-80f0-a80f19ec35b8\") " pod="default/test-pod-1"
Dec 13 01:56:22.798913 kubelet[2610]: I1213 01:56:22.798717 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w252\" (UniqueName: \"kubernetes.io/projected/630991b9-512c-4554-80f0-a80f19ec35b8-kube-api-access-6w252\") pod \"test-pod-1\" (UID: \"630991b9-512c-4554-80f0-a80f19ec35b8\") " pod="default/test-pod-1"
Dec 13 01:56:22.937822 kernel: FS-Cache: Loaded
Dec 13 01:56:22.979906 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 01:56:22.980056 kernel: RPC: Registered udp transport module.
Dec 13 01:56:22.980103 kernel: RPC: Registered tcp transport module.
Dec 13 01:56:22.980802 kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 01:56:22.981729 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 01:56:23.316085 kernel: NFS: Registering the id_resolver key type
Dec 13 01:56:23.316252 kernel: Key type id_resolver registered
Dec 13 01:56:23.316299 kernel: Key type id_legacy registered
Dec 13 01:56:23.358109 nfsidmap[4257]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 01:56:23.368054 nfsidmap[4258]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 01:56:23.572179 kubelet[2610]: E1213 01:56:23.572077 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:23.625345 containerd[2108]: time="2024-12-13T01:56:23.625147941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:630991b9-512c-4554-80f0-a80f19ec35b8,Namespace:default,Attempt:0,}"
Dec 13 01:56:23.683196 (udev-worker)[4251]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:23.683513 (udev-worker)[4253]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:23.696424 kernel: eth0: renamed from tmpd6055
Dec 13 01:56:23.705823 systemd-networkd[1685]: lxc65f314b25fd3: Link UP
Dec 13 01:56:23.710252 systemd-networkd[1685]: lxc65f314b25fd3: Gained carrier
Dec 13 01:56:24.035809 containerd[2108]: time="2024-12-13T01:56:24.035331680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:56:24.035809 containerd[2108]: time="2024-12-13T01:56:24.035441048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:56:24.035809 containerd[2108]: time="2024-12-13T01:56:24.035485124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:24.036791 containerd[2108]: time="2024-12-13T01:56:24.036597620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:56:24.124460 containerd[2108]: time="2024-12-13T01:56:24.124384028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:630991b9-512c-4554-80f0-a80f19ec35b8,Namespace:default,Attempt:0,} returns sandbox id \"d60558ff7e9d192ece15d1368a15defda865da5c309f52bde1ac01730b6de564\""
Dec 13 01:56:24.127553 containerd[2108]: time="2024-12-13T01:56:24.127483904Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 01:56:24.445816 containerd[2108]: time="2024-12-13T01:56:24.445159990Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:56:24.448122 containerd[2108]: time="2024-12-13T01:56:24.448014430Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 01:56:24.454570 containerd[2108]: time="2024-12-13T01:56:24.454474438Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 326.906018ms"
Dec 13 01:56:24.455104 containerd[2108]: time="2024-12-13T01:56:24.454896058Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 01:56:24.458147 containerd[2108]: time="2024-12-13T01:56:24.458089786Z" level=info msg="CreateContainer within sandbox \"d60558ff7e9d192ece15d1368a15defda865da5c309f52bde1ac01730b6de564\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 01:56:24.486468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305422658.mount: Deactivated successfully.
Dec 13 01:56:24.489945 containerd[2108]: time="2024-12-13T01:56:24.489852142Z" level=info msg="CreateContainer within sandbox \"d60558ff7e9d192ece15d1368a15defda865da5c309f52bde1ac01730b6de564\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"553b9625b00ea77e1f6fdf94ced23e6d83a1dd0121db6b393b8eccf839d78c6a\""
Dec 13 01:56:24.490902 containerd[2108]: time="2024-12-13T01:56:24.490835902Z" level=info msg="StartContainer for \"553b9625b00ea77e1f6fdf94ced23e6d83a1dd0121db6b393b8eccf839d78c6a\""
Dec 13 01:56:24.573177 kubelet[2610]: E1213 01:56:24.573117 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:24.592356 containerd[2108]: time="2024-12-13T01:56:24.592036138Z" level=info msg="StartContainer for \"553b9625b00ea77e1f6fdf94ced23e6d83a1dd0121db6b393b8eccf839d78c6a\" returns successfully"
Dec 13 01:56:25.350290 systemd-networkd[1685]: lxc65f314b25fd3: Gained IPv6LL
Dec 13 01:56:25.574673 kubelet[2610]: E1213 01:56:25.574587 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:26.575424 kubelet[2610]: E1213 01:56:26.575363 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:27.576303 kubelet[2610]: E1213 01:56:27.576172 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:28.140890 ntpd[2075]: Listen normally on 13 lxc65f314b25fd3 [fe80::a446:1eff:fecd:2d8a%13]:123
Dec 13 01:56:28.141518 ntpd[2075]: 13 Dec 01:56:28 ntpd[2075]: Listen normally on 13 lxc65f314b25fd3 [fe80::a446:1eff:fecd:2d8a%13]:123
Dec 13 01:56:28.577265 kubelet[2610]: E1213 01:56:28.577186 2610
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:29.577912 kubelet[2610]: E1213 01:56:29.577829 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:30.578620 kubelet[2610]: E1213 01:56:30.578555 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:31.579095 kubelet[2610]: E1213 01:56:31.579034 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:32.580060 kubelet[2610]: E1213 01:56:32.579990 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:33.100159 kubelet[2610]: I1213 01:56:33.099999 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.770969587 podStartE2EDuration="26.099934961s" podCreationTimestamp="2024-12-13 01:56:07 +0000 UTC" firstStartedPulling="2024-12-13 01:56:24.126322628 +0000 UTC m=+69.177079881" lastFinishedPulling="2024-12-13 01:56:24.455287978 +0000 UTC m=+69.506045255" observedRunningTime="2024-12-13 01:56:25.013961876 +0000 UTC m=+70.064719153" watchObservedRunningTime="2024-12-13 01:56:33.099934961 +0000 UTC m=+78.150692250" Dec 13 01:56:33.152942 containerd[2108]: time="2024-12-13T01:56:33.152857877Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:56:33.172651 containerd[2108]: time="2024-12-13T01:56:33.172250477Z" level=info msg="StopContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" with timeout 2 (s)" Dec 13 01:56:33.172957 
containerd[2108]: time="2024-12-13T01:56:33.172896701Z" level=info msg="Stop container \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" with signal terminated" Dec 13 01:56:33.186063 systemd-networkd[1685]: lxc_health: Link DOWN Dec 13 01:56:33.186082 systemd-networkd[1685]: lxc_health: Lost carrier Dec 13 01:56:33.252037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005-rootfs.mount: Deactivated successfully. Dec 13 01:56:33.536379 containerd[2108]: time="2024-12-13T01:56:33.536177155Z" level=info msg="shim disconnected" id=8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005 namespace=k8s.io Dec 13 01:56:33.536379 containerd[2108]: time="2024-12-13T01:56:33.536334007Z" level=warning msg="cleaning up after shim disconnected" id=8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005 namespace=k8s.io Dec 13 01:56:33.536379 containerd[2108]: time="2024-12-13T01:56:33.536360131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.565738 containerd[2108]: time="2024-12-13T01:56:33.565607467Z" level=info msg="StopContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" returns successfully" Dec 13 01:56:33.567088 containerd[2108]: time="2024-12-13T01:56:33.567032791Z" level=info msg="StopPodSandbox for \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\"" Dec 13 01:56:33.567232 containerd[2108]: time="2024-12-13T01:56:33.567102691Z" level=info msg="Container to stop \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.567232 containerd[2108]: time="2024-12-13T01:56:33.567130843Z" level=info msg="Container to stop \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.567232 
containerd[2108]: time="2024-12-13T01:56:33.567154471Z" level=info msg="Container to stop \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.567232 containerd[2108]: time="2024-12-13T01:56:33.567178819Z" level=info msg="Container to stop \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.567232 containerd[2108]: time="2024-12-13T01:56:33.567200899Z" level=info msg="Container to stop \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.573772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2-shm.mount: Deactivated successfully. Dec 13 01:56:33.581162 kubelet[2610]: E1213 01:56:33.581032 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:33.628482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:33.634101 containerd[2108]: time="2024-12-13T01:56:33.633793399Z" level=info msg="shim disconnected" id=ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2 namespace=k8s.io Dec 13 01:56:33.634101 containerd[2108]: time="2024-12-13T01:56:33.633848743Z" level=warning msg="cleaning up after shim disconnected" id=ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2 namespace=k8s.io Dec 13 01:56:33.634101 containerd[2108]: time="2024-12-13T01:56:33.633869431Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.659671 containerd[2108]: time="2024-12-13T01:56:33.659617579Z" level=info msg="TearDown network for sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" successfully" Dec 13 01:56:33.660055 containerd[2108]: time="2024-12-13T01:56:33.659842087Z" level=info msg="StopPodSandbox for \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" returns successfully" Dec 13 01:56:33.764763 kubelet[2610]: I1213 01:56:33.764637 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-xtables-lock\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766438 kubelet[2610]: I1213 01:56:33.764865 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.766438 kubelet[2610]: I1213 01:56:33.765203 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-net\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766438 kubelet[2610]: I1213 01:56:33.765258 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-cgroup\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766438 kubelet[2610]: I1213 01:56:33.765286 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.766438 kubelet[2610]: I1213 01:56:33.765313 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-config-path\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765336 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765357 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-kernel\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765405 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-etc-cni-netd\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765453 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cni-path\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765512 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79394c86-dd5f-463b-9c69-5e7c029486c4-clustermesh-secrets\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.766830 kubelet[2610]: I1213 01:56:33.765573 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-hubble-tls\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765617 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-run\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765661 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-lib-modules\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765745 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-hostproc\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765793 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-bpf-maps\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765853 2610 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjjxs\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-kube-api-access-bjjxs\") pod \"79394c86-dd5f-463b-9c69-5e7c029486c4\" (UID: \"79394c86-dd5f-463b-9c69-5e7c029486c4\") " Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765932 2610 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-xtables-lock\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.767172 kubelet[2610]: I1213 01:56:33.765962 2610 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-net\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.767558 kubelet[2610]: I1213 01:56:33.765988 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-cgroup\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.771879 kubelet[2610]: I1213 01:56:33.770857 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.771879 kubelet[2610]: I1213 01:56:33.770982 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.771879 kubelet[2610]: I1213 01:56:33.771037 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.775154 kubelet[2610]: I1213 01:56:33.775093 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:56:33.776225 kubelet[2610]: I1213 01:56:33.776152 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-kube-api-access-bjjxs" (OuterVolumeSpecName: "kube-api-access-bjjxs") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "kube-api-access-bjjxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:33.776929 kubelet[2610]: I1213 01:56:33.776876 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.777338 kubelet[2610]: I1213 01:56:33.777169 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.777338 kubelet[2610]: I1213 01:56:33.777242 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.777338 kubelet[2610]: I1213 01:56:33.777295 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.782812 systemd[1]: var-lib-kubelet-pods-79394c86\x2ddd5f\x2d463b\x2d9c69\x2d5e7c029486c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjjxs.mount: Deactivated successfully. Dec 13 01:56:33.784087 kubelet[2610]: I1213 01:56:33.783840 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:33.788804 kubelet[2610]: I1213 01:56:33.788462 2610 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79394c86-dd5f-463b-9c69-5e7c029486c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79394c86-dd5f-463b-9c69-5e7c029486c4" (UID: "79394c86-dd5f-463b-9c69-5e7c029486c4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:56:33.866942 kubelet[2610]: I1213 01:56:33.866887 2610 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-lib-modules\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.866942 kubelet[2610]: I1213 01:56:33.866945 2610 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-hubble-tls\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.866979 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-run\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867007 2610 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bjjxs\" (UniqueName: \"kubernetes.io/projected/79394c86-dd5f-463b-9c69-5e7c029486c4-kube-api-access-bjjxs\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867031 2610 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-hostproc\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867054 2610 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-bpf-maps\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867079 2610 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79394c86-dd5f-463b-9c69-5e7c029486c4-cilium-config-path\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 
01:56:33.867107 2610 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-host-proc-sys-kernel\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867131 2610 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-etc-cni-netd\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867277 kubelet[2610]: I1213 01:56:33.867154 2610 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79394c86-dd5f-463b-9c69-5e7c029486c4-cni-path\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:33.867791 kubelet[2610]: I1213 01:56:33.867177 2610 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79394c86-dd5f-463b-9c69-5e7c029486c4-clustermesh-secrets\") on node \"172.31.20.234\" DevicePath \"\"" Dec 13 01:56:34.024283 kubelet[2610]: I1213 01:56:34.024249 2610 scope.go:117] "RemoveContainer" containerID="8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005" Dec 13 01:56:34.028230 containerd[2108]: time="2024-12-13T01:56:34.027729449Z" level=info msg="RemoveContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\"" Dec 13 01:56:34.042104 containerd[2108]: time="2024-12-13T01:56:34.041195609Z" level=info msg="RemoveContainer for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" returns successfully" Dec 13 01:56:34.043158 kubelet[2610]: I1213 01:56:34.042738 2610 scope.go:117] "RemoveContainer" containerID="9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563" Dec 13 01:56:34.045017 containerd[2108]: time="2024-12-13T01:56:34.044952125Z" level=info msg="RemoveContainer for \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\"" Dec 13 
01:56:34.052728 containerd[2108]: time="2024-12-13T01:56:34.052642697Z" level=info msg="RemoveContainer for \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\" returns successfully" Dec 13 01:56:34.053740 kubelet[2610]: I1213 01:56:34.053202 2610 scope.go:117] "RemoveContainer" containerID="405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f" Dec 13 01:56:34.056465 containerd[2108]: time="2024-12-13T01:56:34.056398589Z" level=info msg="RemoveContainer for \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\"" Dec 13 01:56:34.062627 containerd[2108]: time="2024-12-13T01:56:34.062546357Z" level=info msg="RemoveContainer for \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\" returns successfully" Dec 13 01:56:34.063172 kubelet[2610]: I1213 01:56:34.063028 2610 scope.go:117] "RemoveContainer" containerID="a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f" Dec 13 01:56:34.065280 containerd[2108]: time="2024-12-13T01:56:34.065220269Z" level=info msg="RemoveContainer for \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\"" Dec 13 01:56:34.071827 containerd[2108]: time="2024-12-13T01:56:34.071739221Z" level=info msg="RemoveContainer for \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\" returns successfully" Dec 13 01:56:34.072279 kubelet[2610]: I1213 01:56:34.072137 2610 scope.go:117] "RemoveContainer" containerID="f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93" Dec 13 01:56:34.075629 containerd[2108]: time="2024-12-13T01:56:34.075207617Z" level=info msg="RemoveContainer for \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\"" Dec 13 01:56:34.081605 containerd[2108]: time="2024-12-13T01:56:34.081457469Z" level=info msg="RemoveContainer for \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\" returns successfully" Dec 13 01:56:34.082141 kubelet[2610]: I1213 01:56:34.082083 2610 scope.go:117] "RemoveContainer" 
containerID="8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005" Dec 13 01:56:34.082788 containerd[2108]: time="2024-12-13T01:56:34.082673801Z" level=error msg="ContainerStatus for \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\": not found" Dec 13 01:56:34.083343 kubelet[2610]: E1213 01:56:34.083283 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\": not found" containerID="8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005" Dec 13 01:56:34.083475 kubelet[2610]: I1213 01:56:34.083440 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005"} err="failed to get container status \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\": rpc error: code = NotFound desc = an error occurred when try to find container \"8478cef48f45f3c9d451edf1a7608a01f4787f6970fac36d988599fa37a24005\": not found" Dec 13 01:56:34.083475 kubelet[2610]: I1213 01:56:34.083469 2610 scope.go:117] "RemoveContainer" containerID="9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563" Dec 13 01:56:34.084167 containerd[2108]: time="2024-12-13T01:56:34.084057749Z" level=error msg="ContainerStatus for \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\": not found" Dec 13 01:56:34.084498 kubelet[2610]: E1213 01:56:34.084390 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\": not found" containerID="9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563" Dec 13 01:56:34.084498 kubelet[2610]: I1213 01:56:34.084473 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563"} err="failed to get container status \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\": rpc error: code = NotFound desc = an error occurred when try to find container \"9082f571f4fb5918188f7c9e6cf5b57d0ac3a4e8ed9126c19f5cb7fe8acda563\": not found" Dec 13 01:56:34.084498 kubelet[2610]: I1213 01:56:34.084500 2610 scope.go:117] "RemoveContainer" containerID="405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f" Dec 13 01:56:34.085493 containerd[2108]: time="2024-12-13T01:56:34.085104881Z" level=error msg="ContainerStatus for \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\": not found" Dec 13 01:56:34.085712 kubelet[2610]: E1213 01:56:34.085402 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\": not found" containerID="405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f" Dec 13 01:56:34.085712 kubelet[2610]: I1213 01:56:34.085466 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f"} err="failed to get container status \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"405a9dbf1b488f72f671fdddc05fae074a74d289fd421f1ccea9ca34fd7fc15f\": not found" Dec 13 01:56:34.085712 kubelet[2610]: I1213 01:56:34.085493 2610 scope.go:117] "RemoveContainer" containerID="a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f" Dec 13 01:56:34.086491 containerd[2108]: time="2024-12-13T01:56:34.086383709Z" level=error msg="ContainerStatus for \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\": not found" Dec 13 01:56:34.086899 kubelet[2610]: E1213 01:56:34.086846 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\": not found" containerID="a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f" Dec 13 01:56:34.087025 kubelet[2610]: I1213 01:56:34.086921 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f"} err="failed to get container status \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3ac8301dab41a689b87a1d6f0a5da45784843ae4e64aeb6261892b51984995f\": not found" Dec 13 01:56:34.087025 kubelet[2610]: I1213 01:56:34.086954 2610 scope.go:117] "RemoveContainer" containerID="f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93" Dec 13 01:56:34.087975 containerd[2108]: time="2024-12-13T01:56:34.087628133Z" level=error msg="ContainerStatus for \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\" failed" error="rpc error: code = NotFound desc = an error occurred when try 
to find container \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\": not found" Dec 13 01:56:34.088240 kubelet[2610]: E1213 01:56:34.088024 2610 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\": not found" containerID="f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93" Dec 13 01:56:34.088240 kubelet[2610]: I1213 01:56:34.088090 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93"} err="failed to get container status \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\": rpc error: code = NotFound desc = an error occurred when try to find container \"f04102b47f1d15ec09b1d999ca073fcf41882143d1becfb08e1f92e4dd1f4b93\": not found" Dec 13 01:56:34.129539 systemd[1]: var-lib-kubelet-pods-79394c86\x2ddd5f\x2d463b\x2d9c69\x2d5e7c029486c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:56:34.129912 systemd[1]: var-lib-kubelet-pods-79394c86\x2ddd5f\x2d463b\x2d9c69\x2d5e7c029486c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 01:56:34.582109 kubelet[2610]: E1213 01:56:34.582027 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:34.690850 kubelet[2610]: I1213 01:56:34.690799 2610 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" path="/var/lib/kubelet/pods/79394c86-dd5f-463b-9c69-5e7c029486c4/volumes" Dec 13 01:56:35.582722 kubelet[2610]: E1213 01:56:35.582602 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:36.140983 ntpd[2075]: Deleting interface #10 lxc_health, fe80::7c5d:fdff:fea3:735%7#123, interface stats: received=0, sent=0, dropped=0, active_time=49 secs Dec 13 01:56:36.141555 ntpd[2075]: 13 Dec 01:56:36 ntpd[2075]: Deleting interface #10 lxc_health, fe80::7c5d:fdff:fea3:735%7#123, interface stats: received=0, sent=0, dropped=0, active_time=49 secs Dec 13 01:56:36.521731 kubelet[2610]: E1213 01:56:36.521605 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:36.582961 kubelet[2610]: E1213 01:56:36.582882 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:36.728766 kubelet[2610]: E1213 01:56:36.728671 2610 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:56:36.763783 kubelet[2610]: I1213 01:56:36.763653 2610 topology_manager.go:215] "Topology Admit Handler" podUID="6af0b2b5-317a-496e-a032-deab6880b4f7" podNamespace="kube-system" podName="cilium-operator-5cc964979-fsbfd" Dec 13 01:56:36.763783 kubelet[2610]: E1213 01:56:36.763774 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" 
containerName="mount-cgroup" Dec 13 01:56:36.764016 kubelet[2610]: E1213 01:56:36.763801 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" containerName="apply-sysctl-overwrites" Dec 13 01:56:36.764016 kubelet[2610]: E1213 01:56:36.763821 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" containerName="mount-bpf-fs" Dec 13 01:56:36.764016 kubelet[2610]: E1213 01:56:36.763838 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" containerName="clean-cilium-state" Dec 13 01:56:36.764016 kubelet[2610]: E1213 01:56:36.763862 2610 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" containerName="cilium-agent" Dec 13 01:56:36.764016 kubelet[2610]: I1213 01:56:36.763900 2610 memory_manager.go:354] "RemoveStaleState removing state" podUID="79394c86-dd5f-463b-9c69-5e7c029486c4" containerName="cilium-agent" Dec 13 01:56:36.790076 kubelet[2610]: W1213 01:56:36.789912 2610 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.20.234" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.234' and this object Dec 13 01:56:36.790076 kubelet[2610]: E1213 01:56:36.789975 2610 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.20.234" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.20.234' and this object Dec 13 01:56:36.823372 kubelet[2610]: I1213 01:56:36.823295 2610 topology_manager.go:215] "Topology Admit Handler" podUID="af4685cc-a4dd-40ad-ba46-ebb1892c14e7" 
podNamespace="kube-system" podName="cilium-k55hv" Dec 13 01:56:36.884459 kubelet[2610]: I1213 01:56:36.884383 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-bpf-maps\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.884990 kubelet[2610]: I1213 01:56:36.884497 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvln\" (UniqueName: \"kubernetes.io/projected/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-kube-api-access-tzvln\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.884990 kubelet[2610]: I1213 01:56:36.884572 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-xtables-lock\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.884990 kubelet[2610]: I1213 01:56:36.884621 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6af0b2b5-317a-496e-a032-deab6880b4f7-cilium-config-path\") pod \"cilium-operator-5cc964979-fsbfd\" (UID: \"6af0b2b5-317a-496e-a032-deab6880b4f7\") " pod="kube-system/cilium-operator-5cc964979-fsbfd" Dec 13 01:56:36.884990 kubelet[2610]: I1213 01:56:36.884669 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-hostproc\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.884990 kubelet[2610]: 
I1213 01:56:36.884747 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-cilium-config-path\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885363 kubelet[2610]: I1213 01:56:36.884811 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-hubble-tls\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885363 kubelet[2610]: I1213 01:56:36.884895 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-clustermesh-secrets\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885363 kubelet[2610]: I1213 01:56:36.884960 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85wp\" (UniqueName: \"kubernetes.io/projected/6af0b2b5-317a-496e-a032-deab6880b4f7-kube-api-access-j85wp\") pod \"cilium-operator-5cc964979-fsbfd\" (UID: \"6af0b2b5-317a-496e-a032-deab6880b4f7\") " pod="kube-system/cilium-operator-5cc964979-fsbfd" Dec 13 01:56:36.885363 kubelet[2610]: I1213 01:56:36.885030 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-etc-cni-netd\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885363 kubelet[2610]: I1213 01:56:36.885075 2610 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-host-proc-sys-net\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885133 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-host-proc-sys-kernel\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885184 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-cilium-ipsec-secrets\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885228 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-lib-modules\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885298 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-cilium-run\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885341 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-cilium-cgroup\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:36.885621 kubelet[2610]: I1213 01:56:36.885386 2610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af4685cc-a4dd-40ad-ba46-ebb1892c14e7-cni-path\") pod \"cilium-k55hv\" (UID: \"af4685cc-a4dd-40ad-ba46-ebb1892c14e7\") " pod="kube-system/cilium-k55hv" Dec 13 01:56:37.583883 kubelet[2610]: E1213 01:56:37.583810 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:37.969818 containerd[2108]: time="2024-12-13T01:56:37.969576337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fsbfd,Uid:6af0b2b5-317a-496e-a032-deab6880b4f7,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:38.012094 containerd[2108]: time="2024-12-13T01:56:38.011837985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:38.012094 containerd[2108]: time="2024-12-13T01:56:38.011988957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:38.012504 containerd[2108]: time="2024-12-13T01:56:38.012043713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.012504 containerd[2108]: time="2024-12-13T01:56:38.012300681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.030851 containerd[2108]: time="2024-12-13T01:56:38.030668265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k55hv,Uid:af4685cc-a4dd-40ad-ba46-ebb1892c14e7,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:38.070935 systemd[1]: run-containerd-runc-k8s.io-da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad-runc.XrCIe3.mount: Deactivated successfully. Dec 13 01:56:38.111937 containerd[2108]: time="2024-12-13T01:56:38.111774549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:38.113165 containerd[2108]: time="2024-12-13T01:56:38.112713297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:38.113165 containerd[2108]: time="2024-12-13T01:56:38.112769877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.113165 containerd[2108]: time="2024-12-13T01:56:38.113005485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:38.177311 containerd[2108]: time="2024-12-13T01:56:38.176857834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fsbfd,Uid:6af0b2b5-317a-496e-a032-deab6880b4f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad\"" Dec 13 01:56:38.181314 containerd[2108]: time="2024-12-13T01:56:38.181031158Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:56:38.208538 containerd[2108]: time="2024-12-13T01:56:38.208335406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k55hv,Uid:af4685cc-a4dd-40ad-ba46-ebb1892c14e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\"" Dec 13 01:56:38.213895 containerd[2108]: time="2024-12-13T01:56:38.213823630Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:56:38.235979 containerd[2108]: time="2024-12-13T01:56:38.235829230Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"223152f083501cef015af901b604ab7840246b9f61c7617b79653aebf32920ca\"" Dec 13 01:56:38.237074 containerd[2108]: time="2024-12-13T01:56:38.236800978Z" level=info msg="StartContainer for \"223152f083501cef015af901b604ab7840246b9f61c7617b79653aebf32920ca\"" Dec 13 01:56:38.258776 kubelet[2610]: I1213 01:56:38.258670 2610 setters.go:568] "Node became not ready" node="172.31.20.234" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:56:38Z","lastTransitionTime":"2024-12-13T01:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:56:38.326167 containerd[2108]: time="2024-12-13T01:56:38.326087878Z" level=info msg="StartContainer for \"223152f083501cef015af901b604ab7840246b9f61c7617b79653aebf32920ca\" returns successfully" Dec 13 01:56:38.406336 containerd[2108]: time="2024-12-13T01:56:38.406231463Z" level=info msg="shim disconnected" id=223152f083501cef015af901b604ab7840246b9f61c7617b79653aebf32920ca namespace=k8s.io Dec 13 01:56:38.406336 containerd[2108]: time="2024-12-13T01:56:38.406378499Z" level=warning msg="cleaning up after shim disconnected" id=223152f083501cef015af901b604ab7840246b9f61c7617b79653aebf32920ca namespace=k8s.io Dec 13 01:56:38.406336 containerd[2108]: time="2024-12-13T01:56:38.406402079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:38.584937 kubelet[2610]: E1213 01:56:38.584736 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:39.050028 containerd[2108]: time="2024-12-13T01:56:39.049963378Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:56:39.076303 containerd[2108]: time="2024-12-13T01:56:39.076221058Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4\"" Dec 13 01:56:39.079742 containerd[2108]: time="2024-12-13T01:56:39.079634506Z" level=info msg="StartContainer for 
\"a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4\"" Dec 13 01:56:39.189044 containerd[2108]: time="2024-12-13T01:56:39.188823419Z" level=info msg="StartContainer for \"a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4\" returns successfully" Dec 13 01:56:39.238170 containerd[2108]: time="2024-12-13T01:56:39.237965399Z" level=info msg="shim disconnected" id=a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4 namespace=k8s.io Dec 13 01:56:39.238715 containerd[2108]: time="2024-12-13T01:56:39.238444775Z" level=warning msg="cleaning up after shim disconnected" id=a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4 namespace=k8s.io Dec 13 01:56:39.238715 containerd[2108]: time="2024-12-13T01:56:39.238475315Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:39.586020 kubelet[2610]: E1213 01:56:39.585953 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:40.019377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e1b5d1f72c49ef043fc032c432a0068c35f7041d9845548ca8f9fd539c39c4-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:40.056792 containerd[2108]: time="2024-12-13T01:56:40.056562755Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:56:40.095648 containerd[2108]: time="2024-12-13T01:56:40.095560739Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7\"" Dec 13 01:56:40.096480 containerd[2108]: time="2024-12-13T01:56:40.096411083Z" level=info msg="StartContainer for \"c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7\"" Dec 13 01:56:40.204006 containerd[2108]: time="2024-12-13T01:56:40.203879304Z" level=info msg="StartContainer for \"c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7\" returns successfully" Dec 13 01:56:40.255180 containerd[2108]: time="2024-12-13T01:56:40.254957988Z" level=info msg="shim disconnected" id=c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7 namespace=k8s.io Dec 13 01:56:40.255180 containerd[2108]: time="2024-12-13T01:56:40.255175632Z" level=warning msg="cleaning up after shim disconnected" id=c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7 namespace=k8s.io Dec 13 01:56:40.255531 containerd[2108]: time="2024-12-13T01:56:40.255198324Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:40.587014 kubelet[2610]: E1213 01:56:40.586959 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:41.019537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5a46418b1f48b9c717489e4a4ea62111465bc495c8ec5d201a67158ed28e8a7-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:41.063361 containerd[2108]: time="2024-12-13T01:56:41.063308952Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:56:41.099282 containerd[2108]: time="2024-12-13T01:56:41.099096240Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077\"" Dec 13 01:56:41.100076 containerd[2108]: time="2024-12-13T01:56:41.100019748Z" level=info msg="StartContainer for \"e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077\"" Dec 13 01:56:41.200472 containerd[2108]: time="2024-12-13T01:56:41.199663537Z" level=info msg="StartContainer for \"e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077\" returns successfully" Dec 13 01:56:41.241976 containerd[2108]: time="2024-12-13T01:56:41.241896313Z" level=info msg="shim disconnected" id=e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077 namespace=k8s.io Dec 13 01:56:41.241976 containerd[2108]: time="2024-12-13T01:56:41.241975501Z" level=warning msg="cleaning up after shim disconnected" id=e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077 namespace=k8s.io Dec 13 01:56:41.242305 containerd[2108]: time="2024-12-13T01:56:41.241998013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:41.588216 kubelet[2610]: E1213 01:56:41.588110 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:41.730352 kubelet[2610]: E1213 01:56:41.730308 2610 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 
01:56:41.949638 containerd[2108]: time="2024-12-13T01:56:41.949288480Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:41.952017 containerd[2108]: time="2024-12-13T01:56:41.951946720Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138326" Dec 13 01:56:41.954364 containerd[2108]: time="2024-12-13T01:56:41.954293813Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:41.957347 containerd[2108]: time="2024-12-13T01:56:41.957266801Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.776079799s" Dec 13 01:56:41.957516 containerd[2108]: time="2024-12-13T01:56:41.957345593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:56:41.960957 containerd[2108]: time="2024-12-13T01:56:41.960901877Z" level=info msg="CreateContainer within sandbox \"da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:56:41.981066 containerd[2108]: time="2024-12-13T01:56:41.980927093Z" level=info msg="CreateContainer within sandbox 
\"da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e\"" Dec 13 01:56:41.982545 containerd[2108]: time="2024-12-13T01:56:41.982197977Z" level=info msg="StartContainer for \"8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e\"" Dec 13 01:56:42.029650 systemd[1]: run-containerd-runc-k8s.io-e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077-runc.Hsv0yj.mount: Deactivated successfully. Dec 13 01:56:42.030364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e30d8e74f1d25c9a04b8f6b326874c555a0ddd9d3bd0ac6da76b70a0e9594077-rootfs.mount: Deactivated successfully. Dec 13 01:56:42.084082 containerd[2108]: time="2024-12-13T01:56:42.084027637Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:56:42.106398 containerd[2108]: time="2024-12-13T01:56:42.106143505Z" level=info msg="StartContainer for \"8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e\" returns successfully" Dec 13 01:56:42.136196 containerd[2108]: time="2024-12-13T01:56:42.136111657Z" level=info msg="CreateContainer within sandbox \"393577ba00c852926cd283432bbc5cb745f34a67e3c25e8dcd105caaa8dc4dd1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"19a89a6b0c10c62b5cc063a60d5f370eb98e56bffa586d375e17a931f6ec1892\"" Dec 13 01:56:42.137316 containerd[2108]: time="2024-12-13T01:56:42.137246197Z" level=info msg="StartContainer for \"19a89a6b0c10c62b5cc063a60d5f370eb98e56bffa586d375e17a931f6ec1892\"" Dec 13 01:56:42.283258 containerd[2108]: time="2024-12-13T01:56:42.283007930Z" level=info msg="StartContainer for \"19a89a6b0c10c62b5cc063a60d5f370eb98e56bffa586d375e17a931f6ec1892\" returns successfully" Dec 13 01:56:42.589428 kubelet[2610]: E1213 
01:56:42.589197 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:43.071770 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 01:56:43.126015 kubelet[2610]: I1213 01:56:43.125928 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k55hv" podStartSLOduration=7.125852822 podStartE2EDuration="7.125852822s" podCreationTimestamp="2024-12-13 01:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:43.12576491 +0000 UTC m=+88.176522211" watchObservedRunningTime="2024-12-13 01:56:43.125852822 +0000 UTC m=+88.176610087" Dec 13 01:56:43.147435 kubelet[2610]: I1213 01:56:43.146898 2610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-fsbfd" podStartSLOduration=3.369162707 podStartE2EDuration="7.146838506s" podCreationTimestamp="2024-12-13 01:56:36 +0000 UTC" firstStartedPulling="2024-12-13 01:56:38.180103306 +0000 UTC m=+83.230860571" lastFinishedPulling="2024-12-13 01:56:41.957779105 +0000 UTC m=+87.008536370" observedRunningTime="2024-12-13 01:56:43.14667365 +0000 UTC m=+88.197430915" watchObservedRunningTime="2024-12-13 01:56:43.146838506 +0000 UTC m=+88.197595783" Dec 13 01:56:43.590457 kubelet[2610]: E1213 01:56:43.590395 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:44.590965 kubelet[2610]: E1213 01:56:44.590877 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:45.591427 kubelet[2610]: E1213 01:56:45.591360 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:56:46.592421 kubelet[2610]: E1213 
01:56:46.592353 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:47.198065 systemd-networkd[1685]: lxc_health: Link UP
Dec 13 01:56:47.202677 (udev-worker)[5393]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:56:47.208686 systemd-networkd[1685]: lxc_health: Gained carrier
Dec 13 01:56:47.593411 kubelet[2610]: E1213 01:56:47.593334 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:48.594004 kubelet[2610]: E1213 01:56:48.593879 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:48.837987 systemd-networkd[1685]: lxc_health: Gained IPv6LL
Dec 13 01:56:49.594469 kubelet[2610]: E1213 01:56:49.594383 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:50.595090 kubelet[2610]: E1213 01:56:50.595009 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:51.141885 ntpd[2075]: Listen normally on 14 lxc_health [fe80::54ab:54ff:fe73:61f3%15]:123
Dec 13 01:56:51.143067 ntpd[2075]: 13 Dec 01:56:51 ntpd[2075]: Listen normally on 14 lxc_health [fe80::54ab:54ff:fe73:61f3%15]:123
Dec 13 01:56:51.596310 kubelet[2610]: E1213 01:56:51.596214 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:52.597077 kubelet[2610]: E1213 01:56:52.597005 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:53.542222 kubelet[2610]: E1213 01:56:53.542133 2610 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49894->127.0.0.1:39143: write tcp 127.0.0.1:49894->127.0.0.1:39143: write: broken pipe
Dec 13 01:56:53.598002 kubelet[2610]: E1213 01:56:53.597891 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:54.598767 kubelet[2610]: E1213 01:56:54.598624 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:55.599135 kubelet[2610]: E1213 01:56:55.599032 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:56.522926 kubelet[2610]: E1213 01:56:56.522793 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:56.599731 kubelet[2610]: E1213 01:56:56.599628 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:57.600139 kubelet[2610]: E1213 01:56:57.600008 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:58.601122 kubelet[2610]: E1213 01:56:58.601065 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:56:59.602194 kubelet[2610]: E1213 01:56:59.602084 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:00.602647 kubelet[2610]: E1213 01:57:00.602572 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:01.603117 kubelet[2610]: E1213 01:57:01.603051 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:02.604109 kubelet[2610]: E1213 01:57:02.604038 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:03.604988 kubelet[2610]: E1213 01:57:03.604875 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:04.605942 kubelet[2610]: E1213 01:57:04.605867 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:05.607067 kubelet[2610]: E1213 01:57:05.607001 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:06.607932 kubelet[2610]: E1213 01:57:06.607854 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:07.608137 kubelet[2610]: E1213 01:57:07.608025 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:08.608631 kubelet[2610]: E1213 01:57:08.608566 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:08.993221 kubelet[2610]: E1213 01:57:08.993164 2610 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.234\": Get \"https://172.31.16.194:6443/api/v1/nodes/172.31.20.234?resourceVersion=0&timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:08.993656 kubelet[2610]: E1213 01:57:08.993620 2610 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.234\": Get \"https://172.31.16.194:6443/api/v1/nodes/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:08.994269 kubelet[2610]: E1213 01:57:08.994069 2610 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.234\": Get \"https://172.31.16.194:6443/api/v1/nodes/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:08.994833 kubelet[2610]: E1213 01:57:08.994572 2610 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.234\": Get \"https://172.31.16.194:6443/api/v1/nodes/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:08.995146 kubelet[2610]: E1213 01:57:08.995121 2610 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.20.234\": Get \"https://172.31.16.194:6443/api/v1/nodes/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:08.995266 kubelet[2610]: E1213 01:57:08.995247 2610 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
Dec 13 01:57:09.099946 kubelet[2610]: E1213 01:57:09.099894 2610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:09.100831 kubelet[2610]: E1213 01:57:09.100519 2610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:09.101484 kubelet[2610]: E1213 01:57:09.101218 2610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:09.101800 kubelet[2610]: E1213 01:57:09.101773 2610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:09.102539 kubelet[2610]: E1213 01:57:09.102500 2610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused"
Dec 13 01:57:09.102667 kubelet[2610]: I1213 01:57:09.102556 2610 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 13 01:57:09.103118 kubelet[2610]: E1213 01:57:09.103053 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="200ms"
Dec 13 01:57:09.304604 kubelet[2610]: E1213 01:57:09.304339 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="400ms"
Dec 13 01:57:09.609100 kubelet[2610]: E1213 01:57:09.608892 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:09.705944 kubelet[2610]: E1213 01:57:09.705888 2610 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.20.234?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="800ms"
Dec 13 01:57:10.609783 kubelet[2610]: E1213 01:57:10.609723 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:11.610622 kubelet[2610]: E1213 01:57:11.610561 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:12.611640 kubelet[2610]: E1213 01:57:12.611564 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:13.612634 kubelet[2610]: E1213 01:57:13.612575 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:14.613551 kubelet[2610]: E1213 01:57:14.613415 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:15.614284 kubelet[2610]: E1213 01:57:15.614212 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.521654 kubelet[2610]: E1213 01:57:16.521582 2610 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.554487 containerd[2108]: time="2024-12-13T01:57:16.554427228Z" level=info msg="StopPodSandbox for \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\""
Dec 13 01:57:16.555294 containerd[2108]: time="2024-12-13T01:57:16.554653380Z" level=info msg="TearDown network for sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" successfully"
Dec 13 01:57:16.555294 containerd[2108]: time="2024-12-13T01:57:16.554743200Z" level=info msg="StopPodSandbox for \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" returns successfully"
Dec 13 01:57:16.556126 containerd[2108]: time="2024-12-13T01:57:16.556053240Z" level=info msg="RemovePodSandbox for \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\""
Dec 13 01:57:16.556302 containerd[2108]: time="2024-12-13T01:57:16.556126416Z" level=info msg="Forcibly stopping sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\""
Dec 13 01:57:16.556302 containerd[2108]: time="2024-12-13T01:57:16.556236504Z" level=info msg="TearDown network for sandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" successfully"
Dec 13 01:57:16.571912 containerd[2108]: time="2024-12-13T01:57:16.571815912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 01:57:16.572088 containerd[2108]: time="2024-12-13T01:57:16.571940772Z" level=info msg="RemovePodSandbox \"ae43ba87351d773f9ae912925643c29b3dd30c4a0e3ea0dd0d5be9d0dbfabbc2\" returns successfully"
Dec 13 01:57:16.615223 kubelet[2610]: E1213 01:57:16.615145 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:57:16.703511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e-rootfs.mount: Deactivated successfully.
Dec 13 01:57:16.734861 containerd[2108]: time="2024-12-13T01:57:16.734595157Z" level=info msg="shim disconnected" id=8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e namespace=k8s.io
Dec 13 01:57:16.734861 containerd[2108]: time="2024-12-13T01:57:16.734763829Z" level=warning msg="cleaning up after shim disconnected" id=8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e namespace=k8s.io
Dec 13 01:57:16.734861 containerd[2108]: time="2024-12-13T01:57:16.734788549Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:57:16.765222 containerd[2108]: time="2024-12-13T01:57:16.765110209Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:57:17.229066 kubelet[2610]: I1213 01:57:17.228929 2610 scope.go:117] "RemoveContainer" containerID="8de61e84c0fbdb155c571c4bef1fc149cd8e6734d746271637d8720a3317ae7e"
Dec 13 01:57:17.233317 containerd[2108]: time="2024-12-13T01:57:17.233210796Z" level=info msg="CreateContainer within sandbox \"da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Dec 13 01:57:17.262952 containerd[2108]: time="2024-12-13T01:57:17.262863912Z" level=info msg="CreateContainer within sandbox \"da9a76f485a8644e9b8d3884f86def5ffe18f909f2b19d3be6b8cc124bfea7ad\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"acd0d000de34d3944c26ebf5bd1ba054101f874d703c6c934d85bad137974295\""
Dec 13 01:57:17.263660 containerd[2108]: time="2024-12-13T01:57:17.263595492Z" level=info msg="StartContainer for \"acd0d000de34d3944c26ebf5bd1ba054101f874d703c6c934d85bad137974295\""
Dec 13 01:57:17.358125 containerd[2108]: time="2024-12-13T01:57:17.357683604Z" level=info msg="StartContainer for \"acd0d000de34d3944c26ebf5bd1ba054101f874d703c6c934d85bad137974295\" returns successfully"
Dec 13 01:57:17.616873 kubelet[2610]: E1213 01:57:17.616663 2610 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"