Dec 13 01:53:31.241666 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:53:31.241723 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:53:31.241751 kernel: KASLR disabled due to lack of seed
Dec 13 01:53:31.241769 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:53:31.241785 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:53:31.241801 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:53:31.241820 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:53:31.241835 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:53:31.241851 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:53:31.241867 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:53:31.241888 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:53:31.241905 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:53:31.241921 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:53:31.241938 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:53:31.241959 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:53:31.241980 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:53:31.241998 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:53:31.242015 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:53:31.242032 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:53:31.242049 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:53:31.242066 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:53:31.242084 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:31.242101 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:53:31.242118 kernel: Zone ranges:
Dec 13 01:53:31.242136 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:53:31.242154 kernel: DMA32 empty
Dec 13 01:53:31.242178 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:53:31.242196 kernel: Movable zone start for each node
Dec 13 01:53:31.242212 kernel: Early memory node ranges
Dec 13 01:53:31.242230 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:53:31.242247 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:53:31.242264 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:53:31.242282 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:53:31.242299 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:53:31.242315 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:53:31.242332 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:53:31.242349 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:53:31.242366 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:31.242390 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:53:31.242408 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:53:31.242433 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:53:31.242451 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:53:31.242471 kernel: psci: Trusted OS migration not required
Dec 13 01:53:31.242492 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:53:31.242539 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:53:31.242568 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:53:31.242588 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:53:31.242606 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:53:31.242624 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:53:31.242642 kernel: CPU features: detected: Spectre-v2
Dec 13 01:53:31.242660 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:53:31.242679 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:53:31.242696 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:53:31.242714 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:53:31.242742 kernel: alternatives: applying boot alternatives
Dec 13 01:53:31.242763 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:31.242783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:53:31.242802 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:53:31.242821 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:53:31.242838 kernel: Fallback order for Node 0: 0
Dec 13 01:53:31.242857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:53:31.242874 kernel: Policy zone: Normal
Dec 13 01:53:31.242893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:53:31.242910 kernel: software IO TLB: area num 2.
Dec 13 01:53:31.242927 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:53:31.242951 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:53:31.242970 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:53:31.242987 kernel: trace event string verifier disabled
Dec 13 01:53:31.243005 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:53:31.243024 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:53:31.243042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:53:31.243062 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:53:31.243080 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:53:31.243099 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:53:31.243117 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:53:31.243134 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:53:31.243158 kernel: GICv3: 96 SPIs implemented
Dec 13 01:53:31.243176 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:53:31.243194 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:53:31.243212 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:53:31.243230 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:53:31.243247 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:53:31.243265 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:53:31.243283 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:53:31.243302 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:53:31.243320 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:53:31.243338 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:53:31.243356 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:53:31.243381 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:53:31.243400 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:53:31.243418 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:53:31.243437 kernel: Console: colour dummy device 80x25
Dec 13 01:53:31.243457 kernel: printk: console [tty1] enabled
Dec 13 01:53:31.243475 kernel: ACPI: Core revision 20230628
Dec 13 01:53:31.243494 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:53:31.246587 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:53:31.246635 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:53:31.246666 kernel: landlock: Up and running.
Dec 13 01:53:31.246687 kernel: SELinux: Initializing.
Dec 13 01:53:31.246707 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:31.246726 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:31.246745 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:31.246763 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:31.246781 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:53:31.246801 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:53:31.246821 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:53:31.246845 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:53:31.246866 kernel: Remapping and enabling EFI services.
Dec 13 01:53:31.246886 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:53:31.246904 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:53:31.246922 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:53:31.246941 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:53:31.246959 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:53:31.246978 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:53:31.246995 kernel: SMP: Total of 2 processors activated.
Dec 13 01:53:31.247018 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:53:31.247037 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:53:31.247056 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:53:31.247087 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:53:31.247111 kernel: alternatives: applying system-wide alternatives
Dec 13 01:53:31.247130 kernel: devtmpfs: initialized
Dec 13 01:53:31.247149 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:53:31.247169 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:53:31.247188 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:53:31.247207 kernel: SMBIOS 3.0.0 present.
Dec 13 01:53:31.247232 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:53:31.247251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:53:31.247271 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:53:31.247290 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:53:31.247310 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:53:31.247328 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:53:31.247348 kernel: audit: type=2000 audit(0.310:1): state=initialized audit_enabled=0 res=1
Dec 13 01:53:31.247372 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:53:31.247391 kernel: cpuidle: using governor menu
Dec 13 01:53:31.247411 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:53:31.247429 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:53:31.247448 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:53:31.247467 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:53:31.247486 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:53:31.247505 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:53:31.248607 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:53:31.248643 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:53:31.248664 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:53:31.248683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:53:31.248702 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:53:31.248721 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:53:31.248739 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:53:31.248758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:53:31.248777 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:53:31.248795 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:53:31.248819 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:53:31.248839 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:53:31.248858 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:53:31.248876 kernel: ACPI: Interpreter enabled
Dec 13 01:53:31.248895 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:53:31.248914 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:53:31.248933 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:53:31.249269 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:53:31.250868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:53:31.253748 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:53:31.254003 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:53:31.254269 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:53:31.254312 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:53:31.254333 kernel: acpiphp: Slot [1] registered
Dec 13 01:53:31.254354 kernel: acpiphp: Slot [2] registered
Dec 13 01:53:31.254374 kernel: acpiphp: Slot [3] registered
Dec 13 01:53:31.254409 kernel: acpiphp: Slot [4] registered
Dec 13 01:53:31.254430 kernel: acpiphp: Slot [5] registered
Dec 13 01:53:31.254450 kernel: acpiphp: Slot [6] registered
Dec 13 01:53:31.254471 kernel: acpiphp: Slot [7] registered
Dec 13 01:53:31.254492 kernel: acpiphp: Slot [8] registered
Dec 13 01:53:31.255614 kernel: acpiphp: Slot [9] registered
Dec 13 01:53:31.255661 kernel: acpiphp: Slot [10] registered
Dec 13 01:53:31.255682 kernel: acpiphp: Slot [11] registered
Dec 13 01:53:31.255701 kernel: acpiphp: Slot [12] registered
Dec 13 01:53:31.255720 kernel: acpiphp: Slot [13] registered
Dec 13 01:53:31.255751 kernel: acpiphp: Slot [14] registered
Dec 13 01:53:31.255771 kernel: acpiphp: Slot [15] registered
Dec 13 01:53:31.255791 kernel: acpiphp: Slot [16] registered
Dec 13 01:53:31.255811 kernel: acpiphp: Slot [17] registered
Dec 13 01:53:31.255830 kernel: acpiphp: Slot [18] registered
Dec 13 01:53:31.255849 kernel: acpiphp: Slot [19] registered
Dec 13 01:53:31.255867 kernel: acpiphp: Slot [20] registered
Dec 13 01:53:31.255886 kernel: acpiphp: Slot [21] registered
Dec 13 01:53:31.255905 kernel: acpiphp: Slot [22] registered
Dec 13 01:53:31.255929 kernel: acpiphp: Slot [23] registered
Dec 13 01:53:31.255949 kernel: acpiphp: Slot [24] registered
Dec 13 01:53:31.255968 kernel: acpiphp: Slot [25] registered
Dec 13 01:53:31.255987 kernel: acpiphp: Slot [26] registered
Dec 13 01:53:31.256006 kernel: acpiphp: Slot [27] registered
Dec 13 01:53:31.256025 kernel: acpiphp: Slot [28] registered
Dec 13 01:53:31.256044 kernel: acpiphp: Slot [29] registered
Dec 13 01:53:31.256064 kernel: acpiphp: Slot [30] registered
Dec 13 01:53:31.256082 kernel: acpiphp: Slot [31] registered
Dec 13 01:53:31.256101 kernel: PCI host bridge to bus 0000:00
Dec 13 01:53:31.256453 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:53:31.257232 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:53:31.257503 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:31.257759 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:53:31.258007 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:53:31.258243 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:53:31.258502 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:53:31.260963 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:53:31.261392 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:53:31.261711 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:31.261969 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:53:31.262200 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:53:31.262462 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:31.264844 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:53:31.265093 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:31.265316 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:31.265606 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:53:31.265842 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:53:31.266100 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:53:31.266344 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:53:31.268709 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:53:31.268952 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:53:31.269155 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:31.269183 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:53:31.269204 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:53:31.269224 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:53:31.269244 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:53:31.269263 kernel: iommu: Default domain type: Translated
Dec 13 01:53:31.269296 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:53:31.269316 kernel: efivars: Registered efivars operations
Dec 13 01:53:31.269335 kernel: vgaarb: loaded
Dec 13 01:53:31.269374 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:53:31.269399 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:53:31.269420 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:53:31.269441 kernel: pnp: PnP ACPI init
Dec 13 01:53:31.269788 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:53:31.269853 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:53:31.269879 kernel: NET: Registered PF_INET protocol family
Dec 13 01:53:31.269899 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:53:31.269919 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:53:31.269939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:53:31.269958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:53:31.269978 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:53:31.269997 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:53:31.270018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:31.270044 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:31.270067 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:53:31.270089 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:53:31.270114 kernel: kvm [1]: HYP mode not available
Dec 13 01:53:31.270138 kernel: Initialise system trusted keyrings
Dec 13 01:53:31.270158 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:53:31.270177 kernel: Key type asymmetric registered
Dec 13 01:53:31.270196 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:53:31.270215 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:53:31.270242 kernel: io scheduler mq-deadline registered
Dec 13 01:53:31.270262 kernel: io scheduler kyber registered
Dec 13 01:53:31.270281 kernel: io scheduler bfq registered
Dec 13 01:53:31.272250 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:53:31.272304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:53:31.272326 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:53:31.272348 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:53:31.272367 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:53:31.272397 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:53:31.272418 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:53:31.272708 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:53:31.272745 kernel: printk: console [ttyS0] disabled
Dec 13 01:53:31.272768 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:53:31.272788 kernel: printk: console [ttyS0] enabled
Dec 13 01:53:31.272808 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:53:31.272830 kernel: thunder_xcv, ver 1.0
Dec 13 01:53:31.272849 kernel: thunder_bgx, ver 1.0
Dec 13 01:53:31.272879 kernel: nicpf, ver 1.0
Dec 13 01:53:31.272899 kernel: nicvf, ver 1.0
Dec 13 01:53:31.273162 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:53:31.273415 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:53:30 UTC (1734054810)
Dec 13 01:53:31.273452 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:53:31.273472 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:53:31.273492 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:53:31.273556 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:53:31.273597 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:53:31.273619 kernel: Segment Routing with IPv6
Dec 13 01:53:31.273638 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:53:31.273658 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:53:31.273678 kernel: Key type dns_resolver registered
Dec 13 01:53:31.273699 kernel: registered taskstats version 1
Dec 13 01:53:31.273718 kernel: Loading compiled-in X.509 certificates
Dec 13 01:53:31.273738 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:53:31.273756 kernel: Key type .fscrypt registered
Dec 13 01:53:31.273782 kernel: Key type fscrypt-provisioning registered
Dec 13 01:53:31.273801 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:53:31.273821 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:53:31.273841 kernel: ima: No architecture policies found
Dec 13 01:53:31.273861 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:53:31.273881 kernel: clk: Disabling unused clocks
Dec 13 01:53:31.273900 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:53:31.273919 kernel: Run /init as init process
Dec 13 01:53:31.273939 kernel: with arguments:
Dec 13 01:53:31.273959 kernel: /init
Dec 13 01:53:31.273987 kernel: with environment:
Dec 13 01:53:31.274006 kernel: HOME=/
Dec 13 01:53:31.274025 kernel: TERM=linux
Dec 13 01:53:31.274043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:53:31.274067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:53:31.274093 systemd[1]: Detected virtualization amazon.
Dec 13 01:53:31.274115 systemd[1]: Detected architecture arm64.
Dec 13 01:53:31.274141 systemd[1]: Running in initrd.
Dec 13 01:53:31.274162 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:53:31.274182 systemd[1]: Hostname set to .
Dec 13 01:53:31.274204 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:53:31.274225 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:53:31.274246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:53:31.274267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:53:31.274289 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:53:31.274315 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:53:31.274337 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:53:31.274358 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:53:31.274383 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:53:31.274405 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:53:31.274426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:53:31.274447 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:53:31.274473 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:53:31.274494 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:53:31.274602 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:53:31.274631 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:53:31.274658 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:53:31.274680 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:53:31.274701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:53:31.274722 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:53:31.274743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:31.274773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:53:31.274795 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:53:31.274816 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:53:31.274838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:53:31.274858 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:53:31.274880 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:53:31.274900 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:53:31.274921 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:53:31.274947 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:53:31.274968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:31.274989 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:53:31.275010 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:53:31.275091 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:53:31.275143 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:53:31.275166 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:53:31.275187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:53:31.275211 kernel: Bridge firewalling registered
Dec 13 01:53:31.275233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:53:31.275255 systemd-journald[251]: Journal started
Dec 13 01:53:31.275294 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e49dd3da603db9cdad22e7b61991f) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:53:31.223931 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:53:31.288823 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:53:31.269454 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:53:31.290075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:31.296639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:53:31.313037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:31.323820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:53:31.331753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:53:31.356807 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:53:31.365087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:31.381862 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:53:31.397292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:53:31.403307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:53:31.410195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:53:31.429820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:53:31.442567 dracut-cmdline[282]: dracut-dracut-053
Dec 13 01:53:31.451464 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:31.518926 systemd-resolved[289]: Positive Trust Anchors:
Dec 13 01:53:31.518962 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:53:31.519026 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:53:31.651853 kernel: SCSI subsystem initialized
Dec 13 01:53:31.659644 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:53:31.673654 kernel: iscsi: registered transport (tcp)
Dec 13 01:53:31.697102 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:53:31.697184 kernel: QLogic iSCSI HBA Driver
Dec 13 01:53:31.771560 kernel: random: crng init done
Dec 13 01:53:31.769963 systemd-resolved[289]: Defaulting to hostname 'linux'.
Dec 13 01:53:31.773855 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:53:31.778570 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:53:31.807579 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:53:31.822400 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:53:31.856872 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:53:31.856955 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:53:31.856984 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:53:31.929608 kernel: raid6: neonx8 gen() 6637 MB/s
Dec 13 01:53:31.946577 kernel: raid6: neonx4 gen() 6459 MB/s
Dec 13 01:53:31.963572 kernel: raid6: neonx2 gen() 5381 MB/s
Dec 13 01:53:31.980576 kernel: raid6: neonx1 gen() 3903 MB/s
Dec 13 01:53:31.997571 kernel: raid6: int64x8 gen() 3735 MB/s
Dec 13 01:53:32.014575 kernel: raid6: int64x4 gen() 3676 MB/s
Dec 13 01:53:32.031581 kernel: raid6: int64x2 gen() 3584 MB/s
Dec 13 01:53:32.049414 kernel: raid6: int64x1 gen() 2743 MB/s
Dec 13 01:53:32.049505 kernel: raid6: using algorithm neonx8 gen() 6637 MB/s
Dec 13 01:53:32.067376 kernel: raid6: .... xor() 4847 MB/s, rmw enabled
Dec 13 01:53:32.067453 kernel: raid6: using neon recovery algorithm
Dec 13 01:53:32.075571 kernel: xor: measuring software checksum speed
Dec 13 01:53:32.076568 kernel: 8regs : 9768 MB/sec
Dec 13 01:53:32.078897 kernel: 32regs : 10539 MB/sec
Dec 13 01:53:32.078975 kernel: arm64_neon : 9029 MB/sec
Dec 13 01:53:32.079008 kernel: xor: using function: 32regs (10539 MB/sec)
Dec 13 01:53:32.167570 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:53:32.191162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:53:32.201943 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:53:32.241967 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Dec 13 01:53:32.253700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:53:32.269814 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:53:32.309679 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Dec 13 01:53:32.376799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:53:32.387899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:53:32.523247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:53:32.543208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:53:32.581154 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:53:32.587967 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:53:32.592326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:53:32.594745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:53:32.630787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:53:32.677497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:53:32.737250 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:53:32.737320 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:53:32.765077 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:53:32.765887 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:53:32.766171 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1f:1f:6b:82:a9
Dec 13 01:53:32.772329 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:53:32.780788 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:53:32.780851 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:53:32.788265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:53:32.793884 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:32.802665 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:53:32.796818 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:32.800077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:53:32.800424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:32.810419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:32.825613 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:53:32.825656 kernel: GPT:9289727 != 16777215
Dec 13 01:53:32.825683 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:53:32.825733 kernel: GPT:9289727 != 16777215
Dec 13 01:53:32.825775 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:53:32.825803 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:32.826176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:32.863456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:32.876039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:32.923949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:32.961560 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (542)
Dec 13 01:53:32.971585 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Dec 13 01:53:33.025966 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:53:33.060830 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:53:33.100033 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:53:33.115997 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:33.121077 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:33.143874 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:53:33.158250 disk-uuid[660]: Primary Header is updated.
Dec 13 01:53:33.158250 disk-uuid[660]: Secondary Entries is updated.
Dec 13 01:53:33.158250 disk-uuid[660]: Secondary Header is updated.
Dec 13 01:53:33.168581 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:33.176224 kernel: GPT:disk_guids don't match.
Dec 13 01:53:33.176287 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:53:33.176314 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:33.187553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:34.189976 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:34.190045 disk-uuid[661]: The operation has completed successfully.
Dec 13 01:53:34.376650 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:53:34.376914 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:53:34.420854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:53:34.429548 sh[1004]: Success
Dec 13 01:53:34.459744 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:53:34.567813 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:53:34.582657 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:53:34.592440 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:53:34.625971 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:53:34.626063 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:34.626093 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:53:34.627678 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:53:34.629934 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:53:34.702559 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:53:34.718045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:53:34.722133 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:53:34.737826 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:53:34.747862 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:53:34.768634 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:34.768708 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:34.768738 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:34.787551 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:34.804416 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:53:34.809579 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:34.831748 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:53:34.843974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:53:34.932887 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:53:34.954852 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:53:35.009504 systemd-networkd[1201]: lo: Link UP
Dec 13 01:53:35.009543 systemd-networkd[1201]: lo: Gained carrier
Dec 13 01:53:35.013182 systemd-networkd[1201]: Enumeration completed
Dec 13 01:53:35.013966 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:35.013972 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:53:35.016135 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:53:35.019859 systemd[1]: Reached target network.target - Network.
Dec 13 01:53:35.021654 systemd-networkd[1201]: eth0: Link UP
Dec 13 01:53:35.021661 systemd-networkd[1201]: eth0: Gained carrier
Dec 13 01:53:35.021679 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:35.048606 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.16.194/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:53:35.313838 ignition[1123]: Ignition 2.19.0
Dec 13 01:53:35.313867 ignition[1123]: Stage: fetch-offline
Dec 13 01:53:35.315482 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.315541 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.316215 ignition[1123]: Ignition finished successfully
Dec 13 01:53:35.321720 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:53:35.338862 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:53:35.366563 ignition[1214]: Ignition 2.19.0
Dec 13 01:53:35.366593 ignition[1214]: Stage: fetch
Dec 13 01:53:35.367903 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.367933 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.368099 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.377401 ignition[1214]: PUT result: OK
Dec 13 01:53:35.382370 ignition[1214]: parsed url from cmdline: ""
Dec 13 01:53:35.382401 ignition[1214]: no config URL provided
Dec 13 01:53:35.382418 ignition[1214]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:53:35.382447 ignition[1214]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:53:35.382484 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.399904 ignition[1214]: PUT result: OK
Dec 13 01:53:35.401612 ignition[1214]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:53:35.407109 ignition[1214]: GET result: OK
Dec 13 01:53:35.408404 ignition[1214]: parsing config with SHA512: 61ab00cd5ad233e018a7b2a5d99aa2bc2c54f075fdb35fe6d5869e726f4dc1e3f762a1150958517ccdbb89326b853dbb7c2afd3804ad9bf9f89e833a72381ebd
Dec 13 01:53:35.417577 unknown[1214]: fetched base config from "system"
Dec 13 01:53:35.417620 unknown[1214]: fetched base config from "system"
Dec 13 01:53:35.417637 unknown[1214]: fetched user config from "aws"
Dec 13 01:53:35.423905 ignition[1214]: fetch: fetch complete
Dec 13 01:53:35.423947 ignition[1214]: fetch: fetch passed
Dec 13 01:53:35.424061 ignition[1214]: Ignition finished successfully
Dec 13 01:53:35.432572 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:53:35.443850 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:53:35.473270 ignition[1221]: Ignition 2.19.0
Dec 13 01:53:35.473818 ignition[1221]: Stage: kargs
Dec 13 01:53:35.474458 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.474483 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.474713 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.479684 ignition[1221]: PUT result: OK
Dec 13 01:53:35.488020 ignition[1221]: kargs: kargs passed
Dec 13 01:53:35.488132 ignition[1221]: Ignition finished successfully
Dec 13 01:53:35.491709 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:53:35.502919 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:53:35.537304 ignition[1227]: Ignition 2.19.0
Dec 13 01:53:35.537352 ignition[1227]: Stage: disks
Dec 13 01:53:35.538917 ignition[1227]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.538957 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.539284 ignition[1227]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.542483 ignition[1227]: PUT result: OK
Dec 13 01:53:35.551962 ignition[1227]: disks: disks passed
Dec 13 01:53:35.552136 ignition[1227]: Ignition finished successfully
Dec 13 01:53:35.556282 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:53:35.560407 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:53:35.564023 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:53:35.566306 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:53:35.568165 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:53:35.571094 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:53:35.587905 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:53:35.646178 systemd-fsck[1235]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:53:35.656721 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:53:35.668770 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:53:35.763566 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:53:35.764743 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:53:35.768221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:53:35.804741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:35.812692 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:53:35.814922 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:53:35.815004 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:53:35.815054 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:53:35.836694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:53:35.848951 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:53:35.857685 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1254)
Dec 13 01:53:35.863390 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:35.863490 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:35.863551 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:35.868586 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:35.871072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:36.193282 initrd-setup-root[1278]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:53:36.202969 initrd-setup-root[1285]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:53:36.212311 initrd-setup-root[1292]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:53:36.221853 initrd-setup-root[1299]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:53:36.318700 systemd-networkd[1201]: eth0: Gained IPv6LL
Dec 13 01:53:36.511777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:53:36.527322 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:53:36.535916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:53:36.552580 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:36.553224 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:53:36.595223 ignition[1367]: INFO : Ignition 2.19.0
Dec 13 01:53:36.595223 ignition[1367]: INFO : Stage: mount
Dec 13 01:53:36.603256 ignition[1367]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:36.603256 ignition[1367]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:36.603256 ignition[1367]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:36.603256 ignition[1367]: INFO : PUT result: OK
Dec 13 01:53:36.597603 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:53:36.613727 ignition[1367]: INFO : mount: mount passed
Dec 13 01:53:36.613727 ignition[1367]: INFO : Ignition finished successfully
Dec 13 01:53:36.619239 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:53:36.630781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:53:36.772856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:36.804974 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1379)
Dec 13 01:53:36.805047 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:36.806829 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:36.806893 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:36.812567 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:36.816145 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:36.858147 ignition[1396]: INFO : Ignition 2.19.0 Dec 13 01:53:36.858147 ignition[1396]: INFO : Stage: files Dec 13 01:53:36.861711 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:36.861711 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:36.861711 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:36.869093 ignition[1396]: INFO : PUT result: OK Dec 13 01:53:36.873737 ignition[1396]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:53:36.904558 ignition[1396]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:53:36.904558 ignition[1396]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:53:36.911661 ignition[1396]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:53:36.914286 ignition[1396]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:53:36.914286 ignition[1396]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:53:36.913713 unknown[1396]: wrote ssh authorized keys file for user: core Dec 13 01:53:36.922555 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:53:36.922555 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:53:36.922555 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:53:36.922555 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:53:37.016360 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:53:37.166585 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:53:37.166585 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:53:37.173395 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 01:53:37.643192 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:53:37.782332 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:53:37.785897 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:53:37.785897 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:53:37.785897 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 
01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:53:37.796740 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:53:37.815841 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:53:37.819288 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:37.819288 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:37.819288 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:37.819288 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:53:38.219796 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:53:38.538956 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:38.538956 ignition[1396]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 01:53:38.545816 ignition[1396]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:53:38.550405 ignition[1396]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:53:38.550405 ignition[1396]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:53:38.556818 ignition[1396]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:53:38.556818 ignition[1396]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:38.564634 ignition[1396]: INFO : files: files passed Dec 13 01:53:38.564634 ignition[1396]: INFO : Ignition finished successfully Dec 13 01:53:38.585366 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:53:38.603185 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:53:38.609763 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:53:38.622237 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:53:38.622425 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:53:38.641628 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:38.641628 initrd-setup-root-after-ignition[1425]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:38.649689 initrd-setup-root-after-ignition[1429]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:38.653030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:38.656035 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:53:38.668042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:53:38.734882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:53:38.735326 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:53:38.742437 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:53:38.744858 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:53:38.745177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:53:38.756854 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:53:38.789917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:38.807471 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:53:38.829783 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:38.834178 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:38.838165 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:53:38.841216 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:53:38.841574 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:38.846187 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:53:38.850506 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:53:38.852962 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:53:38.860046 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:53:38.862617 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:53:38.868844 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:53:38.871315 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:53:38.877563 systemd[1]: Stopped target sysinit.target - System Initialization. 
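The repeated "PUT http://169.254.169.254/latest/api/token: attempt #1" / "PUT result: OK" pairs above are Ignition authenticating to the EC2 instance metadata service with IMDSv2 before it reads any instance data. A minimal sketch of that token handshake using only the standard library (the endpoint and headers are the documented IMDSv2 interface; the helper names are ours):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    """PUT /latest/api/token returns a short-lived IMDSv2 session token."""
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    """GET a metadata path, presenting the session token."""
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    print(imds_get("/latest/meta-data/instance-id", tok))
```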
Dec 13 01:53:38.880135 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:53:38.883506 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:53:38.888088 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:53:38.888324 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:53:38.891203 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:38.898568 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:38.900935 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:53:38.902110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:38.905940 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:53:38.906165 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:53:38.915823 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:53:38.916085 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:38.921225 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:53:38.921710 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:53:38.936844 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:53:38.942981 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:53:38.944735 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:53:38.961657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:38.965991 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:53:38.966234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:53:38.981459 ignition[1449]: INFO : Ignition 2.19.0 Dec 13 01:53:38.986589 ignition[1449]: INFO : Stage: umount Dec 13 01:53:38.986589 ignition[1449]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:38.986589 ignition[1449]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:38.986589 ignition[1449]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:38.986589 ignition[1449]: INFO : PUT result: OK Dec 13 01:53:38.984467 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:53:38.986384 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:53:39.007468 ignition[1449]: INFO : umount: umount passed Dec 13 01:53:39.007468 ignition[1449]: INFO : Ignition finished successfully Dec 13 01:53:39.012402 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:53:39.014739 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:53:39.020478 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:53:39.020603 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:53:39.032662 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:53:39.032771 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:53:39.049556 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:53:39.049657 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:53:39.058181 systemd[1]: Stopped target network.target - Network. Dec 13 01:53:39.059980 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 01:53:39.061757 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:53:39.072725 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:53:39.076213 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:53:39.078222 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:39.078335 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:53:39.084699 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:53:39.086492 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:53:39.086593 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:53:39.088432 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:53:39.088500 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:53:39.090404 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:53:39.090486 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:53:39.092379 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:53:39.092456 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:53:39.094884 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:53:39.098719 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:53:39.106502 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:53:39.108164 systemd-networkd[1201]: eth0: DHCPv6 lease lost Dec 13 01:53:39.115869 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:53:39.116072 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:53:39.128450 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:53:39.131509 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:53:39.135664 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:53:39.136089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:53:39.145426 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:53:39.145562 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:39.147956 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:53:39.148050 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:53:39.170825 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:53:39.173810 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:53:39.173922 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:53:39.176576 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:53:39.176658 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:39.179013 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:53:39.179092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:39.181628 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:53:39.181702 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:39.184565 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:53:39.226321 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:53:39.228656 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:39.237200 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:53:39.238448 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:53:39.244065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:53:39.244174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:39.250473 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:53:39.250572 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:39.254182 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:53:39.254270 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:53:39.256420 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:53:39.256502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:53:39.260359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:53:39.260449 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:39.281930 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:53:39.295653 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:53:39.295770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:39.298076 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:53:39.298162 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:39.300454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:53:39.300551 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:39.302845 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:53:39.302923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:39.306422 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:53:39.306821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:53:39.329749 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:53:39.340920 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:53:39.387082 systemd[1]: Switching root. Dec 13 01:53:39.420219 systemd-journald[251]: Journal stopped Dec 13 01:53:42.264704 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:53:42.264836 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:53:42.264888 kernel: SELinux: policy capability open_perms=1 Dec 13 01:53:42.264920 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:53:42.264951 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:53:42.264987 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:53:42.265019 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:53:42.265050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:53:42.265082 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:53:42.265112 kernel: audit: type=1403 audit(1734054820.576:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:53:42.265143 systemd[1]: Successfully loaded SELinux policy in 73.766ms. Dec 13 01:53:42.265195 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.594ms. Dec 13 01:53:42.265239 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:53:42.265273 systemd[1]: Detected virtualization amazon. Dec 13 01:53:42.265384 systemd[1]: Detected architecture arm64. Dec 13 01:53:42.267595 systemd[1]: Detected first boot. Dec 13 01:53:42.267631 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:53:42.267665 zram_generator::config[1508]: No configuration found. Dec 13 01:53:42.267700 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:53:42.267733 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:53:42.267765 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:53:42.267800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:53:42.267838 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:53:42.267871 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:53:42.267903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:53:42.267935 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:53:42.267967 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:53:42.267997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:53:42.268030 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:53:42.268063 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:42.268097 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:42.268132 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:53:42.268162 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:53:42.268195 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:53:42.268227 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
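"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided VM UUID rather than generating a random one, so the ID stays stable for this instance. A rough sketch of where that UUID comes from on EC2 (an assumption-laden simplification; systemd's real machine-id-setup also handles container, KVM, and ACPI sources):

```python
from pathlib import Path

def machine_id_from_vm_uuid() -> str:
    # On EC2/Nitro the firmware exposes the VM UUID via DMI; the machine
    # ID is essentially that UUID lowercased with the dashes removed.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return uuid.lower().replace("-", "")

print(machine_id_from_vm_uuid())  # 32 hex digits, like /etc/machine-id
```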
Dec 13 01:53:42.268260 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:53:42.268291 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:42.268320 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:53:42.268350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:42.268383 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:53:42.268419 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:53:42.268450 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:53:42.268479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:53:42.268536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:53:42.268577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:53:42.268610 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:53:42.268639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:42.268673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:42.268708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:42.268739 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:53:42.268768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:53:42.268797 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:53:42.268829 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:53:42.268859 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:53:42.268888 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:53:42.268918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:53:42.268951 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:53:42.268986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:42.269016 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:53:42.269046 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:53:42.269075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:42.269105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:42.269134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:42.269164 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:53:42.269206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:42.269241 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:53:42.269277 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:53:42.269345 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:53:42.269383 systemd[1]: Starting systemd-journald.service - Journal Service... 
Dec 13 01:53:42.269414 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:53:42.269447 kernel: loop: module loaded Dec 13 01:53:42.269482 kernel: ACPI: bus type drm_connector registered Dec 13 01:53:42.272262 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:53:42.272338 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:53:42.272369 kernel: fuse: init (API version 7.39) Dec 13 01:53:42.272401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:53:42.272433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:53:42.272465 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:53:42.272494 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:53:42.272658 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:53:42.272693 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:53:42.272727 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:53:42.272765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:42.272798 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:53:42.272829 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:53:42.272859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:42.272891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:42.272924 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:53:42.272954 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:42.272984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:42.273018 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:42.273048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:42.273081 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:53:42.273111 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:53:42.273190 systemd-journald[1608]: Collecting audit messages is disabled. Dec 13 01:53:42.273248 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:42.273281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:42.273335 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:42.273367 systemd-journald[1608]: Journal started Dec 13 01:53:42.273415 systemd-journald[1608]: Runtime Journal (/run/log/journal/ec2e49dd3da603db9cdad22e7b61991f) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:53:42.280618 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:53:42.284564 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:53:42.287044 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:53:42.314612 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:53:42.323786 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:53:42.334681 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 13 01:53:42.336821 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:53:42.352898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:53:42.364859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:53:42.367707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:42.372103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:53:42.374829 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:53:42.392758 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:53:42.401781 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:53:42.414648 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:53:42.418920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:53:42.439736 systemd-journald[1608]: Time spent on flushing to /var/log/journal/ec2e49dd3da603db9cdad22e7b61991f is 81.088ms for 903 entries. Dec 13 01:53:42.439736 systemd-journald[1608]: System Journal (/var/log/journal/ec2e49dd3da603db9cdad22e7b61991f) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:53:42.549665 systemd-journald[1608]: Received client request to flush runtime journal. Dec 13 01:53:42.455348 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:53:42.457898 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:53:42.488173 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:42.499913 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:53:42.557270 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:42.569269 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:53:42.574236 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Dec 13 01:53:42.574267 systemd-tmpfiles[1660]: ACLs are not supported, ignoring. Dec 13 01:53:42.576776 udevadm[1669]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:53:42.589507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:42.602966 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:53:42.650291 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:53:42.660998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:53:42.695308 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Dec 13 01:53:42.695341 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Dec 13 01:53:42.703799 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:43.416141 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:53:43.438825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 13 01:53:43.485559 systemd-udevd[1688]: Using default interface naming scheme 'v255'. Dec 13 01:53:43.556473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:43.567924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:53:43.600881 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:53:43.688461 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:53:43.713620 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1704) Dec 13 01:53:43.760555 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:53:43.790609 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1704) Dec 13 01:53:43.792159 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:43.926942 systemd-networkd[1692]: lo: Link UP Dec 13 01:53:43.927464 systemd-networkd[1692]: lo: Gained carrier Dec 13 01:53:43.930242 systemd-networkd[1692]: Enumeration completed Dec 13 01:53:43.930682 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:53:43.935944 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:43.936128 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:43.938599 systemd-networkd[1692]: eth0: Link UP Dec 13 01:53:43.939054 systemd-networkd[1692]: eth0: Gained carrier Dec 13 01:53:43.939209 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:43.940914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:53:43.950729 systemd-networkd[1692]: eth0: DHCPv4 address 172.31.16.194/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:53:43.996989 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1703) Dec 13 01:53:44.008127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:44.183215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:44.208901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:53:44.212278 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:53:44.263755 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:53:44.311772 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:44.351194 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:53:44.354091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:44.372779 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:53:44.381453 lvm[1820]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:44.420187 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:53:44.424305 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
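The DHCPv4 line above puts eth0 at 172.31.16.194/20 with gateway 172.31.16.1; the /20 prefix is why both addresses sit in the same 172.31.16.0/20 block that a default-VPC subnet hands out. The prefix arithmetic, checked with the standard library:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.16.194/20")   # address from the lease
gateway = ipaddress.ip_address("172.31.16.1")        # gateway from the lease

print(iface.network)                     # 172.31.16.0/20
print(gateway in iface.network)          # True: gateway is on-link
print(iface.network.broadcast_address)   # 172.31.31.255
```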
Dec 13 01:53:44.427202 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:53:44.427410 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:53:44.429750 systemd[1]: Reached target machines.target - Containers. Dec 13 01:53:44.433620 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:53:44.441862 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:53:44.447803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:53:44.450923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:44.460858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:53:44.466824 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:53:44.479782 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:53:44.489894 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:53:44.519541 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:53:44.534651 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 01:53:44.549108 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:53:44.551202 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:53:44.628564 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:53:44.648815 kernel: loop1: detected capacity change from 0 to 194512 Dec 13 01:53:44.706081 kernel: loop2: detected capacity change from 0 to 114328 Dec 13 01:53:44.739547 kernel: loop3: detected capacity change from 0 to 52536 Dec 13 01:53:44.790602 kernel: loop4: detected capacity change from 0 to 114432 Dec 13 01:53:44.809589 kernel: loop5: detected capacity change from 0 to 194512 Dec 13 01:53:44.826548 kernel: loop6: detected capacity change from 0 to 114328 Dec 13 01:53:44.840606 kernel: loop7: detected capacity change from 0 to 52536 Dec 13 01:53:44.854587 (sd-merge)[1842]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:53:44.855569 (sd-merge)[1842]: Merged extensions into '/usr'. Dec 13 01:53:44.861680 systemd[1]: Reloading requested from client PID 1828 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:53:44.861890 systemd[1]: Reloading... Dec 13 01:53:44.993061 zram_generator::config[1870]: No configuration found. Dec 13 01:53:45.200618 ldconfig[1824]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:53:45.275901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:45.419357 systemd[1]: Reloading finished in 556 ms. Dec 13 01:53:45.448445 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:53:45.451380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
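The loop0–loop7 capacity-change messages and the sd-merge lines show systemd-sysext loop-mounting each extension image ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and overlaying their /usr trees onto the host's. A sketch of just the discovery step, under the usual sysext search-path convention (the merge itself is an overlayfs mount, only narrated here):

```python
from pathlib import Path

# systemd-sysext looks for extension images in these directories
# (among others); later merges layer over earlier ones.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    images: list[Path] = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            # glob follows symlinks, so the kubernetes.raw link that
            # Ignition wrote into /etc/extensions is picked up too.
            images.extend(sorted(p.glob("*.raw")))
    return images

for image in discover_extensions():
    print(f"would loop-mount {image} and add its /usr as an overlay lowerdir")
```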
Dec 13 01:53:45.465852 systemd[1]: Starting ensure-sysext.service... Dec 13 01:53:45.473952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:53:45.488909 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:53:45.488944 systemd[1]: Reloading... Dec 13 01:53:45.522892 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:53:45.524905 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:53:45.526715 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:53:45.527244 systemd-tmpfiles[1930]: ACLs are not supported, ignoring. Dec 13 01:53:45.527396 systemd-tmpfiles[1930]: ACLs are not supported, ignoring. Dec 13 01:53:45.533741 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:45.533769 systemd-tmpfiles[1930]: Skipping /boot Dec 13 01:53:45.557258 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:45.557298 systemd-tmpfiles[1930]: Skipping /boot Dec 13 01:53:45.598721 systemd-networkd[1692]: eth0: Gained IPv6LL Dec 13 01:53:45.645631 zram_generator::config[1961]: No configuration found. Dec 13 01:53:45.884883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:46.025698 systemd[1]: Reloading finished in 536 ms. Dec 13 01:53:46.052507 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:53:46.063742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:46.080837 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:46.094971 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:53:46.102165 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:53:46.116881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:53:46.137772 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:53:46.151719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:46.160624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:46.175278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:46.187025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:46.190441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:46.205313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:46.206732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:46.228843 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Dec 13 01:53:46.234056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:46.234417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:46.244390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:46.248620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:46.253016 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:46.255874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:46.283389 systemd[1]: Finished ensure-sysext.service. Dec 13 01:53:46.289441 augenrules[2054]: No rules Dec 13 01:53:46.292770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:46.301912 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:46.304333 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:46.304422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:46.304549 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:53:46.304682 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:53:46.321860 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:53:46.330201 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:46.333475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:53:46.340024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:53:46.345478 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:46.348089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:46.374751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:46.393375 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:53:46.426626 systemd-resolved[2029]: Positive Trust Anchors: Dec 13 01:53:46.427177 systemd-resolved[2029]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:46.427245 systemd-resolved[2029]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:53:46.435367 systemd-resolved[2029]: Defaulting to hostname 'linux'. Dec 13 01:53:46.438741 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:53:46.441029 systemd[1]: Reached target network.target - Network. Dec 13 01:53:46.442778 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 13 01:53:46.444853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:46.447117 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:53:46.449338 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:53:46.453508 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:53:46.456182 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:53:46.458371 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:53:46.460755 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:53:46.463151 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:53:46.463212 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:53:46.465016 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:53:46.468294 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:53:46.473418 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:53:46.477812 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:53:46.482576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:53:46.484701 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:53:46.487631 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:53:46.489971 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:53:46.490060 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:46.490117 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:46.498832 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:53:46.505736 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:53:46.520799 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:53:46.538702 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:53:46.557808 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:53:46.559782 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:53:46.566664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:46.580949 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:53:46.588113 jq[2081]: false Dec 13 01:53:46.591121 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:53:46.605422 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:53:46.625810 dbus-daemon[2080]: [system] SELinux support is enabled Dec 13 01:53:46.634197 dbus-daemon[2080]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1692 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:46.641412 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
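"System is tainted: cgroupsv1" is the flip side of the /etc/flatcar-cgroupv1 flag file Ignition wrote during the files stage: this node is deliberately kept on the legacy cgroup hierarchy. A quick heuristic for telling which hierarchy a running system uses (a convention check, not an official API):

```python
from pathlib import Path

def cgroup_version() -> int:
    # On a unified (v2) hierarchy the cgroup2 root exposes
    # cgroup.controllers; legacy/hybrid v1 roots do not.
    if Path("/sys/fs/cgroup/cgroup.controllers").exists():
        return 2
    return 1

print(f"cgroup v{cgroup_version()}")
```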
Dec 13 01:53:46.649838 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:53:46.663074 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:53:46.680736 extend-filesystems[2083]: Found loop4 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found loop5 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found loop6 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found loop7 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p1 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p2 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p3 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found usr Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p4 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p6 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p7 Dec 13 01:53:46.700048 extend-filesystems[2083]: Found nvme0n1p9 Dec 13 01:53:46.700048 extend-filesystems[2083]: Checking size of /dev/nvme0n1p9 Dec 13 01:53:46.692190 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:53:46.728071 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:53:46.737330 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:53:46.772760 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:53:46.799163 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:53:46.806671 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: ---------------------------------------------------- Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: corporation. Support and training for ntp-4 are Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: available at https://www.nwtime.org/support Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: ---------------------------------------------------- Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: proto: precision = 0.096 usec (-23) Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: basedate set to 2024-11-30 Dec 13 01:53:46.818751 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: gps base set to 2024-12-01 (week 2343) Dec 13 01:53:46.824425 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:53:46.824425 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:53:46.832938 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:53:46.832938 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen normally on 3 eth0 172.31.16.194:123 Dec 13 01:53:46.832938 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen normally on 4 lo [::1]:123 Dec 13 01:53:46.832938 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listen normally on 5 eth0 [fe80::41f:1fff:fe6b:82a9%2]:123 Dec 13 01:53:46.832938 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: Listening on routing socket on fd #22 for interface updates Dec 13 01:53:46.841436 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:46.841436 ntpd[2087]: 13 Dec 01:53:46 ntpd[2087]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:46.841661 extend-filesystems[2083]: Resized partition /dev/nvme0n1p9 Dec 13 01:53:46.856917 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
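In ntpd's banner, "proto: precision = 0.096 usec (-23)" pairs the measured clock-reading granularity with the base-2 exponent ntpd actually stores: 2^-23 s ≈ 0.119 µs is the power of two nearest the 0.096 µs measurement. The rounding, verified:

```python
import math

measured = 0.096e-6                 # seconds, from the log line
exponent = round(math.log2(measured))
print(exponent)                     # -23
print(2.0 ** exponent)              # ~1.19e-07 s, i.e. 2**-23
```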
Dec 13 01:53:46.857094 extend-filesystems[2123]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:53:46.873938 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:53:46.859021 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:53:46.863228 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:53:46.863759 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:53:46.877938 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:53:46.880140 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:53:46.892698 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:53:46.912748 jq[2115]: true Dec 13 01:53:46.949984 coreos-metadata[2078]: Dec 13 01:53:46.949 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:46.963766 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.957 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.963 INFO Fetch successful Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.963 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.971 INFO Fetch successful Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.971 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.972 INFO Fetch successful Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.974 INFO Fetch successful Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.992 INFO Fetch failed with 404: resource not found Dec 13 01:53:46.993128 coreos-metadata[2078]: Dec 13 01:53:46.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:53:46.966970 (ntainerd)[2130]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:53:47.004839 extend-filesystems[2123]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:53:47.004839 extend-filesystems[2123]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:53:47.004839 extend-filesystems[2123]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
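The EXT4 messages record extend-filesystems growing /dev/nvme0n1p9 from 553472 to 1489915 four-KiB blocks, i.e. claiming the rest of the EBS volume on first boot. The sizes those block counts imply:

```python
BLOCK = 4096  # bytes; the kernel message says "(4k) blocks"

for blocks in (553472, 1489915):
    print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
# 553472 blocks = 2.11 GiB
# 1489915 blocks = 5.68 GiB
```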
Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:46.996 INFO Fetch successful Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:46.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:46.997 INFO Fetch successful Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:46.997 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:47.016 INFO Fetch successful Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:47.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:47.023 INFO Fetch successful Dec 13 01:53:47.035850 coreos-metadata[2078]: Dec 13 01:53:47.029 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:53:47.036408 extend-filesystems[2083]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:53:47.008659 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:53:47.009157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:53:47.051015 coreos-metadata[2078]: Dec 13 01:53:47.048 INFO Fetch successful Dec 13 01:53:47.095553 update_engine[2109]: I20241213 01:53:47.094039 2109 main.cc:92] Flatcar Update Engine starting Dec 13 01:53:47.114276 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:53:47.146208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:53:47.150726 tar[2127]: linux-arm64/helm Dec 13 01:53:47.146271 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:53:47.161299 update_engine[2109]: I20241213 01:53:47.157244 2109 update_check_scheduler.cc:74] Next update check in 5m50s Dec 13 01:53:47.161384 jq[2133]: true Dec 13 01:53:47.170854 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:53:47.172851 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:53:47.172886 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:53:47.211164 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:53:47.235955 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:53:47.246821 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:53:47.254951 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:53:47.290467 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:53:47.372586 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:53:47.381764 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
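coreos-metadata walks a fixed list of IMDS paths and, as the ipv6 line shows ("Fetch failed with 404: resource not found"), treats a 404 as "attribute absent" rather than a hard error. A sketch of that tolerant fetch loop, reusing the IMDSv2 token from the handshake sketched earlier (path list abridged from the log):

```python
import urllib.error
import urllib.request

def fetch_optional(path: str, token: str) -> str | None:
    """Return the metadata value, or None when IMDS answers 404."""
    req = urllib.request.Request(
        f"http://169.254.169.254{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as err:
        if err.code == 404:          # e.g. no IPv6 on this instance
            return None
        raise

PATHS = [
    "/2021-01-03/meta-data/instance-id",
    "/2021-01-03/meta-data/local-ipv4",
    "/2021-01-03/meta-data/ipv6",    # 404s here, per the log
]
```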
Dec 13 01:53:47.474028 systemd-logind[2107]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:53:47.474080 systemd-logind[2107]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:53:47.476716 systemd-logind[2107]: New seat seat0. Dec 13 01:53:47.499092 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:53:47.524933 bash[2202]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:47.538088 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:53:47.552887 amazon-ssm-agent[2170]: Initializing new seelog logger Dec 13 01:53:47.553385 amazon-ssm-agent[2170]: New Seelog Logger Creation Complete Dec 13 01:53:47.553385 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.553385 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 processing appconfig overrides Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 processing appconfig overrides Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 processing appconfig overrides Dec 13 01:53:47.560568 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO Proxy environment variables: Dec 13 01:53:47.554015 systemd[1]: Starting sshkeys.service... Dec 13 01:53:47.565017 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.565017 amazon-ssm-agent[2170]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:47.565017 amazon-ssm-agent[2170]: 2024/12/13 01:53:47 processing appconfig overrides Dec 13 01:53:47.581082 containerd[2130]: time="2024-12-13T01:53:47.576014856Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:53:47.656720 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:53:47.657731 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO https_proxy: Dec 13 01:53:47.662826 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:53:47.690933 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2198) Dec 13 01:53:47.762878 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO http_proxy: Dec 13 01:53:47.823986 containerd[2130]: time="2024-12-13T01:53:47.823082318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.842812 containerd[2130]: time="2024-12-13T01:53:47.842732390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:47.842812 containerd[2130]: time="2024-12-13T01:53:47.842804498Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:53:47.842992 containerd[2130]: time="2024-12-13T01:53:47.842842394Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:53:47.843190 containerd[2130]: time="2024-12-13T01:53:47.843148586Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:53:47.843249 containerd[2130]: time="2024-12-13T01:53:47.843196202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.844508 containerd[2130]: time="2024-12-13T01:53:47.843357638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:47.844508 containerd[2130]: time="2024-12-13T01:53:47.843420410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.871301 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO no_proxy: Dec 13 01:53:47.884988 containerd[2130]: time="2024-12-13T01:53:47.884907254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:47.884988 containerd[2130]: time="2024-12-13T01:53:47.884974874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.885189 containerd[2130]: time="2024-12-13T01:53:47.885014258Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:47.885189 containerd[2130]: time="2024-12-13T01:53:47.885039686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.885333 containerd[2130]: time="2024-12-13T01:53:47.885250874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.886588 containerd[2130]: time="2024-12-13T01:53:47.885699782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:47.886588 containerd[2130]: time="2024-12-13T01:53:47.885995966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:47.886588 containerd[2130]: time="2024-12-13T01:53:47.886030010Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:53:47.886588 containerd[2130]: time="2024-12-13T01:53:47.886199294Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:53:47.886588 containerd[2130]: time="2024-12-13T01:53:47.886297430Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.910583342Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.910686170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.910722674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.910766090Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.910801418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:53:47.911579 containerd[2130]: time="2024-12-13T01:53:47.911074094Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:53:47.911966 containerd[2130]: time="2024-12-13T01:53:47.911637230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:53:47.911966 containerd[2130]: time="2024-12-13T01:53:47.911864882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:53:47.911966 containerd[2130]: time="2024-12-13T01:53:47.911900942Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:53:47.911966 containerd[2130]: time="2024-12-13T01:53:47.911931074Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:53:47.912137 containerd[2130]: time="2024-12-13T01:53:47.911974802Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912137 containerd[2130]: time="2024-12-13T01:53:47.912006134Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912137 containerd[2130]: time="2024-12-13T01:53:47.912037514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912137 containerd[2130]: time="2024-12-13T01:53:47.912072134Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912137 containerd[2130]: time="2024-12-13T01:53:47.912104138Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912134630Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912163502Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912196826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912236126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912266750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912295346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912349 containerd[2130]: time="2024-12-13T01:53:47.912326174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912674 containerd[2130]: time="2024-12-13T01:53:47.912368450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912674 containerd[2130]: time="2024-12-13T01:53:47.912400562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912674 containerd[2130]: time="2024-12-13T01:53:47.912428390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912674 containerd[2130]: time="2024-12-13T01:53:47.912458378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.912674 containerd[2130]: time="2024-12-13T01:53:47.912487874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920606018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920714582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920776070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920835962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920880410Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.920955914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.921012386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.921043358Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:53:47.921543 containerd[2130]: time="2024-12-13T01:53:47.921317690Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.921431798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923667326Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923724734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923752034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923812994Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923840882Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:53:47.924116 containerd[2130]: time="2024-12-13T01:53:47.923890814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:53:47.930439 containerd[2130]: time="2024-12-13T01:53:47.930228878Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:53:47.930824 containerd[2130]: time="2024-12-13T01:53:47.930442490Z" level=info msg="Connect containerd service" Dec 13 01:53:47.932547 containerd[2130]: time="2024-12-13T01:53:47.931596650Z" level=info msg="using legacy CRI server" Dec 13 01:53:47.932547 containerd[2130]: time="2024-12-13T01:53:47.931659794Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:53:47.935498 containerd[2130]: time="2024-12-13T01:53:47.933712466Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:53:47.944556 containerd[2130]: time="2024-12-13T01:53:47.943853810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:53:47.948229 containerd[2130]: time="2024-12-13T01:53:47.948102566Z" level=info msg="Start subscribing containerd event" Dec 13 01:53:47.948229 containerd[2130]: time="2024-12-13T01:53:47.948224774Z" level=info msg="Start recovering state" Dec 13 01:53:47.948410 containerd[2130]: time="2024-12-13T01:53:47.948358742Z" level=info msg="Start event monitor" Dec 13 01:53:47.948476 containerd[2130]: time="2024-12-13T01:53:47.948413234Z" level=info msg="Start snapshots syncer" Dec 13 01:53:47.948476 containerd[2130]: time="2024-12-13T01:53:47.948439394Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:53:47.948476 containerd[2130]: time="2024-12-13T01:53:47.948458234Z" level=info msg="Start streaming server" Dec 13 01:53:47.954043 containerd[2130]: time="2024-12-13T01:53:47.951877910Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:53:47.954043 containerd[2130]: time="2024-12-13T01:53:47.952005434Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:53:47.954043 containerd[2130]: time="2024-12-13T01:53:47.952118042Z" level=info msg="containerd successfully booted in 0.426405s" Dec 13 01:53:47.952280 systemd[1]: Started containerd.service - containerd container runtime. 
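containerd's CRI plugin logs "no network config found in /etc/cni/net.d" because nothing has populated the NetworkPluginConfDir from the config dump above yet; on a node like this, kubelet's CNI setup normally fills it in later. A hedged sketch of the kind of file that would satisfy the loader (the network name "mynet" and the 10.88.0.0/16 subnet are illustrative choices, not values from this host):

    # Write a minimal bridge .conflist into the CRI plugin's conf dir.
    # Requires root; purely illustrative.
    import json, pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "mynet",
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True,
             "ipMasq": True,
             "ipam": {"type": "host-local",
                      "ranges": [[{"subnet": "10.88.0.0/16"}]]}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    pathlib.Path("/etc/cni/net.d/10-mynet.conflist").write_text(
        json.dumps(conf, indent=2))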
Dec 13 01:53:47.978532 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:53:48.074828 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:53:48.116123 locksmithd[2171]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:53:48.132730 coreos-metadata[2221]: Dec 13 01:53:48.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:48.137090 coreos-metadata[2221]: Dec 13 01:53:48.136 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:53:48.137341 coreos-metadata[2221]: Dec 13 01:53:48.137 INFO Fetch successful Dec 13 01:53:48.137438 coreos-metadata[2221]: Dec 13 01:53:48.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:53:48.142313 coreos-metadata[2221]: Dec 13 01:53:48.140 INFO Fetch successful Dec 13 01:53:48.146967 unknown[2221]: wrote ssh authorized keys file for user: core Dec 13 01:53:48.176550 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO Agent will take identity from EC2 Dec 13 01:53:48.207012 update-ssh-keys[2288]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:48.211137 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:53:48.220263 systemd[1]: Finished sshkeys.service. Dec 13 01:53:48.277415 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:48.299474 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:53:48.299745 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:53:48.308426 dbus-daemon[2080]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2165 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:48.340028 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:53:48.384250 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:48.388629 polkitd[2307]: Started polkitd version 121 Dec 13 01:53:48.441599 polkitd[2307]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:53:48.441750 polkitd[2307]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:53:48.451233 polkitd[2307]: Finished loading, compiling and executing 2 rules Dec 13 01:53:48.452960 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:53:48.454540 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:53:48.461595 polkitd[2307]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:53:48.483955 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:48.555794 systemd-hostnamed[2165]: Hostname set to (transient) Dec 13 01:53:48.557551 systemd-resolved[2029]: System hostname changed to 'ip-172-31-16-194'. 
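coreos-metadata[2221] above fetches the instance's public key from IMDS and then "wrote ssh authorized keys file for user: core". A sketch of what that step amounts to (not the agent's actual code): append the key with the permissions sshd insists on.

    # Install a public key for a user; sshd requires ~/.ssh to be 0700
    # and authorized_keys to be 0600 (with matching ownership).
    import os, pathlib

    def install_key(home: str, pubkey: str) -> None:
        ssh_dir = pathlib.Path(home, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        with open(auth, "a") as f:
            f.write(pubkey.rstrip() + "\n")
        os.chmod(auth, 0o600)

    install_key("/home/core", "ssh-ed25519 AAAA... core@example")  # placeholder key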
Dec 13 01:53:48.585913 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:53:48.684082 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:53:48.784113 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:53:48.886239 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:53:48.986694 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [Registrar] Starting registrar module Dec 13 01:53:49.093543 amazon-ssm-agent[2170]: 2024-12-13 01:53:47 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:53:49.096881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:49.117941 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:49.421886 tar[2127]: linux-arm64/LICENSE Dec 13 01:53:49.421886 tar[2127]: linux-arm64/README.md Dec 13 01:53:49.476293 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:53:49.604439 amazon-ssm-agent[2170]: 2024-12-13 01:53:49 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:53:49.652185 amazon-ssm-agent[2170]: 2024-12-13 01:53:49 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:53:49.652185 amazon-ssm-agent[2170]: 2024-12-13 01:53:49 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:53:49.652346 amazon-ssm-agent[2170]: 2024-12-13 01:53:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:53:49.704839 amazon-ssm-agent[2170]: 2024-12-13 01:53:49 INFO [CredentialRefresher] Next credential rotation will be in 32.35832604403333 minutes Dec 13 01:53:50.090631 kubelet[2351]: E1213 01:53:50.090360 2351 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:50.095778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:50.096139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:50.617751 sshd_keygen[2129]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:53:50.664153 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:53:50.680404 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:53:50.685658 amazon-ssm-agent[2170]: 2024-12-13 01:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:53:50.703980 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:53:50.704505 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:53:50.720211 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:53:50.746233 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:53:50.760068 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:53:50.771093 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
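The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected at this point in boot: kubeadm writes that file during init/join, and systemd keeps restarting the unit until it appears (the restart counter climbs later in this log). A hedged sketch of the smallest file that would get past the config loader; real clusters should let kubeadm generate it:

    # Write a bare KubeletConfiguration so the config loader succeeds.
    # All other settings fall back to defaults; illustrative only.
    import pathlib

    minimal = ("apiVersion: kubelet.config.k8s.io/v1beta1\n"
               "kind: KubeletConfiguration\n")
    pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/var/lib/kubelet/config.yaml").write_text(minimal)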
Dec 13 01:53:50.775060 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:53:50.777702 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:53:50.779921 systemd[1]: Startup finished in 11.001s (kernel) + 10.275s (userspace) = 21.276s. Dec 13 01:53:50.787608 amazon-ssm-agent[2170]: 2024-12-13 01:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2376) started Dec 13 01:53:50.888963 amazon-ssm-agent[2170]: 2024-12-13 01:53:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:53:53.437574 systemd-resolved[2029]: Clock change detected. Flushing caches. Dec 13 01:53:54.483833 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:53:54.492108 systemd[1]: Started sshd@0-172.31.16.194:22-139.178.68.195:57946.service - OpenSSH per-connection server daemon (139.178.68.195:57946). Dec 13 01:53:54.668594 sshd[2399]: Accepted publickey for core from 139.178.68.195 port 57946 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:54.671902 sshd[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:54.686806 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:53:54.694069 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:53:54.698932 systemd-logind[2107]: New session 1 of user core. Dec 13 01:53:54.726108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:53:54.741164 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:53:54.748370 (systemd)[2405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:54.956630 systemd[2405]: Queued start job for default target default.target. Dec 13 01:53:54.957293 systemd[2405]: Created slice app.slice - User Application Slice. Dec 13 01:53:54.957346 systemd[2405]: Reached target paths.target - Paths. Dec 13 01:53:54.957378 systemd[2405]: Reached target timers.target - Timers. Dec 13 01:53:54.963784 systemd[2405]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:53:54.991292 systemd[2405]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:53:54.991409 systemd[2405]: Reached target sockets.target - Sockets. Dec 13 01:53:54.991440 systemd[2405]: Reached target basic.target - Basic System. Dec 13 01:53:54.991523 systemd[2405]: Reached target default.target - Main User Target. Dec 13 01:53:54.991584 systemd[2405]: Startup finished in 231ms. Dec 13 01:53:54.992158 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:53:55.001058 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:53:55.150444 systemd[1]: Started sshd@1-172.31.16.194:22-139.178.68.195:57956.service - OpenSSH per-connection server daemon (139.178.68.195:57956). Dec 13 01:53:55.330300 sshd[2417]: Accepted publickey for core from 139.178.68.195 port 57956 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:55.332908 sshd[2417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:55.341544 systemd-logind[2107]: New session 2 of user core. Dec 13 01:53:55.351219 systemd[1]: Started session-2.scope - Session 2 of User core. 
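Each "Accepted publickey ... New session N" pair above ends with a logind session scope. Those sessions can be listed back out of logind directly; a small sketch shelling out to loginctl (assumes systemd-logind is running, as on this host):

    # List active logind sessions; --no-legend drops the header row.
    import subprocess

    out = subprocess.run(["loginctl", "list-sessions", "--no-legend"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)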
Dec 13 01:53:55.480933 sshd[2417]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:55.487906 systemd-logind[2107]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:53:55.488147 systemd[1]: sshd@1-172.31.16.194:22-139.178.68.195:57956.service: Deactivated successfully. Dec 13 01:53:55.493337 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:53:55.494808 systemd-logind[2107]: Removed session 2. Dec 13 01:53:55.513138 systemd[1]: Started sshd@2-172.31.16.194:22-139.178.68.195:57970.service - OpenSSH per-connection server daemon (139.178.68.195:57970). Dec 13 01:53:55.679818 sshd[2425]: Accepted publickey for core from 139.178.68.195 port 57970 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:55.682385 sshd[2425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:55.690762 systemd-logind[2107]: New session 3 of user core. Dec 13 01:53:55.694200 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:53:55.814126 sshd[2425]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:55.821016 systemd[1]: sshd@2-172.31.16.194:22-139.178.68.195:57970.service: Deactivated successfully. Dec 13 01:53:55.821284 systemd-logind[2107]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:53:55.828175 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:53:55.829934 systemd-logind[2107]: Removed session 3. Dec 13 01:53:55.842049 systemd[1]: Started sshd@3-172.31.16.194:22-139.178.68.195:57982.service - OpenSSH per-connection server daemon (139.178.68.195:57982). Dec 13 01:53:56.024414 sshd[2433]: Accepted publickey for core from 139.178.68.195 port 57982 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:56.026419 sshd[2433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:56.033867 systemd-logind[2107]: New session 4 of user core. Dec 13 01:53:56.043036 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:53:56.170752 sshd[2433]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:56.175206 systemd[1]: sshd@3-172.31.16.194:22-139.178.68.195:57982.service: Deactivated successfully. Dec 13 01:53:56.182352 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:56.184539 systemd-logind[2107]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:56.186798 systemd-logind[2107]: Removed session 4. Dec 13 01:53:56.200079 systemd[1]: Started sshd@4-172.31.16.194:22-139.178.68.195:41510.service - OpenSSH per-connection server daemon (139.178.68.195:41510). Dec 13 01:53:56.372511 sshd[2441]: Accepted publickey for core from 139.178.68.195 port 41510 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:56.375048 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:56.382696 systemd-logind[2107]: New session 5 of user core. Dec 13 01:53:56.390062 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 01:53:56.507385 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:53:56.508143 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:56.523137 sudo[2445]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:56.547008 sshd[2441]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:56.554403 systemd[1]: sshd@4-172.31.16.194:22-139.178.68.195:41510.service: Deactivated successfully. Dec 13 01:53:56.559209 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:56.560757 systemd-logind[2107]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:56.562843 systemd-logind[2107]: Removed session 5. Dec 13 01:53:56.576091 systemd[1]: Started sshd@5-172.31.16.194:22-139.178.68.195:41526.service - OpenSSH per-connection server daemon (139.178.68.195:41526). Dec 13 01:53:56.754681 sshd[2450]: Accepted publickey for core from 139.178.68.195 port 41526 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:56.757495 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:56.765970 systemd-logind[2107]: New session 6 of user core. Dec 13 01:53:56.773080 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:53:56.880189 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:53:56.880869 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:56.886677 sudo[2455]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:56.896697 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:53:56.897316 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:56.922064 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:56.925980 auditctl[2458]: No rules Dec 13 01:53:56.926971 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:53:56.927459 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:56.939667 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:56.982957 augenrules[2477]: No rules Dec 13 01:53:56.985157 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:56.988497 sudo[2454]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:57.013081 sshd[2450]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:57.019327 systemd-logind[2107]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:57.023420 systemd[1]: sshd@5-172.31.16.194:22-139.178.68.195:41526.service: Deactivated successfully. Dec 13 01:53:57.028658 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:57.030691 systemd-logind[2107]: Removed session 6. Dec 13 01:53:57.045127 systemd[1]: Started sshd@6-172.31.16.194:22-139.178.68.195:41538.service - OpenSSH per-connection server daemon (139.178.68.195:41538). Dec 13 01:53:57.223584 sshd[2486]: Accepted publickey for core from 139.178.68.195 port 41538 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:57.226163 sshd[2486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:57.233551 systemd-logind[2107]: New session 7 of user core. 
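The sudo sequence above deletes two rule files, stops audit-rules.service (auditctl reports "No rules" once the loaded set is flushed), and restarts it so augenrules recompiles whatever remains under /etc/audit/rules.d. The same cycle by hand, as a hedged subprocess sketch (requires root and the audit userspace tools):

    # Flush loaded audit rules, rebuild from /etc/audit/rules.d, then list.
    import subprocess

    subprocess.run(["auditctl", "-D"], check=True)        # delete loaded rules
    subprocess.run(["augenrules", "--load"], check=True)  # recompile rules.d
    subprocess.run(["auditctl", "-l"], check=True)        # prints "No rules" here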
Dec 13 01:53:57.245059 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:53:57.350288 sudo[2490]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:57.350949 sudo[2490]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:57.769030 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:53:57.769355 (dockerd)[2505]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:53:58.123871 dockerd[2505]: time="2024-12-13T01:53:58.123777619Z" level=info msg="Starting up" Dec 13 01:53:58.788206 dockerd[2505]: time="2024-12-13T01:53:58.788147098Z" level=info msg="Loading containers: start." Dec 13 01:53:58.945813 kernel: Initializing XFRM netlink socket Dec 13 01:53:58.977641 (udev-worker)[2528]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:59.060088 systemd-networkd[1692]: docker0: Link UP Dec 13 01:53:59.086968 dockerd[2505]: time="2024-12-13T01:53:59.086898008Z" level=info msg="Loading containers: done." Dec 13 01:53:59.109427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1911675021-merged.mount: Deactivated successfully. Dec 13 01:53:59.115948 dockerd[2505]: time="2024-12-13T01:53:59.115851356Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:53:59.116168 dockerd[2505]: time="2024-12-13T01:53:59.116035916Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:53:59.116258 dockerd[2505]: time="2024-12-13T01:53:59.116224820Z" level=info msg="Daemon has completed initialization" Dec 13 01:53:59.169901 dockerd[2505]: time="2024-12-13T01:53:59.169742960Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:53:59.170744 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:53:59.844034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:59.855111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:00.354137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:00.378932 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:00.393137 containerd[2130]: time="2024-12-13T01:54:00.392591854Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:54:00.496853 kubelet[2661]: E1213 01:54:00.496786 2661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:00.504489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:00.505054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:54:01.052713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961938197.mount: Deactivated successfully. 
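dockerd warns above that overlay2 will not use native diff because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; the driver still works, just with a slower diff path when building images. Confirming the active storage driver from the CLI, as a small sketch (assumes the docker client is installed):

    # Query the daemon's storage driver via the CLI's Go-template output.
    import subprocess

    driver = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True).stdout.strip()
    print("storage driver:", driver)  # "overlay2" on this host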
Dec 13 01:54:02.715625 containerd[2130]: time="2024-12-13T01:54:02.713719406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:02.717327 containerd[2130]: time="2024-12-13T01:54:02.717284414Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:54:02.719944 containerd[2130]: time="2024-12-13T01:54:02.719897966Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:02.725058 containerd[2130]: time="2024-12-13T01:54:02.725007098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:02.727388 containerd[2130]: time="2024-12-13T01:54:02.727323374Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.334621624s" Dec 13 01:54:02.727507 containerd[2130]: time="2024-12-13T01:54:02.727388942Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:54:02.765566 containerd[2130]: time="2024-12-13T01:54:02.765512546Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:54:04.487662 containerd[2130]: time="2024-12-13T01:54:04.487293350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.489500 containerd[2130]: time="2024-12-13T01:54:04.489431726Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:54:04.490843 containerd[2130]: time="2024-12-13T01:54:04.490791590Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.504872 containerd[2130]: time="2024-12-13T01:54:04.504341558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.508834 containerd[2130]: time="2024-12-13T01:54:04.508775150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.743199952s" Dec 13 01:54:04.508992 containerd[2130]: time="2024-12-13T01:54:04.508837670Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 
01:54:04.547844 containerd[2130]: time="2024-12-13T01:54:04.547787931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:54:05.686253 containerd[2130]: time="2024-12-13T01:54:05.686199616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.689008 containerd[2130]: time="2024-12-13T01:54:05.688938952Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:54:05.691349 containerd[2130]: time="2024-12-13T01:54:05.691277776Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.697295 containerd[2130]: time="2024-12-13T01:54:05.697203076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.700227 containerd[2130]: time="2024-12-13T01:54:05.700051372Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.152199277s" Dec 13 01:54:05.700227 containerd[2130]: time="2024-12-13T01:54:05.700106884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:54:05.736575 containerd[2130]: time="2024-12-13T01:54:05.736273661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:54:07.056162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552537043.mount: Deactivated successfully. 
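The "Pulled image ... in ..." lines report both a size and a wall-clock duration, which gives a rough pull throughput for this boot. A back-of-envelope sketch using the figures logged above (the sizes are the repo-digest sizes containerd prints, not bytes on the wire):

    # Throughput from the sizes/durations in the pull messages above.
    pulls = {
        "kube-apiserver:v1.29.12":          (32198050, 2.334621624),
        "kube-controller-manager:v1.29.12": (30783618, 1.743199952),
        "kube-scheduler:v1.29.12":          (17167979, 1.152199277),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
    # roughly 14-18 MB/s across these three pulls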
Dec 13 01:54:07.589331 containerd[2130]: time="2024-12-13T01:54:07.589239354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.591126 containerd[2130]: time="2024-12-13T01:54:07.591067374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:54:07.592550 containerd[2130]: time="2024-12-13T01:54:07.592474182Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.596853 containerd[2130]: time="2024-12-13T01:54:07.596796462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:07.598704 containerd[2130]: time="2024-12-13T01:54:07.598647978Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.862309901s" Dec 13 01:54:07.598811 containerd[2130]: time="2024-12-13T01:54:07.598703994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:54:07.636708 containerd[2130]: time="2024-12-13T01:54:07.636631230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:54:08.213517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726899182.mount: Deactivated successfully. 
Dec 13 01:54:09.323327 containerd[2130]: time="2024-12-13T01:54:09.322721958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.324941 containerd[2130]: time="2024-12-13T01:54:09.324875442Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:54:09.326154 containerd[2130]: time="2024-12-13T01:54:09.326071398Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.331918 containerd[2130]: time="2024-12-13T01:54:09.331811406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.334479 containerd[2130]: time="2024-12-13T01:54:09.334284258Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.697589956s" Dec 13 01:54:09.334479 containerd[2130]: time="2024-12-13T01:54:09.334343442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:54:09.372350 containerd[2130]: time="2024-12-13T01:54:09.372276499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:54:09.886746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259335144.mount: Deactivated successfully. 
Dec 13 01:54:09.896056 containerd[2130]: time="2024-12-13T01:54:09.895976433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.897715 containerd[2130]: time="2024-12-13T01:54:09.897660705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:54:09.899108 containerd[2130]: time="2024-12-13T01:54:09.899020977Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.903644 containerd[2130]: time="2024-12-13T01:54:09.903523485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:09.905347 containerd[2130]: time="2024-12-13T01:54:09.905149677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 532.81193ms" Dec 13 01:54:09.905347 containerd[2130]: time="2024-12-13T01:54:09.905202789Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:54:09.941810 containerd[2130]: time="2024-12-13T01:54:09.941725905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:54:10.522529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294429907.mount: Deactivated successfully. Dec 13 01:54:10.526437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:54:10.536447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:11.563931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:11.572525 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:11.680293 kubelet[2837]: E1213 01:54:11.680180 2837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:11.685520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:11.686229 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:54:13.978175 containerd[2130]: time="2024-12-13T01:54:13.977726810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:13.980054 containerd[2130]: time="2024-12-13T01:54:13.980000294Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:54:13.981741 containerd[2130]: time="2024-12-13T01:54:13.981650966Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:13.987772 containerd[2130]: time="2024-12-13T01:54:13.987723446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:13.990566 containerd[2130]: time="2024-12-13T01:54:13.990356858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.048569213s" Dec 13 01:54:13.990566 containerd[2130]: time="2024-12-13T01:54:13.990425882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:54:18.214253 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:54:21.375890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:21.389067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:21.437561 systemd[1]: Reloading requested from client PID 2947 ('systemctl') (unit session-7.scope)... Dec 13 01:54:21.437823 systemd[1]: Reloading... Dec 13 01:54:21.638658 zram_generator::config[2993]: No configuration found. Dec 13 01:54:21.889419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:22.046882 systemd[1]: Reloading finished in 608 ms. Dec 13 01:54:22.115800 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:54:22.116022 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:54:22.116865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:22.129435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:22.516977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:22.532286 (kubelet)[3060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:22.611644 kubelet[3060]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:22.611644 kubelet[3060]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:54:22.611644 kubelet[3060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:22.611644 kubelet[3060]: I1213 01:54:22.610757 3060 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:23.773755 kubelet[3060]: I1213 01:54:23.773688 3060 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:23.773755 kubelet[3060]: I1213 01:54:23.773740 3060 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:23.774465 kubelet[3060]: I1213 01:54:23.774117 3060 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:23.805513 kubelet[3060]: I1213 01:54:23.805291 3060 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:23.805955 kubelet[3060]: E1213 01:54:23.805906 3060 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.819467 kubelet[3060]: I1213 01:54:23.819413 3060 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:54:23.820175 kubelet[3060]: I1213 01:54:23.820132 3060 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:23.820472 kubelet[3060]: I1213 01:54:23.820438 3060 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:23.820658 kubelet[3060]: I1213 01:54:23.820485 3060 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:23.820658 kubelet[3060]: I1213 01:54:23.820507 3060 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 01:54:23.820760 kubelet[3060]: I1213 01:54:23.820707 3060 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:23.825446 kubelet[3060]: I1213 01:54:23.825087 3060 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:23.825446 kubelet[3060]: I1213 01:54:23.825135 3060 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:23.825446 kubelet[3060]: I1213 01:54:23.825189 3060 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:23.825446 kubelet[3060]: I1213 01:54:23.825230 3060 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:23.828420 kubelet[3060]: W1213 01:54:23.827805 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.16.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-194&limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.828420 kubelet[3060]: E1213 01:54:23.827898 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-194&limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.828420 kubelet[3060]: W1213 01:54:23.828325 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.16.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.828420 kubelet[3060]: E1213 01:54:23.828385 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.829022 kubelet[3060]: I1213 01:54:23.828992 3060 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:23.829576 kubelet[3060]: I1213 01:54:23.829551 3060 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:23.829814 kubelet[3060]: W1213 01:54:23.829795 3060 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
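Every reflector failure above is the same underlying condition: nothing is listening on the advertised API server endpoint yet, so kubelet's list/watch calls fail fast and retry. A plain TCP probe reproduces it (endpoint taken from the log; run from the same host):

    # Probe the API server port kubelet is retrying against.
    import socket

    try:
        socket.create_connection(("172.31.16.194", 6443), timeout=2).close()
        print("apiserver port open")
    except OSError as exc:
        print("apiserver unreachable:", exc)  # connection refused during bootstrap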
Dec 13 01:54:23.830985 kubelet[3060]: I1213 01:54:23.830950 3060 server.go:1256] "Started kubelet" Dec 13 01:54:23.843682 kubelet[3060]: E1213 01:54:23.843586 3060 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.194:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.194:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-194.181099b613ae0b4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-194,UID:ip-172-31-16-194,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-194,},FirstTimestamp:2024-12-13 01:54:23.830911822 +0000 UTC m=+1.291486675,LastTimestamp:2024-12-13 01:54:23.830911822 +0000 UTC m=+1.291486675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-194,}" Dec 13 01:54:23.845318 kubelet[3060]: I1213 01:54:23.845025 3060 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:23.849796 kubelet[3060]: I1213 01:54:23.849758 3060 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:23.850664 kubelet[3060]: I1213 01:54:23.850402 3060 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:23.852594 kubelet[3060]: I1213 01:54:23.852554 3060 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:23.855384 kubelet[3060]: I1213 01:54:23.855234 3060 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:23.855384 kubelet[3060]: I1213 01:54:23.855376 3060 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:23.856117 kubelet[3060]: W1213 01:54:23.856003 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.16.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.856259 kubelet[3060]: E1213 01:54:23.856128 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.856315 kubelet[3060]: E1213 01:54:23.856291 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-194?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="200ms" Dec 13 01:54:23.859457 kubelet[3060]: I1213 01:54:23.856763 3060 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:23.859457 kubelet[3060]: I1213 01:54:23.858254 3060 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:23.860485 kubelet[3060]: I1213 01:54:23.860440 3060 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:23.860691 kubelet[3060]: I1213 01:54:23.860580 3060 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:23.862673 
kubelet[3060]: I1213 01:54:23.862595 3060 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:23.881082 kubelet[3060]: I1213 01:54:23.881041 3060 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:23.883697 kubelet[3060]: I1213 01:54:23.883663 3060 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:54:23.883845 kubelet[3060]: I1213 01:54:23.883827 3060 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:23.883953 kubelet[3060]: I1213 01:54:23.883935 3060 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:23.884140 kubelet[3060]: E1213 01:54:23.884118 3060 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:23.905726 kubelet[3060]: W1213 01:54:23.905586 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.16.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.906349 kubelet[3060]: E1213 01:54:23.905749 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:23.906517 kubelet[3060]: E1213 01:54:23.906472 3060 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:23.920397 kubelet[3060]: I1213 01:54:23.919974 3060 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:23.920397 kubelet[3060]: I1213 01:54:23.920006 3060 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:23.920397 kubelet[3060]: I1213 01:54:23.920036 3060 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:23.923666 kubelet[3060]: I1213 01:54:23.923534 3060 policy_none.go:49] "None policy: Start" Dec 13 01:54:23.924887 kubelet[3060]: I1213 01:54:23.924822 3060 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:23.924984 kubelet[3060]: I1213 01:54:23.924897 3060 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:23.937649 kubelet[3060]: I1213 01:54:23.937431 3060 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:23.939760 kubelet[3060]: I1213 01:54:23.938018 3060 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:23.945685 kubelet[3060]: E1213 01:54:23.945651 3060 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-194\" not found" Dec 13 01:54:23.955643 kubelet[3060]: I1213 01:54:23.955554 3060 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:23.956208 kubelet[3060]: E1213 01:54:23.956174 3060 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.194:6443/api/v1/nodes\": dial tcp 172.31.16.194:6443: connect: connection refused" node="ip-172-31-16-194" Dec 13 01:54:23.984465 kubelet[3060]: I1213 01:54:23.984362 3060 topology_manager.go:215] "Topology Admit Handler" 
podUID="0c32e3ccf10d9ae849dac138dda778c4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-194" Dec 13 01:54:23.986651 kubelet[3060]: I1213 01:54:23.986427 3060 topology_manager.go:215] "Topology Admit Handler" podUID="882a713603e1267a8d07960c8f2124a1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:23.989640 kubelet[3060]: I1213 01:54:23.988501 3060 topology_manager.go:215] "Topology Admit Handler" podUID="8f4f90bb45ab1baa1b012c21da0953e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-194" Dec 13 01:54:24.055838 kubelet[3060]: I1213 01:54:24.055704 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f4f90bb45ab1baa1b012c21da0953e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-194\" (UID: \"8f4f90bb45ab1baa1b012c21da0953e5\") " pod="kube-system/kube-scheduler-ip-172-31-16-194" Dec 13 01:54:24.055838 kubelet[3060]: I1213 01:54:24.055780 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:24.057131 kubelet[3060]: E1213 01:54:24.057100 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-194?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="400ms" Dec 13 01:54:24.082639 kubelet[3060]: E1213 01:54:24.082567 3060 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.194:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.194:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-194.181099b613ae0b4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-194,UID:ip-172-31-16-194,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-194,},FirstTimestamp:2024-12-13 01:54:23.830911822 +0000 UTC m=+1.291486675,LastTimestamp:2024-12-13 01:54:23.830911822 +0000 UTC m=+1.291486675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-194,}" Dec 13 01:54:24.156166 kubelet[3060]: I1213 01:54:24.156127 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:24.156723 kubelet[3060]: I1213 01:54:24.156692 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:24.156902 kubelet[3060]: I1213 01:54:24.156881 3060 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:24.157055 kubelet[3060]: I1213 01:54:24.157036 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:24.157183 kubelet[3060]: I1213 01:54:24.157165 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:24.157586 kubelet[3060]: I1213 01:54:24.157309 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:24.157586 kubelet[3060]: I1213 01:54:24.157366 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:24.158873 kubelet[3060]: I1213 01:54:24.158831 3060 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:24.159362 kubelet[3060]: E1213 01:54:24.159305 3060 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.194:6443/api/v1/nodes\": dial tcp 172.31.16.194:6443: connect: connection refused" node="ip-172-31-16-194" Dec 13 01:54:24.296417 containerd[2130]: time="2024-12-13T01:54:24.296361657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-194,Uid:0c32e3ccf10d9ae849dac138dda778c4,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:24.300218 containerd[2130]: time="2024-12-13T01:54:24.299835477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-194,Uid:8f4f90bb45ab1baa1b012c21da0953e5,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:24.304240 containerd[2130]: time="2024-12-13T01:54:24.304177917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-194,Uid:882a713603e1267a8d07960c8f2124a1,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:24.458554 kubelet[3060]: E1213 01:54:24.458516 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-194?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="800ms" Dec 13 01:54:24.562103 kubelet[3060]: I1213 01:54:24.562070 3060 
kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:24.562847 kubelet[3060]: E1213 01:54:24.562821 3060 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.194:6443/api/v1/nodes\": dial tcp 172.31.16.194:6443: connect: connection refused" node="ip-172-31-16-194" Dec 13 01:54:24.812569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196532979.mount: Deactivated successfully. Dec 13 01:54:24.820016 containerd[2130]: time="2024-12-13T01:54:24.819936431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:24.830777 containerd[2130]: time="2024-12-13T01:54:24.830707391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:24.832770 containerd[2130]: time="2024-12-13T01:54:24.832711787Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:24.833779 containerd[2130]: time="2024-12-13T01:54:24.833727995Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:24.837151 containerd[2130]: time="2024-12-13T01:54:24.836918831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:24.837151 containerd[2130]: time="2024-12-13T01:54:24.836984915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:54:24.838148 containerd[2130]: time="2024-12-13T01:54:24.837656639Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:24.845233 containerd[2130]: time="2024-12-13T01:54:24.845130924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:24.847570 containerd[2130]: time="2024-12-13T01:54:24.847184616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.712703ms" Dec 13 01:54:24.850717 containerd[2130]: time="2024-12-13T01:54:24.850646376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.705779ms" Dec 13 01:54:24.852064 containerd[2130]: time="2024-12-13T01:54:24.851981148Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo 
digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.693947ms" Dec 13 01:54:25.029645 containerd[2130]: time="2024-12-13T01:54:25.027161060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:25.029645 containerd[2130]: time="2024-12-13T01:54:25.027318560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:25.029645 containerd[2130]: time="2024-12-13T01:54:25.027346184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.029645 containerd[2130]: time="2024-12-13T01:54:25.027594656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.031709 containerd[2130]: time="2024-12-13T01:54:25.031515620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:25.032174 containerd[2130]: time="2024-12-13T01:54:25.031756928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:25.032174 containerd[2130]: time="2024-12-13T01:54:25.031802360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.032174 containerd[2130]: time="2024-12-13T01:54:25.031979420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.040997 containerd[2130]: time="2024-12-13T01:54:25.040837520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:25.040997 containerd[2130]: time="2024-12-13T01:54:25.040955240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:25.042828 containerd[2130]: time="2024-12-13T01:54:25.041849804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.043932 containerd[2130]: time="2024-12-13T01:54:25.043680740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:25.172396 containerd[2130]: time="2024-12-13T01:54:25.172255377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-194,Uid:0c32e3ccf10d9ae849dac138dda778c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"86fd874ee7883c967a26318d1814f4bb323a1208271bc4338331b3af05710ed2\"" Dec 13 01:54:25.182882 kubelet[3060]: W1213 01:54:25.182203 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.16.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.182882 kubelet[3060]: E1213 01:54:25.182299 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.184698 containerd[2130]: time="2024-12-13T01:54:25.184320405Z" level=info msg="CreateContainer within sandbox \"86fd874ee7883c967a26318d1814f4bb323a1208271bc4338331b3af05710ed2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:54:25.201516 containerd[2130]: time="2024-12-13T01:54:25.201331233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-194,Uid:882a713603e1267a8d07960c8f2124a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8993ad63b7611756b2a4239d08a4ec7ec6748ae07d625cc97f0bd24f9dc3deb\"" Dec 13 01:54:25.207954 containerd[2130]: time="2024-12-13T01:54:25.207676017Z" level=info msg="CreateContainer within sandbox \"f8993ad63b7611756b2a4239d08a4ec7ec6748ae07d625cc97f0bd24f9dc3deb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:54:25.216828 containerd[2130]: time="2024-12-13T01:54:25.216779109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-194,Uid:8f4f90bb45ab1baa1b012c21da0953e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"feeed4b474efc2f3899231e2694b799bf40881a28bfcd2c5f45e5a2b815af3bd\"" Dec 13 01:54:25.221285 containerd[2130]: time="2024-12-13T01:54:25.221011869Z" level=info msg="CreateContainer within sandbox \"feeed4b474efc2f3899231e2694b799bf40881a28bfcd2c5f45e5a2b815af3bd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:54:25.224982 containerd[2130]: time="2024-12-13T01:54:25.224929221Z" level=info msg="CreateContainer within sandbox \"86fd874ee7883c967a26318d1814f4bb323a1208271bc4338331b3af05710ed2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3da832de5da24e5d313cd6075a61c70573f4b257822adedb64c21bdde94e96e\"" Dec 13 01:54:25.227639 containerd[2130]: time="2024-12-13T01:54:25.226256433Z" level=info msg="StartContainer for \"f3da832de5da24e5d313cd6075a61c70573f4b257822adedb64c21bdde94e96e\"" Dec 13 01:54:25.243139 kubelet[3060]: W1213 01:54:25.243055 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.16.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.243139 kubelet[3060]: E1213 01:54:25.243147 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://172.31.16.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.260407 kubelet[3060]: E1213 01:54:25.260370 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-194?timeout=10s\": dial tcp 172.31.16.194:6443: connect: connection refused" interval="1.6s" Dec 13 01:54:25.264985 containerd[2130]: time="2024-12-13T01:54:25.264906670Z" level=info msg="CreateContainer within sandbox \"f8993ad63b7611756b2a4239d08a4ec7ec6748ae07d625cc97f0bd24f9dc3deb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b75bd81b82b82ea50e354d0f3ef259b7ad29a43be11136008fa371f6cfefaff5\"" Dec 13 01:54:25.271474 containerd[2130]: time="2024-12-13T01:54:25.271399522Z" level=info msg="StartContainer for \"b75bd81b82b82ea50e354d0f3ef259b7ad29a43be11136008fa371f6cfefaff5\"" Dec 13 01:54:25.274872 containerd[2130]: time="2024-12-13T01:54:25.274691338Z" level=info msg="CreateContainer within sandbox \"feeed4b474efc2f3899231e2694b799bf40881a28bfcd2c5f45e5a2b815af3bd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"55c70daf527ec8b01fce84a4b45fc00328a666449161d089f510de0f45b99a5d\"" Dec 13 01:54:25.285696 containerd[2130]: time="2024-12-13T01:54:25.285594898Z" level=info msg="StartContainer for \"55c70daf527ec8b01fce84a4b45fc00328a666449161d089f510de0f45b99a5d\"" Dec 13 01:54:25.290066 kubelet[3060]: W1213 01:54:25.289979 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.16.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.290231 kubelet[3060]: E1213 01:54:25.290076 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.194:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.308504 kubelet[3060]: W1213 01:54:25.308279 3060 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.16.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-194&limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.308655 kubelet[3060]: E1213 01:54:25.308637 3060 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-194&limit=500&resourceVersion=0": dial tcp 172.31.16.194:6443: connect: connection refused Dec 13 01:54:25.366669 kubelet[3060]: I1213 01:54:25.366228 3060 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:25.370510 kubelet[3060]: E1213 01:54:25.369210 3060 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.194:6443/api/v1/nodes\": dial tcp 172.31.16.194:6443: connect: connection refused" node="ip-172-31-16-194" Dec 13 01:54:25.378776 containerd[2130]: time="2024-12-13T01:54:25.378065578Z" level=info msg="StartContainer for \"f3da832de5da24e5d313cd6075a61c70573f4b257822adedb64c21bdde94e96e\" returns successfully" Dec 13 01:54:25.499303 containerd[2130]: 
time="2024-12-13T01:54:25.496664159Z" level=info msg="StartContainer for \"b75bd81b82b82ea50e354d0f3ef259b7ad29a43be11136008fa371f6cfefaff5\" returns successfully" Dec 13 01:54:25.547790 containerd[2130]: time="2024-12-13T01:54:25.546959891Z" level=info msg="StartContainer for \"55c70daf527ec8b01fce84a4b45fc00328a666449161d089f510de0f45b99a5d\" returns successfully" Dec 13 01:54:26.973877 kubelet[3060]: I1213 01:54:26.973821 3060 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:29.120530 kubelet[3060]: I1213 01:54:29.120394 3060 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-194" Dec 13 01:54:29.212934 kubelet[3060]: E1213 01:54:29.212878 3060 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 01:54:29.835566 kubelet[3060]: I1213 01:54:29.835281 3060 apiserver.go:52] "Watching apiserver" Dec 13 01:54:29.856180 kubelet[3060]: I1213 01:54:29.856139 3060 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:32.140858 update_engine[2109]: I20241213 01:54:32.140762 2109 update_attempter.cc:509] Updating boot flags... Dec 13 01:54:32.260431 systemd[1]: Reloading requested from client PID 3349 ('systemctl') (unit session-7.scope)... Dec 13 01:54:32.260459 systemd[1]: Reloading... Dec 13 01:54:32.270048 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3345) Dec 13 01:54:32.546675 zram_generator::config[3467]: No configuration found. Dec 13 01:54:32.695687 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3347) Dec 13 01:54:32.971948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:33.148953 systemd[1]: Reloading finished in 887 ms. Dec 13 01:54:33.293706 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:33.295439 kubelet[3060]: I1213 01:54:33.293709 3060 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:33.324156 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:54:33.326877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:33.340053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:33.910948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:33.922815 (kubelet)[3623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:34.053476 kubelet[3623]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:34.053476 kubelet[3623]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:34.053476 kubelet[3623]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:34.053476 kubelet[3623]: I1213 01:54:34.051707 3623 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:34.065180 kubelet[3623]: I1213 01:54:34.064806 3623 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:34.065180 kubelet[3623]: I1213 01:54:34.064930 3623 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:34.065871 kubelet[3623]: I1213 01:54:34.065837 3623 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:34.071412 kubelet[3623]: I1213 01:54:34.071062 3623 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:54:34.072871 sudo[3635]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:54:34.073515 sudo[3635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:54:34.076909 kubelet[3623]: I1213 01:54:34.076857 3623 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:34.089094 kubelet[3623]: I1213 01:54:34.089044 3623 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:54:34.089961 kubelet[3623]: I1213 01:54:34.089923 3623 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:34.090528 kubelet[3623]: I1213 01:54:34.090251 3623 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:34.090528 kubelet[3623]: I1213 01:54:34.090316 3623 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:34.090528 kubelet[3623]: I1213 01:54:34.090338 3623 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:54:34.090528 kubelet[3623]: I1213 01:54:34.090399 3623 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:34.091664 kubelet[3623]: I1213 
01:54:34.090592 3623 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:34.091664 kubelet[3623]: I1213 01:54:34.091468 3623 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:34.091664 kubelet[3623]: I1213 01:54:34.091530 3623 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:34.093319 kubelet[3623]: I1213 01:54:34.093251 3623 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:34.116968 kubelet[3623]: I1213 01:54:34.116915 3623 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:34.117276 kubelet[3623]: I1213 01:54:34.117242 3623 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:34.120004 kubelet[3623]: I1213 01:54:34.119958 3623 server.go:1256] "Started kubelet" Dec 13 01:54:34.126636 kubelet[3623]: I1213 01:54:34.126346 3623 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:34.139049 kubelet[3623]: I1213 01:54:34.138999 3623 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:34.140436 kubelet[3623]: I1213 01:54:34.140317 3623 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:34.148251 kubelet[3623]: I1213 01:54:34.146365 3623 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:34.148381 kubelet[3623]: I1213 01:54:34.148284 3623 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:34.167051 kubelet[3623]: I1213 01:54:34.156939 3623 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:34.167051 kubelet[3623]: I1213 01:54:34.157663 3623 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:34.167051 kubelet[3623]: I1213 01:54:34.157961 3623 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:34.171111 kubelet[3623]: I1213 01:54:34.171056 3623 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:34.180236 kubelet[3623]: I1213 01:54:34.179046 3623 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:34.180236 kubelet[3623]: I1213 01:54:34.179086 3623 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:34.183039 kubelet[3623]: E1213 01:54:34.182982 3623 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:34.183539 kubelet[3623]: I1213 01:54:34.183304 3623 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:34.192407 kubelet[3623]: I1213 01:54:34.192128 3623 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:54:34.192407 kubelet[3623]: I1213 01:54:34.192176 3623 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:34.192407 kubelet[3623]: I1213 01:54:34.192278 3623 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:34.192666 kubelet[3623]: E1213 01:54:34.192431 3623 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:34.269768 kubelet[3623]: I1213 01:54:34.269719 3623 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-194" Dec 13 01:54:34.285995 kubelet[3623]: I1213 01:54:34.281564 3623 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-194" Dec 13 01:54:34.285995 kubelet[3623]: I1213 01:54:34.282847 3623 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-194" Dec 13 01:54:34.292742 kubelet[3623]: E1213 01:54:34.292708 3623 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:54:34.424900 kubelet[3623]: I1213 01:54:34.424560 3623 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:34.426052 kubelet[3623]: I1213 01:54:34.425858 3623 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:34.426151 kubelet[3623]: I1213 01:54:34.426068 3623 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:34.426349 kubelet[3623]: I1213 01:54:34.426307 3623 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:54:34.426411 kubelet[3623]: I1213 01:54:34.426358 3623 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:54:34.426411 kubelet[3623]: I1213 01:54:34.426377 3623 policy_none.go:49] "None policy: Start" Dec 13 01:54:34.428388 kubelet[3623]: I1213 01:54:34.428311 3623 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:34.428388 kubelet[3623]: I1213 01:54:34.428365 3623 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:34.429680 kubelet[3623]: I1213 01:54:34.428649 3623 state_mem.go:75] "Updated machine memory state" Dec 13 01:54:34.432792 kubelet[3623]: I1213 01:54:34.432145 3623 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:34.434895 kubelet[3623]: I1213 01:54:34.434849 3623 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:34.493505 kubelet[3623]: I1213 01:54:34.493096 3623 topology_manager.go:215] "Topology Admit Handler" podUID="882a713603e1267a8d07960c8f2124a1" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.495420 kubelet[3623]: I1213 01:54:34.495218 3623 topology_manager.go:215] "Topology Admit Handler" podUID="8f4f90bb45ab1baa1b012c21da0953e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-194" Dec 13 01:54:34.495420 kubelet[3623]: I1213 01:54:34.495375 3623 topology_manager.go:215] "Topology Admit Handler" podUID="0c32e3ccf10d9ae849dac138dda778c4" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-194" Dec 13 01:54:34.514434 kubelet[3623]: E1213 01:54:34.514369 3623 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-194\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.515621 kubelet[3623]: E1213 01:54:34.515565 3623 kubelet.go:1921] "Failed creating a mirror pod for" 
err="pods \"kube-scheduler-ip-172-31-16-194\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-194" Dec 13 01:54:34.560447 kubelet[3623]: I1213 01:54:34.560380 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-ca-certs\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:34.560648 kubelet[3623]: I1213 01:54:34.560468 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:34.560648 kubelet[3623]: I1213 01:54:34.560519 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.560648 kubelet[3623]: I1213 01:54:34.560566 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.560648 kubelet[3623]: I1213 01:54:34.560644 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8f4f90bb45ab1baa1b012c21da0953e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-194\" (UID: \"8f4f90bb45ab1baa1b012c21da0953e5\") " pod="kube-system/kube-scheduler-ip-172-31-16-194" Dec 13 01:54:34.560851 kubelet[3623]: I1213 01:54:34.560694 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0c32e3ccf10d9ae849dac138dda778c4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-194\" (UID: \"0c32e3ccf10d9ae849dac138dda778c4\") " pod="kube-system/kube-apiserver-ip-172-31-16-194" Dec 13 01:54:34.560851 kubelet[3623]: I1213 01:54:34.560739 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.561882 kubelet[3623]: I1213 01:54:34.561355 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.562224 kubelet[3623]: I1213 01:54:34.562188 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/882a713603e1267a8d07960c8f2124a1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-194\" (UID: \"882a713603e1267a8d07960c8f2124a1\") " pod="kube-system/kube-controller-manager-ip-172-31-16-194" Dec 13 01:54:34.993946 sudo[3635]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:35.094510 kubelet[3623]: I1213 01:54:35.094462 3623 apiserver.go:52] "Watching apiserver" Dec 13 01:54:35.158903 kubelet[3623]: I1213 01:54:35.158831 3623 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:35.253920 kubelet[3623]: I1213 01:54:35.253718 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-194" podStartSLOduration=3.253650955 podStartE2EDuration="3.253650955s" podCreationTimestamp="2024-12-13 01:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:35.238165639 +0000 UTC m=+1.304627047" watchObservedRunningTime="2024-12-13 01:54:35.253650955 +0000 UTC m=+1.320112351" Dec 13 01:54:35.272636 kubelet[3623]: I1213 01:54:35.271687 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-194" podStartSLOduration=4.271626583 podStartE2EDuration="4.271626583s" podCreationTimestamp="2024-12-13 01:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:35.255906103 +0000 UTC m=+1.322367511" watchObservedRunningTime="2024-12-13 01:54:35.271626583 +0000 UTC m=+1.338088003" Dec 13 01:54:35.287621 kubelet[3623]: I1213 01:54:35.286471 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-194" podStartSLOduration=1.286386007 podStartE2EDuration="1.286386007s" podCreationTimestamp="2024-12-13 01:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:35.271800199 +0000 UTC m=+1.338261619" watchObservedRunningTime="2024-12-13 01:54:35.286386007 +0000 UTC m=+1.352847391" Dec 13 01:54:37.433781 sudo[2490]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:37.458034 sshd[2486]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:37.465902 systemd-logind[2107]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:54:37.467061 systemd[1]: sshd@6-172.31.16.194:22-139.178.68.195:41538.service: Deactivated successfully. Dec 13 01:54:37.474860 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:54:37.477987 systemd-logind[2107]: Removed session 7. Dec 13 01:54:45.526663 kubelet[3623]: I1213 01:54:45.525352 3623 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:54:45.530019 containerd[2130]: time="2024-12-13T01:54:45.528318450Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:54:45.533472 kubelet[3623]: I1213 01:54:45.529989 3623 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:54:45.721097 kubelet[3623]: I1213 01:54:45.721039 3623 topology_manager.go:215] "Topology Admit Handler" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" podNamespace="kube-system" podName="cilium-fxqtr" Dec 13 01:54:45.744416 kubelet[3623]: W1213 01:54:45.743044 3623 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744416 kubelet[3623]: E1213 01:54:45.743099 3623 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744416 kubelet[3623]: W1213 01:54:45.743186 3623 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744416 kubelet[3623]: E1213 01:54:45.743210 3623 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744416 kubelet[3623]: W1213 01:54:45.743249 3623 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744868 kubelet[3623]: W1213 01:54:45.743268 3623 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744868 kubelet[3623]: E1213 01:54:45.743279 3623 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.744868 kubelet[3623]: E1213 01:54:45.743291 3623 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 
01:54:45.762623 kubelet[3623]: I1213 01:54:45.760997 3623 topology_manager.go:215] "Topology Admit Handler" podUID="f0e334bf-8b61-4f98-9b87-f3feb230c7e9" podNamespace="kube-system" podName="kube-proxy-pj6f9" Dec 13 01:54:45.777044 kubelet[3623]: W1213 01:54:45.776921 3623 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.777295 kubelet[3623]: E1213 01:54:45.777254 3623 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-16-194" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-16-194' and this object Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832109 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-etc-cni-netd\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832180 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-xtables-lock\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832227 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-hubble-tls\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832275 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-bpf-maps\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832325 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-hostproc\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.832965 kubelet[3623]: I1213 01:54:45.832369 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-net\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833390 kubelet[3623]: I1213 01:54:45.832413 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjg6\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6\") pod \"cilium-fxqtr\" (UID: 
\"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833390 kubelet[3623]: I1213 01:54:45.832459 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-run\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833390 kubelet[3623]: I1213 01:54:45.832504 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-lib-modules\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833390 kubelet[3623]: I1213 01:54:45.832549 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-kernel\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833390 kubelet[3623]: I1213 01:54:45.832592 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833697 kubelet[3623]: I1213 01:54:45.832662 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-cgroup\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833697 kubelet[3623]: I1213 01:54:45.832708 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cni-path\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.833697 kubelet[3623]: I1213 01:54:45.832753 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets\") pod \"cilium-fxqtr\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " pod="kube-system/cilium-fxqtr" Dec 13 01:54:45.896007 kubelet[3623]: I1213 01:54:45.895939 3623 topology_manager.go:215] "Topology Admit Handler" podUID="66f41f96-c2f9-4a02-b5d0-d7bc38745efa" podNamespace="kube-system" podName="cilium-operator-5cc964979-96bgf" Dec 13 01:54:45.935637 kubelet[3623]: I1213 01:54:45.933585 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0e334bf-8b61-4f98-9b87-f3feb230c7e9-kube-proxy\") pod \"kube-proxy-pj6f9\" (UID: \"f0e334bf-8b61-4f98-9b87-f3feb230c7e9\") " pod="kube-system/kube-proxy-pj6f9" Dec 13 01:54:45.935637 kubelet[3623]: I1213 01:54:45.933682 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f0e334bf-8b61-4f98-9b87-f3feb230c7e9-xtables-lock\") pod \"kube-proxy-pj6f9\" (UID: \"f0e334bf-8b61-4f98-9b87-f3feb230c7e9\") " pod="kube-system/kube-proxy-pj6f9" Dec 13 01:54:45.935637 kubelet[3623]: I1213 01:54:45.933769 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0e334bf-8b61-4f98-9b87-f3feb230c7e9-lib-modules\") pod \"kube-proxy-pj6f9\" (UID: \"f0e334bf-8b61-4f98-9b87-f3feb230c7e9\") " pod="kube-system/kube-proxy-pj6f9" Dec 13 01:54:45.935637 kubelet[3623]: I1213 01:54:45.934028 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqztm\" (UniqueName: \"kubernetes.io/projected/f0e334bf-8b61-4f98-9b87-f3feb230c7e9-kube-api-access-qqztm\") pod \"kube-proxy-pj6f9\" (UID: \"f0e334bf-8b61-4f98-9b87-f3feb230c7e9\") " pod="kube-system/kube-proxy-pj6f9" Dec 13 01:54:46.035660 kubelet[3623]: I1213 01:54:46.034876 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22kz\" (UniqueName: \"kubernetes.io/projected/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-kube-api-access-f22kz\") pod \"cilium-operator-5cc964979-96bgf\" (UID: \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\") " pod="kube-system/cilium-operator-5cc964979-96bgf" Dec 13 01:54:46.035660 kubelet[3623]: I1213 01:54:46.034972 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path\") pod \"cilium-operator-5cc964979-96bgf\" (UID: \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\") " pod="kube-system/cilium-operator-5cc964979-96bgf" Dec 13 01:54:46.934664 kubelet[3623]: E1213 01:54:46.934576 3623 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:46.935433 kubelet[3623]: E1213 01:54:46.934758 3623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path podName:6f2564b3-56fc-41f6-a120-0d6592df6011 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:47.434721805 +0000 UTC m=+13.501183201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path") pod "cilium-fxqtr" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:46.937049 kubelet[3623]: E1213 01:54:46.936921 3623 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 01:54:46.937228 kubelet[3623]: E1213 01:54:46.937056 3623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets podName:6f2564b3-56fc-41f6-a120-0d6592df6011 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:47.437025793 +0000 UTC m=+13.503487189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets") pod "cilium-fxqtr" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011") : failed to sync secret cache: timed out waiting for the condition Dec 13 01:54:46.950387 kubelet[3623]: E1213 01:54:46.949882 3623 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:46.950387 kubelet[3623]: E1213 01:54:46.949951 3623 projected.go:200] Error preparing data for projected volume kube-api-access-5mjg6 for pod kube-system/cilium-fxqtr: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:46.950387 kubelet[3623]: E1213 01:54:46.950053 3623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6 podName:6f2564b3-56fc-41f6-a120-0d6592df6011 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:47.450025057 +0000 UTC m=+13.516486453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5mjg6" (UniqueName: "kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6") pod "cilium-fxqtr" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:47.136680 kubelet[3623]: E1213 01:54:47.136536 3623 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:47.136851 kubelet[3623]: E1213 01:54:47.136695 3623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path podName:66f41f96-c2f9-4a02-b5d0-d7bc38745efa nodeName:}" failed. No retries permitted until 2024-12-13 01:54:47.636665138 +0000 UTC m=+13.703126534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path") pod "cilium-operator-5cc964979-96bgf" (UID: "66f41f96-c2f9-4a02-b5d0-d7bc38745efa") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:47.275799 containerd[2130]: time="2024-12-13T01:54:47.275579119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj6f9,Uid:f0e334bf-8b61-4f98-9b87-f3feb230c7e9,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:47.332143 containerd[2130]: time="2024-12-13T01:54:47.331733611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:47.332143 containerd[2130]: time="2024-12-13T01:54:47.331850167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:47.332143 containerd[2130]: time="2024-12-13T01:54:47.331888975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.333178 containerd[2130]: time="2024-12-13T01:54:47.332351239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.404918 containerd[2130]: time="2024-12-13T01:54:47.404839016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pj6f9,Uid:f0e334bf-8b61-4f98-9b87-f3feb230c7e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0890fa053f4691bf125cde122d6701b5b3f73712433620a15635f6eb5c4e13a7\"" Dec 13 01:54:47.412588 containerd[2130]: time="2024-12-13T01:54:47.412429844Z" level=info msg="CreateContainer within sandbox \"0890fa053f4691bf125cde122d6701b5b3f73712433620a15635f6eb5c4e13a7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:54:47.439566 containerd[2130]: time="2024-12-13T01:54:47.439505408Z" level=info msg="CreateContainer within sandbox \"0890fa053f4691bf125cde122d6701b5b3f73712433620a15635f6eb5c4e13a7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ea12533bafab1c9de28f30ac4f7e31ef37b280757ddb1509241d3300f581690\"" Dec 13 01:54:47.442403 containerd[2130]: time="2024-12-13T01:54:47.440783528Z" level=info msg="StartContainer for \"4ea12533bafab1c9de28f30ac4f7e31ef37b280757ddb1509241d3300f581690\"" Dec 13 01:54:47.569270 containerd[2130]: time="2024-12-13T01:54:47.569011940Z" level=info msg="StartContainer for \"4ea12533bafab1c9de28f30ac4f7e31ef37b280757ddb1509241d3300f581690\" returns successfully" Dec 13 01:54:47.712970 containerd[2130]: time="2024-12-13T01:54:47.712832085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-96bgf,Uid:66f41f96-c2f9-4a02-b5d0-d7bc38745efa,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:47.761530 containerd[2130]: time="2024-12-13T01:54:47.761057745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:47.761530 containerd[2130]: time="2024-12-13T01:54:47.761172693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:47.761530 containerd[2130]: time="2024-12-13T01:54:47.761211033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.761916 containerd[2130]: time="2024-12-13T01:54:47.761756805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.836891 containerd[2130]: time="2024-12-13T01:54:47.836047762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxqtr,Uid:6f2564b3-56fc-41f6-a120-0d6592df6011,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:47.906229 containerd[2130]: time="2024-12-13T01:54:47.906154198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-96bgf,Uid:66f41f96-c2f9-4a02-b5d0-d7bc38745efa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\"" Dec 13 01:54:47.912979 containerd[2130]: time="2024-12-13T01:54:47.912717046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:54:47.938103 containerd[2130]: time="2024-12-13T01:54:47.937720990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:47.939263 containerd[2130]: time="2024-12-13T01:54:47.938820298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:47.939263 containerd[2130]: time="2024-12-13T01:54:47.938910154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.940671 containerd[2130]: time="2024-12-13T01:54:47.939337918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:47.999048 systemd[1]: run-containerd-runc-k8s.io-fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e-runc.tpw8EL.mount: Deactivated successfully. Dec 13 01:54:48.048631 containerd[2130]: time="2024-12-13T01:54:48.048555559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxqtr,Uid:6f2564b3-56fc-41f6-a120-0d6592df6011,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\"" Dec 13 01:54:50.307054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542770858.mount: Deactivated successfully. Dec 13 01:54:51.224548 containerd[2130]: time="2024-12-13T01:54:51.224462663Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:51.226889 containerd[2130]: time="2024-12-13T01:54:51.226825583Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137670" Dec 13 01:54:51.228931 containerd[2130]: time="2024-12-13T01:54:51.228850319Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:51.232546 containerd[2130]: time="2024-12-13T01:54:51.232317335Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.319525613s" Dec 13 01:54:51.232546 containerd[2130]: time="2024-12-13T01:54:51.232397639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:54:51.234588 containerd[2130]: time="2024-12-13T01:54:51.234419843Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:54:51.238473 containerd[2130]: time="2024-12-13T01:54:51.238407515Z" level=info msg="CreateContainer within sandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:54:51.261437 containerd[2130]: time="2024-12-13T01:54:51.261295607Z" level=info msg="CreateContainer within sandbox 
\"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\"" Dec 13 01:54:51.263825 containerd[2130]: time="2024-12-13T01:54:51.262405127Z" level=info msg="StartContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\"" Dec 13 01:54:51.388819 containerd[2130]: time="2024-12-13T01:54:51.388586843Z" level=info msg="StartContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" returns successfully" Dec 13 01:54:52.367962 kubelet[3623]: I1213 01:54:52.367898 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pj6f9" podStartSLOduration=7.367835316 podStartE2EDuration="7.367835316s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:48.32359268 +0000 UTC m=+14.390054100" watchObservedRunningTime="2024-12-13 01:54:52.367835316 +0000 UTC m=+18.434296748" Dec 13 01:54:58.419081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678327144.mount: Deactivated successfully. Dec 13 01:55:01.243652 containerd[2130]: time="2024-12-13T01:55:01.241786628Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:01.245123 containerd[2130]: time="2024-12-13T01:55:01.245036288Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650874" Dec 13 01:55:01.246940 containerd[2130]: time="2024-12-13T01:55:01.246853508Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:01.252925 containerd[2130]: time="2024-12-13T01:55:01.252854972Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.018314265s" Dec 13 01:55:01.253194 containerd[2130]: time="2024-12-13T01:55:01.252930200Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:55:01.257982 containerd[2130]: time="2024-12-13T01:55:01.257732084Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:55:01.280101 containerd[2130]: time="2024-12-13T01:55:01.279890564Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\"" Dec 13 01:55:01.282467 containerd[2130]: time="2024-12-13T01:55:01.280899236Z" level=info msg="StartContainer 
for \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\"" Dec 13 01:55:01.339567 systemd[1]: run-containerd-runc-k8s.io-92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e-runc.UENaRY.mount: Deactivated successfully. Dec 13 01:55:01.398923 containerd[2130]: time="2024-12-13T01:55:01.398658921Z" level=info msg="StartContainer for \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\" returns successfully" Dec 13 01:55:02.273740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e-rootfs.mount: Deactivated successfully. Dec 13 01:55:02.413707 kubelet[3623]: I1213 01:55:02.413632 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-96bgf" podStartSLOduration=14.091494185 podStartE2EDuration="17.413543086s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="2024-12-13 01:54:47.911119474 +0000 UTC m=+13.977580858" lastFinishedPulling="2024-12-13 01:54:51.233168363 +0000 UTC m=+17.299629759" observedRunningTime="2024-12-13 01:54:52.369304224 +0000 UTC m=+18.435765632" watchObservedRunningTime="2024-12-13 01:55:02.413543086 +0000 UTC m=+28.480004482" Dec 13 01:55:02.522693 containerd[2130]: time="2024-12-13T01:55:02.522415055Z" level=info msg="shim disconnected" id=92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e namespace=k8s.io Dec 13 01:55:02.522693 containerd[2130]: time="2024-12-13T01:55:02.522488555Z" level=warning msg="cleaning up after shim disconnected" id=92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e namespace=k8s.io Dec 13 01:55:02.522693 containerd[2130]: time="2024-12-13T01:55:02.522508535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:03.401785 containerd[2130]: time="2024-12-13T01:55:03.401228075Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:55:03.437714 containerd[2130]: time="2024-12-13T01:55:03.437464607Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\"" Dec 13 01:55:03.438826 containerd[2130]: time="2024-12-13T01:55:03.438452543Z" level=info msg="StartContainer for \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\"" Dec 13 01:55:03.551668 containerd[2130]: time="2024-12-13T01:55:03.550968732Z" level=info msg="StartContainer for \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\" returns successfully" Dec 13 01:55:03.581016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:55:03.581991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:55:03.582456 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:55:03.604077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:55:03.677309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3-rootfs.mount: Deactivated successfully. Dec 13 01:55:03.689338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:55:03.699366 containerd[2130]: time="2024-12-13T01:55:03.698788596Z" level=info msg="shim disconnected" id=700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3 namespace=k8s.io Dec 13 01:55:03.699366 containerd[2130]: time="2024-12-13T01:55:03.698888448Z" level=warning msg="cleaning up after shim disconnected" id=700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3 namespace=k8s.io Dec 13 01:55:03.699366 containerd[2130]: time="2024-12-13T01:55:03.699066264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:04.407833 containerd[2130]: time="2024-12-13T01:55:04.407775396Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:55:04.446078 containerd[2130]: time="2024-12-13T01:55:04.445865556Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\"" Dec 13 01:55:04.448236 containerd[2130]: time="2024-12-13T01:55:04.447971556Z" level=info msg="StartContainer for \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\"" Dec 13 01:55:04.564479 containerd[2130]: time="2024-12-13T01:55:04.564407269Z" level=info msg="StartContainer for \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\" returns successfully" Dec 13 01:55:04.636646 containerd[2130]: time="2024-12-13T01:55:04.635903533Z" level=error msg="collecting metrics for 427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa" error="cgroups: cgroup deleted: unknown" Dec 13 01:55:04.637492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa-rootfs.mount: Deactivated successfully. 
Dec 13 01:55:04.692646 containerd[2130]: time="2024-12-13T01:55:04.692296513Z" level=info msg="shim disconnected" id=427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa namespace=k8s.io Dec 13 01:55:04.692646 containerd[2130]: time="2024-12-13T01:55:04.692493865Z" level=warning msg="cleaning up after shim disconnected" id=427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa namespace=k8s.io Dec 13 01:55:04.692646 containerd[2130]: time="2024-12-13T01:55:04.692519665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:05.411280 containerd[2130]: time="2024-12-13T01:55:05.411172441Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:55:05.437661 containerd[2130]: time="2024-12-13T01:55:05.435552913Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\"" Dec 13 01:55:05.437661 containerd[2130]: time="2024-12-13T01:55:05.437082001Z" level=info msg="StartContainer for \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\"" Dec 13 01:55:05.552894 containerd[2130]: time="2024-12-13T01:55:05.552724514Z" level=info msg="StartContainer for \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\" returns successfully" Dec 13 01:55:05.585110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda-rootfs.mount: Deactivated successfully. Dec 13 01:55:05.596911 containerd[2130]: time="2024-12-13T01:55:05.596823182Z" level=info msg="shim disconnected" id=93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda namespace=k8s.io Dec 13 01:55:05.596911 containerd[2130]: time="2024-12-13T01:55:05.596906294Z" level=warning msg="cleaning up after shim disconnected" id=93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda namespace=k8s.io Dec 13 01:55:05.597671 containerd[2130]: time="2024-12-13T01:55:05.596929382Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:06.425152 containerd[2130]: time="2024-12-13T01:55:06.425010410Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:55:06.459990 containerd[2130]: time="2024-12-13T01:55:06.459482846Z" level=info msg="CreateContainer within sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\"" Dec 13 01:55:06.465043 containerd[2130]: time="2024-12-13T01:55:06.463571894Z" level=info msg="StartContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\"" Dec 13 01:55:06.577552 containerd[2130]: time="2024-12-13T01:55:06.577446063Z" level=info msg="StartContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" returns successfully" Dec 13 01:55:06.738876 kubelet[3623]: I1213 01:55:06.738342 3623 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:55:06.794774 kubelet[3623]: I1213 01:55:06.791200 3623 topology_manager.go:215] "Topology Admit Handler" 
podUID="b5a7a145-d72a-4a6a-a268-acf104a6c45d" podNamespace="kube-system" podName="coredns-76f75df574-8lfbx" Dec 13 01:55:06.798497 kubelet[3623]: I1213 01:55:06.798396 3623 topology_manager.go:215] "Topology Admit Handler" podUID="7dc27534-080e-4b6e-b6f3-eb8222d2f473" podNamespace="kube-system" podName="coredns-76f75df574-658tp" Dec 13 01:55:06.895373 kubelet[3623]: I1213 01:55:06.895241 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5a7a145-d72a-4a6a-a268-acf104a6c45d-config-volume\") pod \"coredns-76f75df574-8lfbx\" (UID: \"b5a7a145-d72a-4a6a-a268-acf104a6c45d\") " pod="kube-system/coredns-76f75df574-8lfbx" Dec 13 01:55:06.895536 kubelet[3623]: I1213 01:55:06.895396 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2x4\" (UniqueName: \"kubernetes.io/projected/7dc27534-080e-4b6e-b6f3-eb8222d2f473-kube-api-access-6t2x4\") pod \"coredns-76f75df574-658tp\" (UID: \"7dc27534-080e-4b6e-b6f3-eb8222d2f473\") " pod="kube-system/coredns-76f75df574-658tp" Dec 13 01:55:06.895536 kubelet[3623]: I1213 01:55:06.895451 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwsc7\" (UniqueName: \"kubernetes.io/projected/b5a7a145-d72a-4a6a-a268-acf104a6c45d-kube-api-access-qwsc7\") pod \"coredns-76f75df574-8lfbx\" (UID: \"b5a7a145-d72a-4a6a-a268-acf104a6c45d\") " pod="kube-system/coredns-76f75df574-8lfbx" Dec 13 01:55:06.895536 kubelet[3623]: I1213 01:55:06.895500 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc27534-080e-4b6e-b6f3-eb8222d2f473-config-volume\") pod \"coredns-76f75df574-658tp\" (UID: \"7dc27534-080e-4b6e-b6f3-eb8222d2f473\") " pod="kube-system/coredns-76f75df574-658tp" Dec 13 01:55:07.112380 containerd[2130]: time="2024-12-13T01:55:07.110899093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lfbx,Uid:b5a7a145-d72a-4a6a-a268-acf104a6c45d,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:07.133116 containerd[2130]: time="2024-12-13T01:55:07.132976442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-658tp,Uid:7dc27534-080e-4b6e-b6f3-eb8222d2f473,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:09.392067 (udev-worker)[4413]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:09.399680 (udev-worker)[4411]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:09.401297 systemd-networkd[1692]: cilium_host: Link UP Dec 13 01:55:09.401944 systemd-networkd[1692]: cilium_net: Link UP Dec 13 01:55:09.403927 systemd-networkd[1692]: cilium_net: Gained carrier Dec 13 01:55:09.406595 systemd-networkd[1692]: cilium_host: Gained carrier Dec 13 01:55:09.442243 systemd-networkd[1692]: cilium_net: Gained IPv6LL Dec 13 01:55:09.526001 systemd-networkd[1692]: cilium_host: Gained IPv6LL Dec 13 01:55:09.597860 systemd-networkd[1692]: cilium_vxlan: Link UP Dec 13 01:55:09.597879 systemd-networkd[1692]: cilium_vxlan: Gained carrier Dec 13 01:55:10.128667 kernel: NET: Registered PF_ALG protocol family Dec 13 01:55:11.507691 (udev-worker)[4457]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:11.519230 systemd-networkd[1692]: lxc_health: Link UP Dec 13 01:55:11.528342 systemd-networkd[1692]: lxc_health: Gained carrier Dec 13 01:55:11.559658 systemd-networkd[1692]: cilium_vxlan: Gained IPv6LL Dec 13 01:55:11.876428 kubelet[3623]: I1213 01:55:11.873550 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fxqtr" podStartSLOduration=13.672325088000001 podStartE2EDuration="26.873455385s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="2024-12-13 01:54:48.052391923 +0000 UTC m=+14.118853319" lastFinishedPulling="2024-12-13 01:55:01.25352222 +0000 UTC m=+27.319983616" observedRunningTime="2024-12-13 01:55:07.478656183 +0000 UTC m=+33.545117711" watchObservedRunningTime="2024-12-13 01:55:11.873455385 +0000 UTC m=+37.939916805" Dec 13 01:55:12.285587 systemd-networkd[1692]: lxcdef31a537c08: Link UP Dec 13 01:55:12.300519 kernel: eth0: renamed from tmpacae0 Dec 13 01:55:12.320403 systemd-networkd[1692]: lxcdef31a537c08: Gained carrier Dec 13 01:55:12.326177 systemd-networkd[1692]: lxcce783741e693: Link UP Dec 13 01:55:12.327111 (udev-worker)[4463]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:12.335654 kernel: eth0: renamed from tmpfc847 Dec 13 01:55:12.345831 systemd-networkd[1692]: lxcce783741e693: Gained carrier Dec 13 01:55:12.965862 systemd-networkd[1692]: lxc_health: Gained IPv6LL Dec 13 01:55:14.057510 systemd-networkd[1692]: lxcce783741e693: Gained IPv6LL Dec 13 01:55:14.182355 systemd-networkd[1692]: lxcdef31a537c08: Gained IPv6LL Dec 13 01:55:15.159131 kubelet[3623]: I1213 01:55:15.159062 3623 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:16.436937 ntpd[2087]: Listen normally on 6 cilium_host 192.168.0.39:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 6 cilium_host 192.168.0.39:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 7 cilium_net [fe80::18a2:b3ff:fe02:8b65%4]:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 8 cilium_host [fe80::c4d6:feff:fe9d:b777%5]:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 9 cilium_vxlan [fe80::ac9d:a9ff:fe09:eb98%6]:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 10 lxc_health [fe80::3810:afff:fe26:862%8]:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 11 lxcdef31a537c08 [fe80::843c:eff:feb4:a89%10]:123 Dec 13 01:55:16.437798 ntpd[2087]: 13 Dec 01:55:16 ntpd[2087]: Listen normally on 12 lxcce783741e693 [fe80::301a:8bff:fe50:8992%12]:123 Dec 13 01:55:16.437065 ntpd[2087]: Listen normally on 7 cilium_net [fe80::18a2:b3ff:fe02:8b65%4]:123 Dec 13 01:55:16.437149 ntpd[2087]: Listen normally on 8 cilium_host [fe80::c4d6:feff:fe9d:b777%5]:123 Dec 13 01:55:16.437216 ntpd[2087]: Listen normally on 9 cilium_vxlan [fe80::ac9d:a9ff:fe09:eb98%6]:123 Dec 13 01:55:16.437287 ntpd[2087]: Listen normally on 10 lxc_health [fe80::3810:afff:fe26:862%8]:123 Dec 13 01:55:16.437353 ntpd[2087]: Listen normally on 11 lxcdef31a537c08 [fe80::843c:eff:feb4:a89%10]:123 Dec 13 01:55:16.437420 ntpd[2087]: Listen normally on 12 lxcce783741e693 [fe80::301a:8bff:fe50:8992%12]:123 Dec 13 01:55:17.521744 systemd[1]: Started sshd@7-172.31.16.194:22-139.178.68.195:49378.service - OpenSSH per-connection server daemon (139.178.68.195:49378). 
Dec 13 01:55:17.725945 sshd[4809]: Accepted publickey for core from 139.178.68.195 port 49378 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:17.729215 sshd[4809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:17.739592 systemd-logind[2107]: New session 8 of user core. Dec 13 01:55:17.749317 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:55:18.099358 sshd[4809]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:18.111047 systemd[1]: sshd@7-172.31.16.194:22-139.178.68.195:49378.service: Deactivated successfully. Dec 13 01:55:18.127036 systemd-logind[2107]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:55:18.129467 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:55:18.138788 systemd-logind[2107]: Removed session 8. Dec 13 01:55:21.562273 containerd[2130]: time="2024-12-13T01:55:21.561990005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:21.562273 containerd[2130]: time="2024-12-13T01:55:21.562095341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:21.562273 containerd[2130]: time="2024-12-13T01:55:21.562132013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.564216 containerd[2130]: time="2024-12-13T01:55:21.563956481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.660873 systemd[1]: run-containerd-runc-k8s.io-acae0af747b829351d9d941032db399be032ead5ed429fd4df277a048211fccb-runc.lpKQwd.mount: Deactivated successfully. Dec 13 01:55:21.695567 containerd[2130]: time="2024-12-13T01:55:21.695316234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:21.695567 containerd[2130]: time="2024-12-13T01:55:21.695497446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:21.696376 containerd[2130]: time="2024-12-13T01:55:21.695562378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.696376 containerd[2130]: time="2024-12-13T01:55:21.696398826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:21.852632 containerd[2130]: time="2024-12-13T01:55:21.850907731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-658tp,Uid:7dc27534-080e-4b6e-b6f3-eb8222d2f473,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc847662b646a82b86aeb7f77a25bec6164fb8e1265b5cd8cad8d557eec4abd9\"" Dec 13 01:55:21.863423 containerd[2130]: time="2024-12-13T01:55:21.863371819Z" level=info msg="CreateContainer within sandbox \"fc847662b646a82b86aeb7f77a25bec6164fb8e1265b5cd8cad8d557eec4abd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:21.895325 containerd[2130]: time="2024-12-13T01:55:21.895103131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-8lfbx,Uid:b5a7a145-d72a-4a6a-a268-acf104a6c45d,Namespace:kube-system,Attempt:0,} returns sandbox id \"acae0af747b829351d9d941032db399be032ead5ed429fd4df277a048211fccb\"" Dec 13 01:55:21.905378 containerd[2130]: time="2024-12-13T01:55:21.905115391Z" level=info msg="CreateContainer within sandbox \"acae0af747b829351d9d941032db399be032ead5ed429fd4df277a048211fccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:21.924859 containerd[2130]: time="2024-12-13T01:55:21.924784975Z" level=info msg="CreateContainer within sandbox \"fc847662b646a82b86aeb7f77a25bec6164fb8e1265b5cd8cad8d557eec4abd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21ec5793cd7d362caa97ec1fda3629b2eeac5990e78f3b2ff5bb61d630280faa\"" Dec 13 01:55:21.927709 containerd[2130]: time="2024-12-13T01:55:21.927506983Z" level=info msg="StartContainer for \"21ec5793cd7d362caa97ec1fda3629b2eeac5990e78f3b2ff5bb61d630280faa\"" Dec 13 01:55:21.931513 containerd[2130]: time="2024-12-13T01:55:21.931457371Z" level=info msg="CreateContainer within sandbox \"acae0af747b829351d9d941032db399be032ead5ed429fd4df277a048211fccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65a82c55365377c9f10bd4937b618483a6383b10c6efd0e84b6b0f52e08eab89\"" Dec 13 01:55:21.933967 containerd[2130]: time="2024-12-13T01:55:21.933893323Z" level=info msg="StartContainer for \"65a82c55365377c9f10bd4937b618483a6383b10c6efd0e84b6b0f52e08eab89\"" Dec 13 01:55:22.061290 containerd[2130]: time="2024-12-13T01:55:22.060592948Z" level=info msg="StartContainer for \"21ec5793cd7d362caa97ec1fda3629b2eeac5990e78f3b2ff5bb61d630280faa\" returns successfully" Dec 13 01:55:22.098095 containerd[2130]: time="2024-12-13T01:55:22.098033524Z" level=info msg="StartContainer for \"65a82c55365377c9f10bd4937b618483a6383b10c6efd0e84b6b0f52e08eab89\" returns successfully" Dec 13 01:55:22.516733 kubelet[3623]: I1213 01:55:22.516670 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-658tp" podStartSLOduration=37.516566958 podStartE2EDuration="37.516566958s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:22.512279178 +0000 UTC m=+48.578740610" watchObservedRunningTime="2024-12-13 01:55:22.516566958 +0000 UTC m=+48.583028366" Dec 13 01:55:22.537650 kubelet[3623]: I1213 01:55:22.535366 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-8lfbx" podStartSLOduration=37.535308642 podStartE2EDuration="37.535308642s" podCreationTimestamp="2024-12-13 01:54:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:22.533509098 +0000 UTC m=+48.599970494" watchObservedRunningTime="2024-12-13 01:55:22.535308642 +0000 UTC m=+48.601770038" Dec 13 01:55:23.131790 systemd[1]: Started sshd@8-172.31.16.194:22-139.178.68.195:49390.service - OpenSSH per-connection server daemon (139.178.68.195:49390). Dec 13 01:55:23.319507 sshd[4996]: Accepted publickey for core from 139.178.68.195 port 49390 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:23.322540 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:23.334427 systemd-logind[2107]: New session 9 of user core. Dec 13 01:55:23.340211 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:55:23.595259 sshd[4996]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:23.605480 systemd-logind[2107]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:55:23.606560 systemd[1]: sshd@8-172.31.16.194:22-139.178.68.195:49390.service: Deactivated successfully. Dec 13 01:55:23.612573 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:55:23.615349 systemd-logind[2107]: Removed session 9. Dec 13 01:55:28.628244 systemd[1]: Started sshd@9-172.31.16.194:22-139.178.68.195:43374.service - OpenSSH per-connection server daemon (139.178.68.195:43374). Dec 13 01:55:28.815857 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 43374 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:28.818402 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:28.826820 systemd-logind[2107]: New session 10 of user core. Dec 13 01:55:28.833091 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:55:29.073532 sshd[5011]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:29.081541 systemd[1]: sshd@9-172.31.16.194:22-139.178.68.195:43374.service: Deactivated successfully. Dec 13 01:55:29.088376 systemd-logind[2107]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:55:29.089130 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:55:29.092490 systemd-logind[2107]: Removed session 10. Dec 13 01:55:34.106069 systemd[1]: Started sshd@10-172.31.16.194:22-139.178.68.195:43386.service - OpenSSH per-connection server daemon (139.178.68.195:43386). Dec 13 01:55:34.280542 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 43386 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:34.283296 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:34.292278 systemd-logind[2107]: New session 11 of user core. Dec 13 01:55:34.298374 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:55:34.546951 sshd[5026]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:34.554827 systemd[1]: sshd@10-172.31.16.194:22-139.178.68.195:43386.service: Deactivated successfully. Dec 13 01:55:34.561240 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:55:34.562974 systemd-logind[2107]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:55:34.564953 systemd-logind[2107]: Removed session 11. Dec 13 01:55:39.578180 systemd[1]: Started sshd@11-172.31.16.194:22-139.178.68.195:58216.service - OpenSSH per-connection server daemon (139.178.68.195:58216). 
Dec 13 01:55:39.770474 sshd[5044]: Accepted publickey for core from 139.178.68.195 port 58216 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:39.773994 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:39.785240 systemd-logind[2107]: New session 12 of user core. Dec 13 01:55:39.793506 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:55:40.064976 sshd[5044]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:40.071928 systemd[1]: sshd@11-172.31.16.194:22-139.178.68.195:58216.service: Deactivated successfully. Dec 13 01:55:40.081377 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:55:40.082991 systemd-logind[2107]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:55:40.087578 systemd-logind[2107]: Removed session 12. Dec 13 01:55:40.113183 systemd[1]: Started sshd@12-172.31.16.194:22-139.178.68.195:58222.service - OpenSSH per-connection server daemon (139.178.68.195:58222). Dec 13 01:55:40.299956 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 58222 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:40.302555 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:40.310939 systemd-logind[2107]: New session 13 of user core. Dec 13 01:55:40.320305 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:55:40.669940 sshd[5059]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:40.687184 systemd[1]: sshd@12-172.31.16.194:22-139.178.68.195:58222.service: Deactivated successfully. Dec 13 01:55:40.701097 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:55:40.703417 systemd-logind[2107]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:55:40.721755 systemd[1]: Started sshd@13-172.31.16.194:22-139.178.68.195:58228.service - OpenSSH per-connection server daemon (139.178.68.195:58228). Dec 13 01:55:40.723811 systemd-logind[2107]: Removed session 13. Dec 13 01:55:40.905639 sshd[5071]: Accepted publickey for core from 139.178.68.195 port 58228 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:40.909052 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:40.920586 systemd-logind[2107]: New session 14 of user core. Dec 13 01:55:40.925197 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:55:41.179310 sshd[5071]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:41.187213 systemd[1]: sshd@13-172.31.16.194:22-139.178.68.195:58228.service: Deactivated successfully. Dec 13 01:55:41.196813 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:55:41.199136 systemd-logind[2107]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:55:41.201555 systemd-logind[2107]: Removed session 14. Dec 13 01:55:46.211276 systemd[1]: Started sshd@14-172.31.16.194:22-139.178.68.195:59034.service - OpenSSH per-connection server daemon (139.178.68.195:59034). Dec 13 01:55:46.393384 sshd[5087]: Accepted publickey for core from 139.178.68.195 port 59034 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:46.396329 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:46.404759 systemd-logind[2107]: New session 15 of user core. Dec 13 01:55:46.411492 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 01:55:46.667361 sshd[5087]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:46.674050 systemd[1]: sshd@14-172.31.16.194:22-139.178.68.195:59034.service: Deactivated successfully. Dec 13 01:55:46.682251 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:55:46.683786 systemd-logind[2107]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:55:46.685864 systemd-logind[2107]: Removed session 15. Dec 13 01:55:51.698688 systemd[1]: Started sshd@15-172.31.16.194:22-139.178.68.195:59036.service - OpenSSH per-connection server daemon (139.178.68.195:59036). Dec 13 01:55:51.873341 sshd[5103]: Accepted publickey for core from 139.178.68.195 port 59036 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:51.876158 sshd[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:51.884712 systemd-logind[2107]: New session 16 of user core. Dec 13 01:55:51.892325 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:55:52.135357 sshd[5103]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:52.144008 systemd[1]: sshd@15-172.31.16.194:22-139.178.68.195:59036.service: Deactivated successfully. Dec 13 01:55:52.144106 systemd-logind[2107]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:55:52.150979 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:55:52.152655 systemd-logind[2107]: Removed session 16. Dec 13 01:55:57.170472 systemd[1]: Started sshd@16-172.31.16.194:22-139.178.68.195:42340.service - OpenSSH per-connection server daemon (139.178.68.195:42340). Dec 13 01:55:57.349514 sshd[5120]: Accepted publickey for core from 139.178.68.195 port 42340 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:57.352651 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:57.362064 systemd-logind[2107]: New session 17 of user core. Dec 13 01:55:57.368672 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:55:57.648823 sshd[5120]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:57.655594 systemd[1]: sshd@16-172.31.16.194:22-139.178.68.195:42340.service: Deactivated successfully. Dec 13 01:55:57.665155 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:55:57.669746 systemd-logind[2107]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:55:57.672529 systemd-logind[2107]: Removed session 17. Dec 13 01:56:02.680102 systemd[1]: Started sshd@17-172.31.16.194:22-139.178.68.195:42342.service - OpenSSH per-connection server daemon (139.178.68.195:42342). Dec 13 01:56:02.852238 sshd[5135]: Accepted publickey for core from 139.178.68.195 port 42342 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:02.855015 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:02.863852 systemd-logind[2107]: New session 18 of user core. Dec 13 01:56:02.878155 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:56:03.129951 sshd[5135]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:03.137988 systemd[1]: sshd@17-172.31.16.194:22-139.178.68.195:42342.service: Deactivated successfully. Dec 13 01:56:03.145161 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:56:03.146843 systemd-logind[2107]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:56:03.148621 systemd-logind[2107]: Removed session 18. 
Dec 13 01:56:03.163109 systemd[1]: Started sshd@18-172.31.16.194:22-139.178.68.195:42346.service - OpenSSH per-connection server daemon (139.178.68.195:42346). Dec 13 01:56:03.345226 sshd[5149]: Accepted publickey for core from 139.178.68.195 port 42346 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:03.348089 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:03.356893 systemd-logind[2107]: New session 19 of user core. Dec 13 01:56:03.363333 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:56:03.673685 sshd[5149]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:03.679986 systemd-logind[2107]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:56:03.681024 systemd[1]: sshd@18-172.31.16.194:22-139.178.68.195:42346.service: Deactivated successfully. Dec 13 01:56:03.687941 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:56:03.694011 systemd-logind[2107]: Removed session 19. Dec 13 01:56:03.705102 systemd[1]: Started sshd@19-172.31.16.194:22-139.178.68.195:42362.service - OpenSSH per-connection server daemon (139.178.68.195:42362). Dec 13 01:56:03.877120 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 42362 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:03.880104 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:03.892131 systemd-logind[2107]: New session 20 of user core. Dec 13 01:56:03.901275 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:56:06.453514 sshd[5161]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:06.471022 systemd[1]: sshd@19-172.31.16.194:22-139.178.68.195:42362.service: Deactivated successfully. Dec 13 01:56:06.485640 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:56:06.493305 systemd-logind[2107]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:56:06.511317 systemd[1]: Started sshd@20-172.31.16.194:22-139.178.68.195:59438.service - OpenSSH per-connection server daemon (139.178.68.195:59438). Dec 13 01:56:06.513680 systemd-logind[2107]: Removed session 20. Dec 13 01:56:06.688646 sshd[5180]: Accepted publickey for core from 139.178.68.195 port 59438 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:06.691759 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:06.704250 systemd-logind[2107]: New session 21 of user core. Dec 13 01:56:06.710351 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:56:07.208066 sshd[5180]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:07.219719 systemd-logind[2107]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:56:07.222195 systemd[1]: sshd@20-172.31.16.194:22-139.178.68.195:59438.service: Deactivated successfully. Dec 13 01:56:07.230332 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:56:07.243126 systemd[1]: Started sshd@21-172.31.16.194:22-139.178.68.195:59444.service - OpenSSH per-connection server daemon (139.178.68.195:59444). Dec 13 01:56:07.244874 systemd-logind[2107]: Removed session 21. 
Dec 13 01:56:07.421521 sshd[5192]: Accepted publickey for core from 139.178.68.195 port 59444 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:07.424323 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:07.434161 systemd-logind[2107]: New session 22 of user core. Dec 13 01:56:07.440246 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:56:07.683868 sshd[5192]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:07.691288 systemd[1]: sshd@21-172.31.16.194:22-139.178.68.195:59444.service: Deactivated successfully. Dec 13 01:56:07.701902 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:56:07.704096 systemd-logind[2107]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:56:07.707226 systemd-logind[2107]: Removed session 22. Dec 13 01:56:12.715134 systemd[1]: Started sshd@22-172.31.16.194:22-139.178.68.195:59460.service - OpenSSH per-connection server daemon (139.178.68.195:59460). Dec 13 01:56:12.903857 sshd[5206]: Accepted publickey for core from 139.178.68.195 port 59460 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:12.906681 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:12.914070 systemd-logind[2107]: New session 23 of user core. Dec 13 01:56:12.920331 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:56:13.166366 sshd[5206]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:13.173359 systemd[1]: sshd@22-172.31.16.194:22-139.178.68.195:59460.service: Deactivated successfully. Dec 13 01:56:13.179381 systemd-logind[2107]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:56:13.180574 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:56:13.183292 systemd-logind[2107]: Removed session 23. Dec 13 01:56:18.199554 systemd[1]: Started sshd@23-172.31.16.194:22-139.178.68.195:43578.service - OpenSSH per-connection server daemon (139.178.68.195:43578). Dec 13 01:56:18.390402 sshd[5226]: Accepted publickey for core from 139.178.68.195 port 43578 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:18.395141 sshd[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:18.404257 systemd-logind[2107]: New session 24 of user core. Dec 13 01:56:18.410099 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:56:18.655479 sshd[5226]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:18.662953 systemd[1]: sshd@23-172.31.16.194:22-139.178.68.195:43578.service: Deactivated successfully. Dec 13 01:56:18.671105 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:56:18.672857 systemd-logind[2107]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:56:18.674693 systemd-logind[2107]: Removed session 24. Dec 13 01:56:23.689197 systemd[1]: Started sshd@24-172.31.16.194:22-139.178.68.195:43592.service - OpenSSH per-connection server daemon (139.178.68.195:43592). Dec 13 01:56:23.872460 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 43592 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:23.875198 sshd[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:23.883393 systemd-logind[2107]: New session 25 of user core. Dec 13 01:56:23.890310 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 13 01:56:24.134173 sshd[5240]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:24.141717 systemd[1]: sshd@24-172.31.16.194:22-139.178.68.195:43592.service: Deactivated successfully. Dec 13 01:56:24.142015 systemd-logind[2107]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:56:24.148752 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:56:24.151165 systemd-logind[2107]: Removed session 25. Dec 13 01:56:29.166104 systemd[1]: Started sshd@25-172.31.16.194:22-139.178.68.195:44710.service - OpenSSH per-connection server daemon (139.178.68.195:44710). Dec 13 01:56:29.340350 sshd[5253]: Accepted publickey for core from 139.178.68.195 port 44710 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:29.343741 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:29.353222 systemd-logind[2107]: New session 26 of user core. Dec 13 01:56:29.359346 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:56:29.607020 sshd[5253]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:29.612368 systemd[1]: sshd@25-172.31.16.194:22-139.178.68.195:44710.service: Deactivated successfully. Dec 13 01:56:29.621542 systemd-logind[2107]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:56:29.622872 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:56:29.625950 systemd-logind[2107]: Removed session 26. Dec 13 01:56:29.642125 systemd[1]: Started sshd@26-172.31.16.194:22-139.178.68.195:44722.service - OpenSSH per-connection server daemon (139.178.68.195:44722). Dec 13 01:56:29.815895 sshd[5267]: Accepted publickey for core from 139.178.68.195 port 44722 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:29.818496 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:29.827742 systemd-logind[2107]: New session 27 of user core. Dec 13 01:56:29.834294 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:56:33.122543 containerd[2130]: time="2024-12-13T01:56:33.121416217Z" level=info msg="StopContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" with timeout 30 (s)" Dec 13 01:56:33.125185 containerd[2130]: time="2024-12-13T01:56:33.125108557Z" level=info msg="Stop container \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" with signal terminated" Dec 13 01:56:33.172965 containerd[2130]: time="2024-12-13T01:56:33.172906429Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:56:33.189325 containerd[2130]: time="2024-12-13T01:56:33.189261553Z" level=info msg="StopContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" with timeout 2 (s)" Dec 13 01:56:33.190147 containerd[2130]: time="2024-12-13T01:56:33.190010809Z" level=info msg="Stop container \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" with signal terminated" Dec 13 01:56:33.206446 systemd-networkd[1692]: lxc_health: Link DOWN Dec 13 01:56:33.206521 systemd-networkd[1692]: lxc_health: Lost carrier Dec 13 01:56:33.217933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:33.249837 containerd[2130]: time="2024-12-13T01:56:33.248662897Z" level=info msg="shim disconnected" id=c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3 namespace=k8s.io Dec 13 01:56:33.249837 containerd[2130]: time="2024-12-13T01:56:33.249182245Z" level=warning msg="cleaning up after shim disconnected" id=c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3 namespace=k8s.io Dec 13 01:56:33.249837 containerd[2130]: time="2024-12-13T01:56:33.249213673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.281980 containerd[2130]: time="2024-12-13T01:56:33.281768449Z" level=info msg="StopContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" returns successfully" Dec 13 01:56:33.283850 containerd[2130]: time="2024-12-13T01:56:33.283368877Z" level=info msg="StopPodSandbox for \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\"" Dec 13 01:56:33.283850 containerd[2130]: time="2024-12-13T01:56:33.283685185Z" level=info msg="Container to stop \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.291945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08-rootfs.mount: Deactivated successfully. Dec 13 01:56:33.299516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775-shm.mount: Deactivated successfully. Dec 13 01:56:33.300245 containerd[2130]: time="2024-12-13T01:56:33.299742206Z" level=info msg="shim disconnected" id=2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08 namespace=k8s.io Dec 13 01:56:33.300245 containerd[2130]: time="2024-12-13T01:56:33.299811926Z" level=warning msg="cleaning up after shim disconnected" id=2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08 namespace=k8s.io Dec 13 01:56:33.300245 containerd[2130]: time="2024-12-13T01:56:33.299831294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.338401 containerd[2130]: time="2024-12-13T01:56:33.338241410Z" level=info msg="StopContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" returns successfully" Dec 13 01:56:33.339952 containerd[2130]: time="2024-12-13T01:56:33.339893030Z" level=info msg="StopPodSandbox for \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\"" Dec 13 01:56:33.340125 containerd[2130]: time="2024-12-13T01:56:33.339970538Z" level=info msg="Container to stop \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.340125 containerd[2130]: time="2024-12-13T01:56:33.339999806Z" level=info msg="Container to stop \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.340125 containerd[2130]: time="2024-12-13T01:56:33.340024118Z" level=info msg="Container to stop \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.340125 containerd[2130]: time="2024-12-13T01:56:33.340053182Z" level=info msg="Container to stop \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 
01:56:33.340125 containerd[2130]: time="2024-12-13T01:56:33.340077242Z" level=info msg="Container to stop \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:56:33.343939 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e-shm.mount: Deactivated successfully. Dec 13 01:56:33.384530 containerd[2130]: time="2024-12-13T01:56:33.383579378Z" level=info msg="shim disconnected" id=2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775 namespace=k8s.io Dec 13 01:56:33.384530 containerd[2130]: time="2024-12-13T01:56:33.383698046Z" level=warning msg="cleaning up after shim disconnected" id=2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775 namespace=k8s.io Dec 13 01:56:33.384530 containerd[2130]: time="2024-12-13T01:56:33.383720750Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.422637 containerd[2130]: time="2024-12-13T01:56:33.420043106Z" level=info msg="TearDown network for sandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" successfully" Dec 13 01:56:33.422637 containerd[2130]: time="2024-12-13T01:56:33.420108626Z" level=info msg="StopPodSandbox for \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" returns successfully" Dec 13 01:56:33.422637 containerd[2130]: time="2024-12-13T01:56:33.421696418Z" level=info msg="shim disconnected" id=fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e namespace=k8s.io Dec 13 01:56:33.424473 containerd[2130]: time="2024-12-13T01:56:33.422059070Z" level=warning msg="cleaning up after shim disconnected" id=fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e namespace=k8s.io Dec 13 01:56:33.424473 containerd[2130]: time="2024-12-13T01:56:33.422685830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:33.455933 containerd[2130]: time="2024-12-13T01:56:33.455868158Z" level=info msg="TearDown network for sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" successfully" Dec 13 01:56:33.456247 containerd[2130]: time="2024-12-13T01:56:33.456108086Z" level=info msg="StopPodSandbox for \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" returns successfully" Dec 13 01:56:33.567566 kubelet[3623]: I1213 01:56:33.567501 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-net\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567587 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567660 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-hostproc\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567709 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567758 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjg6\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567803 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path\") pod \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\" (UID: \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\") " Dec 13 01:56:33.568239 kubelet[3623]: I1213 01:56:33.567844 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-hubble-tls\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.567888 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-lib-modules\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.567931 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-xtables-lock\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.567972 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-cgroup\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.568010 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cni-path\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.568047 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-etc-cni-netd\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568643 kubelet[3623]: I1213 01:56:33.568084 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-bpf-maps\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568984 kubelet[3623]: I1213 01:56:33.568124 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-run\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568984 kubelet[3623]: I1213 01:56:33.568163 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-kernel\") pod \"6f2564b3-56fc-41f6-a120-0d6592df6011\" (UID: \"6f2564b3-56fc-41f6-a120-0d6592df6011\") " Dec 13 01:56:33.568984 kubelet[3623]: I1213 01:56:33.568212 3623 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22kz\" (UniqueName: \"kubernetes.io/projected/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-kube-api-access-f22kz\") pod \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\" (UID: \"66f41f96-c2f9-4a02-b5d0-d7bc38745efa\") " Dec 13 01:56:33.571341 kubelet[3623]: I1213 01:56:33.569229 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571341 kubelet[3623]: I1213 01:56:33.569318 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571341 kubelet[3623]: I1213 01:56:33.569922 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571341 kubelet[3623]: I1213 01:56:33.569979 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571341 kubelet[3623]: I1213 01:56:33.570020 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571812 kubelet[3623]: I1213 01:56:33.570062 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571812 kubelet[3623]: I1213 01:56:33.570101 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571812 kubelet[3623]: I1213 01:56:33.570142 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.571812 kubelet[3623]: I1213 01:56:33.570180 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.575069 kubelet[3623]: I1213 01:56:33.574921 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:56:33.576069 kubelet[3623]: I1213 01:56:33.575883 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-kube-api-access-f22kz" (OuterVolumeSpecName: "kube-api-access-f22kz") pod "66f41f96-c2f9-4a02-b5d0-d7bc38745efa" (UID: "66f41f96-c2f9-4a02-b5d0-d7bc38745efa"). InnerVolumeSpecName "kube-api-access-f22kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:33.581320 kubelet[3623]: I1213 01:56:33.581148 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6" (OuterVolumeSpecName: "kube-api-access-5mjg6") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "kube-api-access-5mjg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:33.583591 kubelet[3623]: I1213 01:56:33.583446 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:56:33.583851 kubelet[3623]: I1213 01:56:33.583787 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:56:33.589705 kubelet[3623]: I1213 01:56:33.589573 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "66f41f96-c2f9-4a02-b5d0-d7bc38745efa" (UID: "66f41f96-c2f9-4a02-b5d0-d7bc38745efa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:56:33.590396 kubelet[3623]: I1213 01:56:33.589934 3623 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f2564b3-56fc-41f6-a120-0d6592df6011" (UID: "6f2564b3-56fc-41f6-a120-0d6592df6011"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:56:33.669504 kubelet[3623]: I1213 01:56:33.669352 3623 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f22kz\" (UniqueName: \"kubernetes.io/projected/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-kube-api-access-f22kz\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669504 kubelet[3623]: I1213 01:56:33.669407 3623 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f2564b3-56fc-41f6-a120-0d6592df6011-clustermesh-secrets\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669504 kubelet[3623]: I1213 01:56:33.669437 3623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-net\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669504 kubelet[3623]: I1213 01:56:33.669464 3623 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-hostproc\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669504 kubelet[3623]: I1213 01:56:33.669502 3623 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-config-path\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669887 kubelet[3623]: I1213 01:56:33.669532 3623 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5mjg6\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-kube-api-access-5mjg6\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669887 kubelet[3623]: I1213 01:56:33.669558 3623 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66f41f96-c2f9-4a02-b5d0-d7bc38745efa-cilium-config-path\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.669887 kubelet[3623]: I1213 01:56:33.669585 3623 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f2564b3-56fc-41f6-a120-0d6592df6011-hubble-tls\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.670359 3623 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-lib-modules\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.670417 3623 reconciler_common.go:300] 
"Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-etc-cni-netd\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.670444 3623 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-xtables-lock\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.670469 3623 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-cgroup\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.671832 3623 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cni-path\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.671877 3623 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-bpf-maps\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.671927 3623 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-cilium-run\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.672020 kubelet[3623]: I1213 01:56:33.671959 3623 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f2564b3-56fc-41f6-a120-0d6592df6011-host-proc-sys-kernel\") on node \"ip-172-31-16-194\" DevicePath \"\"" Dec 13 01:56:33.690396 kubelet[3623]: I1213 01:56:33.690145 3623 scope.go:117] "RemoveContainer" containerID="c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3" Dec 13 01:56:33.697328 containerd[2130]: time="2024-12-13T01:56:33.697149112Z" level=info msg="RemoveContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\"" Dec 13 01:56:33.711543 containerd[2130]: time="2024-12-13T01:56:33.711347704Z" level=info msg="RemoveContainer for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" returns successfully" Dec 13 01:56:33.713424 kubelet[3623]: I1213 01:56:33.713385 3623 scope.go:117] "RemoveContainer" containerID="c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3" Dec 13 01:56:33.716339 containerd[2130]: time="2024-12-13T01:56:33.715146196Z" level=error msg="ContainerStatus for \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\": not found" Dec 13 01:56:33.716779 kubelet[3623]: E1213 01:56:33.715881 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\": not found" containerID="c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3" Dec 13 01:56:33.716779 kubelet[3623]: I1213 01:56:33.716020 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3"} err="failed to 
get container status \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c57e6a0f40055b9ed0952d44773408e1a7080f3674889b90ba117b22d63db8e3\": not found" Dec 13 01:56:33.716779 kubelet[3623]: I1213 01:56:33.716047 3623 scope.go:117] "RemoveContainer" containerID="2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08" Dec 13 01:56:33.725159 containerd[2130]: time="2024-12-13T01:56:33.724639852Z" level=info msg="RemoveContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\"" Dec 13 01:56:33.738200 containerd[2130]: time="2024-12-13T01:56:33.738117712Z" level=info msg="RemoveContainer for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" returns successfully" Dec 13 01:56:33.739628 kubelet[3623]: I1213 01:56:33.739050 3623 scope.go:117] "RemoveContainer" containerID="93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda" Dec 13 01:56:33.749810 containerd[2130]: time="2024-12-13T01:56:33.749761756Z" level=info msg="RemoveContainer for \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\"" Dec 13 01:56:33.761132 containerd[2130]: time="2024-12-13T01:56:33.761012896Z" level=info msg="RemoveContainer for \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\" returns successfully" Dec 13 01:56:33.762300 kubelet[3623]: I1213 01:56:33.762267 3623 scope.go:117] "RemoveContainer" containerID="427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa" Dec 13 01:56:33.764714 containerd[2130]: time="2024-12-13T01:56:33.764662912Z" level=info msg="RemoveContainer for \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\"" Dec 13 01:56:33.770142 containerd[2130]: time="2024-12-13T01:56:33.770049892Z" level=info msg="RemoveContainer for \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\" returns successfully" Dec 13 01:56:33.770438 kubelet[3623]: I1213 01:56:33.770394 3623 scope.go:117] "RemoveContainer" containerID="700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3" Dec 13 01:56:33.773206 containerd[2130]: time="2024-12-13T01:56:33.772831444Z" level=info msg="RemoveContainer for \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\"" Dec 13 01:56:33.778399 containerd[2130]: time="2024-12-13T01:56:33.778347352Z" level=info msg="RemoveContainer for \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\" returns successfully" Dec 13 01:56:33.779040 kubelet[3623]: I1213 01:56:33.778906 3623 scope.go:117] "RemoveContainer" containerID="92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e" Dec 13 01:56:33.780851 containerd[2130]: time="2024-12-13T01:56:33.780797260Z" level=info msg="RemoveContainer for \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\"" Dec 13 01:56:33.786380 containerd[2130]: time="2024-12-13T01:56:33.786236224Z" level=info msg="RemoveContainer for \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\" returns successfully" Dec 13 01:56:33.786970 kubelet[3623]: I1213 01:56:33.786721 3623 scope.go:117] "RemoveContainer" containerID="2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08" Dec 13 01:56:33.787265 containerd[2130]: time="2024-12-13T01:56:33.787182688Z" level=error msg="ContainerStatus for \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\": not found" Dec 13 01:56:33.787836 kubelet[3623]: E1213 01:56:33.787652 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\": not found" containerID="2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08" Dec 13 01:56:33.788168 kubelet[3623]: I1213 01:56:33.787717 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08"} err="failed to get container status \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ccc52378c7d01c531db1cf02ca3578f6400572c185d08e67f7ee25ef294bd08\": not found" Dec 13 01:56:33.788168 kubelet[3623]: I1213 01:56:33.788000 3623 scope.go:117] "RemoveContainer" containerID="93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda" Dec 13 01:56:33.788909 containerd[2130]: time="2024-12-13T01:56:33.788640796Z" level=error msg="ContainerStatus for \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\": not found" Dec 13 01:56:33.789072 kubelet[3623]: E1213 01:56:33.788946 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\": not found" containerID="93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda" Dec 13 01:56:33.789072 kubelet[3623]: I1213 01:56:33.789002 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda"} err="failed to get container status \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\": rpc error: code = NotFound desc = an error occurred when try to find container \"93e4efecf4a6ef80303cf002770cb57866897c92dc6585c499d79d4ac058bdda\": not found" Dec 13 01:56:33.789072 kubelet[3623]: I1213 01:56:33.789034 3623 scope.go:117] "RemoveContainer" containerID="427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa" Dec 13 01:56:33.789873 containerd[2130]: time="2024-12-13T01:56:33.789802324Z" level=error msg="ContainerStatus for \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\": not found" Dec 13 01:56:33.790195 kubelet[3623]: E1213 01:56:33.790147 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\": not found" containerID="427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa" Dec 13 01:56:33.790340 kubelet[3623]: I1213 01:56:33.790212 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa"} err="failed to get container status 
\"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"427fc4bbc3dc344180e6b6ec5262f07dd5e92c94fd97dd9bb0bd5e994e0471aa\": not found" Dec 13 01:56:33.790340 kubelet[3623]: I1213 01:56:33.790236 3623 scope.go:117] "RemoveContainer" containerID="700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3" Dec 13 01:56:33.790917 containerd[2130]: time="2024-12-13T01:56:33.790649512Z" level=error msg="ContainerStatus for \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\": not found" Dec 13 01:56:33.791264 kubelet[3623]: E1213 01:56:33.791119 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\": not found" containerID="700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3" Dec 13 01:56:33.791264 kubelet[3623]: I1213 01:56:33.791172 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3"} err="failed to get container status \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"700623e4ea5767597f37a923689274926c5ea801effa32a8e6a057252e2525b3\": not found" Dec 13 01:56:33.791264 kubelet[3623]: I1213 01:56:33.791194 3623 scope.go:117] "RemoveContainer" containerID="92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e" Dec 13 01:56:33.791884 containerd[2130]: time="2024-12-13T01:56:33.791813464Z" level=error msg="ContainerStatus for \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\": not found" Dec 13 01:56:33.792247 kubelet[3623]: E1213 01:56:33.792216 3623 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\": not found" containerID="92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e" Dec 13 01:56:33.792360 kubelet[3623]: I1213 01:56:33.792297 3623 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e"} err="failed to get container status \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\": rpc error: code = NotFound desc = an error occurred when try to find container \"92bdb123f074ffbcaeab59d6dded4aa3f23558445010a8cb6fe3321af29be36e\": not found" Dec 13 01:56:34.138295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e-rootfs.mount: Deactivated successfully. Dec 13 01:56:34.138591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:34.138899 systemd[1]: var-lib-kubelet-pods-6f2564b3\x2d56fc\x2d41f6\x2da120\x2d0d6592df6011-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mjg6.mount: Deactivated successfully. Dec 13 01:56:34.139126 systemd[1]: var-lib-kubelet-pods-6f2564b3\x2d56fc\x2d41f6\x2da120\x2d0d6592df6011-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 01:56:34.139358 systemd[1]: var-lib-kubelet-pods-66f41f96\x2dc2f9\x2d4a02\x2db5d0\x2dd7bc38745efa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df22kz.mount: Deactivated successfully. Dec 13 01:56:34.139630 systemd[1]: var-lib-kubelet-pods-6f2564b3\x2d56fc\x2d41f6\x2da120\x2d0d6592df6011-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:56:34.174160 containerd[2130]: time="2024-12-13T01:56:34.174092354Z" level=info msg="StopPodSandbox for \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\"" Dec 13 01:56:34.176036 containerd[2130]: time="2024-12-13T01:56:34.174238490Z" level=info msg="TearDown network for sandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" successfully" Dec 13 01:56:34.176036 containerd[2130]: time="2024-12-13T01:56:34.174276326Z" level=info msg="StopPodSandbox for \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" returns successfully" Dec 13 01:56:34.176036 containerd[2130]: time="2024-12-13T01:56:34.175389302Z" level=info msg="RemovePodSandbox for \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\"" Dec 13 01:56:34.176036 containerd[2130]: time="2024-12-13T01:56:34.175443362Z" level=info msg="Forcibly stopping sandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\"" Dec 13 01:56:34.176036 containerd[2130]: time="2024-12-13T01:56:34.175866638Z" level=info msg="TearDown network for sandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" successfully" Dec 13 01:56:34.181676 containerd[2130]: time="2024-12-13T01:56:34.181571150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
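[Annotation] The mount-unit names above are systemd's escaping of the kubelet volume paths: the leading "/" is dropped, every other "/" becomes "-", and bytes outside a small safe set are hex-escaped, so the "-" inside pod UIDs becomes \x2d and the "~" in kubernetes.io~projected becomes \x7e. Below is a minimal sketch of that transform, covering only the characters these units need; the full rules live in systemd.unit(5).

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeMountUnit applies the subset of systemd's unit-name escaping
    // visible above: strip the leading '/', map other '/' to '-', keep
    // [a-zA-Z0-9_.], and hex-escape everything else as \xXX.
    func escapeMountUnit(path string) string {
        path = strings.TrimPrefix(path, "/")
        var b strings.Builder
        for i := 0; i < len(path); i++ {
            c := path[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == '_', c == '.':
                b.WriteByte(c)
            default: // '-', '~', ...
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String() + ".mount"
    }

    func main() {
        // Reproduces the hubble-tls mount unit from the journal entry above.
        fmt.Println(escapeMountUnit(
            "/var/lib/kubelet/pods/6f2564b3-56fc-41f6-a120-0d6592df6011/volumes/kubernetes.io~projected/hubble-tls"))
    }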
Dec 13 01:56:34.181930 containerd[2130]: time="2024-12-13T01:56:34.181742414Z" level=info msg="RemovePodSandbox \"2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775\" returns successfully" Dec 13 01:56:34.182928 containerd[2130]: time="2024-12-13T01:56:34.182881358Z" level=info msg="StopPodSandbox for \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\"" Dec 13 01:56:34.183076 containerd[2130]: time="2024-12-13T01:56:34.183022994Z" level=info msg="TearDown network for sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" successfully" Dec 13 01:56:34.183076 containerd[2130]: time="2024-12-13T01:56:34.183049346Z" level=info msg="StopPodSandbox for \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" returns successfully" Dec 13 01:56:34.184012 containerd[2130]: time="2024-12-13T01:56:34.183695930Z" level=info msg="RemovePodSandbox for \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\"" Dec 13 01:56:34.184012 containerd[2130]: time="2024-12-13T01:56:34.183778130Z" level=info msg="Forcibly stopping sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\"" Dec 13 01:56:34.184012 containerd[2130]: time="2024-12-13T01:56:34.183886178Z" level=info msg="TearDown network for sandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" successfully" Dec 13 01:56:34.189320 containerd[2130]: time="2024-12-13T01:56:34.189197294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:56:34.189493 containerd[2130]: time="2024-12-13T01:56:34.189362762Z" level=info msg="RemovePodSandbox \"fd7ea2803e84c5f52ae53c0d1f318ec07bf84752baca34f3629c1bef847d117e\" returns successfully" Dec 13 01:56:34.197587 kubelet[3623]: I1213 01:56:34.197525 3623 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="66f41f96-c2f9-4a02-b5d0-d7bc38745efa" path="/var/lib/kubelet/pods/66f41f96-c2f9-4a02-b5d0-d7bc38745efa/volumes" Dec 13 01:56:34.198956 kubelet[3623]: I1213 01:56:34.198894 3623 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" path="/var/lib/kubelet/pods/6f2564b3-56fc-41f6-a120-0d6592df6011/volumes" Dec 13 01:56:34.473695 kubelet[3623]: E1213 01:56:34.472803 3623 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:56:35.051915 sshd[5267]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:35.061207 systemd[1]: sshd@26-172.31.16.194:22-139.178.68.195:44722.service: Deactivated successfully. Dec 13 01:56:35.069629 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:56:35.071654 systemd-logind[2107]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:56:35.078798 systemd-logind[2107]: Removed session 27. Dec 13 01:56:35.089814 systemd[1]: Started sshd@27-172.31.16.194:22-139.178.68.195:44724.service - OpenSSH per-connection server daemon (139.178.68.195:44724). 
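[Annotation] StopPodSandbox tears a sandbox's networking down before stopping the pause container, which is why each "TearDown network ... successfully" precedes the sandbox's "returns successfully". The 01:56:34 entries are a second pass: the kubelet garbage-collects the sandboxes by forcibly re-stopping and then removing them, and the "not found ... Sending the event with nil podSandboxStatus" warning only means the shim was already gone. Sketched as raw CRI calls below; the socket path is containerd's default and the sandbox ID is taken from the log, and both calls are idempotent, which is why the forced re-stop succeeds.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        id := "2fb89bdb4bc78669f9e873aa3fab0d30c78a0f0993ebf34a9136b3ae11f3c775"
        // Stop tears down the network namespace, then stops the pause container.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
        // Remove deletes the sandbox record; safe to call even if already stopped.
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
    }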
Dec 13 01:56:35.277360 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 44724 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:35.280419 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:35.289235 systemd-logind[2107]: New session 28 of user core. Dec 13 01:56:35.295262 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 01:56:35.437009 ntpd[2087]: Deleting interface #10 lxc_health, fe80::3810:afff:fe26:862%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Dec 13 01:56:35.437588 ntpd[2087]: 13 Dec 01:56:35 ntpd[2087]: Deleting interface #10 lxc_health, fe80::3810:afff:fe26:862%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Dec 13 01:56:36.721909 sshd[5438]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:36.731933 kubelet[3623]: I1213 01:56:36.726789 3623 topology_manager.go:215] "Topology Admit Handler" podUID="ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af" podNamespace="kube-system" podName="cilium-rs22j" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726877 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="clean-cilium-state" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726901 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66f41f96-c2f9-4a02-b5d0-d7bc38745efa" containerName="cilium-operator" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726919 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="mount-cgroup" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726937 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="apply-sysctl-overwrites" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726955 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="mount-bpf-fs" Dec 13 01:56:36.731933 kubelet[3623]: E1213 01:56:36.726973 3623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="cilium-agent" Dec 13 01:56:36.731933 kubelet[3623]: I1213 01:56:36.727018 3623 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f41f96-c2f9-4a02-b5d0-d7bc38745efa" containerName="cilium-operator" Dec 13 01:56:36.731933 kubelet[3623]: I1213 01:56:36.727036 3623 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f2564b3-56fc-41f6-a120-0d6592df6011" containerName="cilium-agent" Dec 13 01:56:36.734403 systemd[1]: sshd@27-172.31.16.194:22-139.178.68.195:44724.service: Deactivated successfully. Dec 13 01:56:36.759246 systemd-logind[2107]: Session 28 logged out. Waiting for processes to exit. Dec 13 01:56:36.760443 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 01:56:36.791057 systemd[1]: Started sshd@28-172.31.16.194:22-139.178.68.195:58380.service - OpenSSH per-connection server daemon (139.178.68.195:58380). Dec 13 01:56:36.795777 systemd-logind[2107]: Removed session 28. 
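[Annotation] Before admitting cilium-rs22j, the kubelet's CPU and memory managers sweep out per-container accounting left behind by the two pods that were just deleted; that is every "RemoveStaleState: removing container" line above. The toy sketch below is illustrative only, not kubelet source: it shows the shape of that sweep, with state keyed by pod UID and container name and pruned against the set of pods that still exist.

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops resource-manager entries whose pod is gone,
    // the step logged above just before the new pod is admitted.
    func removeStaleState(state map[key]string, livePods map[string]bool) {
        for k := range state {
            if !livePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
                delete(state, k)
            }
        }
    }

    func main() {
        state := map[key]string{
            {"6f2564b3-56fc-41f6-a120-0d6592df6011", "cilium-agent"}:    "assigned",
            {"66f41f96-c2f9-4a02-b5d0-d7bc38745efa", "cilium-operator"}: "assigned",
            {"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af", "cilium-agent"}:    "assigned",
        }
        live := map[string]bool{"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af": true}
        removeStaleState(state, live)
    }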
Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894661 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-cilium-ipsec-secrets\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894737 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-cilium-run\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894782 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-hostproc\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894824 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-lib-modules\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894866 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-host-proc-sys-kernel\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.895693 kubelet[3623]: I1213 01:56:36.894912 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnpxd\" (UniqueName: \"kubernetes.io/projected/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-kube-api-access-bnpxd\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.894953 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-xtables-lock\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.894994 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-hubble-tls\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.895070 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-cilium-cgroup\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.895116 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-cilium-config-path\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.895161 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-host-proc-sys-net\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896090 kubelet[3623]: I1213 01:56:36.895206 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-etc-cni-netd\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896389 kubelet[3623]: I1213 01:56:36.895253 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-clustermesh-secrets\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896389 kubelet[3623]: I1213 01:56:36.895295 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-bpf-maps\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:36.896389 kubelet[3623]: I1213 01:56:36.895336 3623 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af-cni-path\") pod \"cilium-rs22j\" (UID: \"ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af\") " pod="kube-system/cilium-rs22j" Dec 13 01:56:37.006976 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 58380 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:37.013199 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:37.033158 systemd-logind[2107]: New session 29 of user core. Dec 13 01:56:37.063751 containerd[2130]: time="2024-12-13T01:56:37.061966660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs22j,Uid:ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:37.070262 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 13 01:56:37.114742 containerd[2130]: time="2024-12-13T01:56:37.114095548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:37.114742 containerd[2130]: time="2024-12-13T01:56:37.114181828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:37.114742 containerd[2130]: time="2024-12-13T01:56:37.114207616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:37.114742 containerd[2130]: time="2024-12-13T01:56:37.114374620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:37.188200 containerd[2130]: time="2024-12-13T01:56:37.187925753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rs22j,Uid:ea2ce57f-fc2f-42b0-af2a-0e29cebfb5af,Namespace:kube-system,Attempt:0,} returns sandbox id \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\"" Dec 13 01:56:37.195915 containerd[2130]: time="2024-12-13T01:56:37.195690125Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:56:37.220501 sshd[5453]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:37.222222 containerd[2130]: time="2024-12-13T01:56:37.222168869Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bf805de6655b916f5da84a150b54f5661bea1b1ebdbbff959b622cf3b909139\"" Dec 13 01:56:37.224891 containerd[2130]: time="2024-12-13T01:56:37.224062385Z" level=info msg="StartContainer for \"0bf805de6655b916f5da84a150b54f5661bea1b1ebdbbff959b622cf3b909139\"" Dec 13 01:56:37.231562 systemd[1]: sshd@28-172.31.16.194:22-139.178.68.195:58380.service: Deactivated successfully. Dec 13 01:56:37.238817 systemd-logind[2107]: Session 29 logged out. Waiting for processes to exit. Dec 13 01:56:37.238968 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 01:56:37.254991 systemd[1]: Started sshd@29-172.31.16.194:22-139.178.68.195:58388.service - OpenSSH per-connection server daemon (139.178.68.195:58388). Dec 13 01:56:37.259546 systemd-logind[2107]: Removed session 29. Dec 13 01:56:37.313261 kubelet[3623]: I1213 01:56:37.312681 3623 setters.go:568] "Node became not ready" node="ip-172-31-16-194" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:56:37Z","lastTransitionTime":"2024-12-13T01:56:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:56:37.349700 containerd[2130]: time="2024-12-13T01:56:37.347858034Z" level=info msg="StartContainer for \"0bf805de6655b916f5da84a150b54f5661bea1b1ebdbbff959b622cf3b909139\" returns successfully" Dec 13 01:56:37.427454 containerd[2130]: time="2024-12-13T01:56:37.427137558Z" level=info msg="shim disconnected" id=0bf805de6655b916f5da84a150b54f5661bea1b1ebdbbff959b622cf3b909139 namespace=k8s.io Dec 13 01:56:37.427454 containerd[2130]: time="2024-12-13T01:56:37.427281510Z" level=warning msg="cleaning up after shim disconnected" id=0bf805de6655b916f5da84a150b54f5661bea1b1ebdbbff959b622cf3b909139 namespace=k8s.io Dec 13 01:56:37.427454 containerd[2130]: time="2024-12-13T01:56:37.427304910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:37.458790 sshd[5511]: Accepted publickey for core from 139.178.68.195 port 58388 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:37.461733 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:37.470596 systemd-logind[2107]: New session 30 of user core. Dec 13 01:56:37.482227 systemd[1]: Started session-30.scope - Session 30 of User core. 
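[Annotation] While the replacement agent pod starts, the node goes NotReady: with no Cilium agent running there is no CNI config left in /etc/cni/net.d, the runtime reports NetworkReady=false, and setters.go flips the node's Ready condition as shown above. The same condition can be read back with client-go; the node name is taken from the log, while the kubeconfig path is an assumption and in-cluster config would work the same way.

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ip-172-31-16-194", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // While the CNI plugin initializes this prints:
        //   Ready=False reason=KubeletNotReady message=container runtime network not ready ...
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
            }
        }
    }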
Dec 13 01:56:37.743115 containerd[2130]: time="2024-12-13T01:56:37.743041880Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:56:37.764099 containerd[2130]: time="2024-12-13T01:56:37.763788812Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1535cfe3ecedc7ddb64b21c350c1a5d60ec239753231173436d3325b560d0b3\"" Dec 13 01:56:37.765671 containerd[2130]: time="2024-12-13T01:56:37.764566772Z" level=info msg="StartContainer for \"d1535cfe3ecedc7ddb64b21c350c1a5d60ec239753231173436d3325b560d0b3\"" Dec 13 01:56:37.862677 containerd[2130]: time="2024-12-13T01:56:37.862530140Z" level=info msg="StartContainer for \"d1535cfe3ecedc7ddb64b21c350c1a5d60ec239753231173436d3325b560d0b3\" returns successfully" Dec 13 01:56:37.916630 containerd[2130]: time="2024-12-13T01:56:37.916504688Z" level=info msg="shim disconnected" id=d1535cfe3ecedc7ddb64b21c350c1a5d60ec239753231173436d3325b560d0b3 namespace=k8s.io Dec 13 01:56:37.916630 containerd[2130]: time="2024-12-13T01:56:37.916593296Z" level=warning msg="cleaning up after shim disconnected" id=d1535cfe3ecedc7ddb64b21c350c1a5d60ec239753231173436d3325b560d0b3 namespace=k8s.io Dec 13 01:56:37.916929 containerd[2130]: time="2024-12-13T01:56:37.916642640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:37.936703 containerd[2130]: time="2024-12-13T01:56:37.936641277Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:56:38.749807 containerd[2130]: time="2024-12-13T01:56:38.749563437Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:56:38.788916 containerd[2130]: time="2024-12-13T01:56:38.788833377Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c837766a5a597e6df2eee3ec742ad6e63adcfe33ebc40bc82c4321e802d3f48\"" Dec 13 01:56:38.790775 containerd[2130]: time="2024-12-13T01:56:38.790029393Z" level=info msg="StartContainer for \"4c837766a5a597e6df2eee3ec742ad6e63adcfe33ebc40bc82c4321e802d3f48\"" Dec 13 01:56:38.898896 containerd[2130]: time="2024-12-13T01:56:38.898821861Z" level=info msg="StartContainer for \"4c837766a5a597e6df2eee3ec742ad6e63adcfe33ebc40bc82c4321e802d3f48\" returns successfully" Dec 13 01:56:38.950997 containerd[2130]: time="2024-12-13T01:56:38.950911978Z" level=info msg="shim disconnected" id=4c837766a5a597e6df2eee3ec742ad6e63adcfe33ebc40bc82c4321e802d3f48 namespace=k8s.io Dec 13 01:56:38.951463 containerd[2130]: time="2024-12-13T01:56:38.951411838Z" level=warning msg="cleaning up after shim disconnected" id=4c837766a5a597e6df2eee3ec742ad6e63adcfe33ebc40bc82c4321e802d3f48 namespace=k8s.io Dec 13 01:56:38.951629 containerd[2130]: time="2024-12-13T01:56:38.951584410Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:39.037641 systemd[1]: 
Dec 13 01:56:39.474058 kubelet[3623]: E1213 01:56:39.473989 3623 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:56:39.762104 containerd[2130]: time="2024-12-13T01:56:39.761964190Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:56:39.807680 containerd[2130]: time="2024-12-13T01:56:39.804410350Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a\""
Dec 13 01:56:39.809985 containerd[2130]: time="2024-12-13T01:56:39.809867686Z" level=info msg="StartContainer for \"48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a\""
Dec 13 01:56:39.909425 containerd[2130]: time="2024-12-13T01:56:39.909273838Z" level=info msg="StartContainer for \"48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a\" returns successfully"
Dec 13 01:56:39.948100 containerd[2130]: time="2024-12-13T01:56:39.947990759Z" level=info msg="shim disconnected" id=48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a namespace=k8s.io
Dec 13 01:56:39.948100 containerd[2130]: time="2024-12-13T01:56:39.948088211Z" level=warning msg="cleaning up after shim disconnected" id=48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a namespace=k8s.io
Dec 13 01:56:39.948480 containerd[2130]: time="2024-12-13T01:56:39.948110375Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:40.037993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48818b67d6696bdbd6383459457f0b2773b11a9748c3af74c25d08d4bd396d2a-rootfs.mount: Deactivated successfully.
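[Note: The node stays NotReady here because no CNI configuration exists until the cilium-agent container created next installs one; the kubelet.go:2892 message repeats until then. A minimal client-go sketch for reading that Ready condition; the node name comes from the log, while the kubeconfig path is an assumption.]

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; in-cluster config would also work.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Node name taken from the "Node became not ready" log line.
        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "ip-172-31-16-194", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // While the CNI plugin is uninitialized this prints
                // Ready=False reason=KubeletNotReady, as in the log.
                fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
            }
        }
    }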
Dec 13 01:56:40.766987 containerd[2130]: time="2024-12-13T01:56:40.766921583Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:56:40.792790 containerd[2130]: time="2024-12-13T01:56:40.792722435Z" level=info msg="CreateContainer within sandbox \"c057a79e2ed526bc6025b2bbc7427933e245c3a3b60f33ebe9e37d00c2b590f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2eaad1481f47a79c21a82be4e8e8f63a42bb41a6c153fe7d30b2949416a413d\""
Dec 13 01:56:40.794749 containerd[2130]: time="2024-12-13T01:56:40.794697959Z" level=info msg="StartContainer for \"e2eaad1481f47a79c21a82be4e8e8f63a42bb41a6c153fe7d30b2949416a413d\""
Dec 13 01:56:40.989850 containerd[2130]: time="2024-12-13T01:56:40.989782728Z" level=info msg="StartContainer for \"e2eaad1481f47a79c21a82be4e8e8f63a42bb41a6c153fe7d30b2949416a413d\" returns successfully"
Dec 13 01:56:41.195285 kubelet[3623]: E1213 01:56:41.193184 3623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-658tp" podUID="7dc27534-080e-4b6e-b6f3-eb8222d2f473"
Dec 13 01:56:41.763683 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Dec 13 01:56:41.809511 kubelet[3623]: I1213 01:56:41.809451 3623 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rs22j" podStartSLOduration=5.8093907 podStartE2EDuration="5.8093907s" podCreationTimestamp="2024-12-13 01:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:41.80594964 +0000 UTC m=+127.872411060" watchObservedRunningTime="2024-12-13 01:56:41.8093907 +0000 UTC m=+127.875852108"
Dec 13 01:56:43.193689 kubelet[3623]: E1213 01:56:43.193542 3623 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-658tp" podUID="7dc27534-080e-4b6e-b6f3-eb8222d2f473"
Dec 13 01:56:44.142235 kubelet[3623]: E1213 01:56:44.142159 3623 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38782->127.0.0.1:43559: write tcp 127.0.0.1:38782->127.0.0.1:43559: write: connection reset by peer
Dec 13 01:56:46.054238 systemd-networkd[1692]: lxc_health: Link UP
Dec 13 01:56:46.063171 systemd-networkd[1692]: lxc_health: Gained carrier
Dec 13 01:56:46.079380 (udev-worker)[6296]: Network interface NamePolicy= disabled on kernel command line.
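[Note: The startup-latency line can be checked by hand: podStartSLOduration=5.8093907 is watchObservedRunningTime minus podCreationTimestamp, and the zero-valued pulling timestamps just mean no image pull was needed. A small self-contained Go check of that arithmetic:]

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        // Timestamps copied from the kubelet line above; time.Parse accepts the
        // fractional seconds even though the layout omits them.
        created, err := time.Parse(layout, "2024-12-13 01:56:36 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2024-12-13 01:56:41.8093907 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 5.8093907s == podStartSLOduration
    }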
Dec 13 01:56:46.564357 kubelet[3623]: E1213 01:56:46.563954 3623 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38798->127.0.0.1:43559: write tcp 127.0.0.1:38798->127.0.0.1:43559: write: broken pipe
Dec 13 01:56:47.749817 systemd-networkd[1692]: lxc_health: Gained IPv6LL
Dec 13 01:56:50.438001 ntpd[2087]: Listen normally on 13 lxc_health [fe80::60fa:75ff:fefb:efd3%14]:123
Dec 13 01:56:50.438749 ntpd[2087]: 13 Dec 01:56:50 ntpd[2087]: Listen normally on 13 lxc_health [fe80::60fa:75ff:fefb:efd3%14]:123
Dec 13 01:56:51.280249 kubelet[3623]: E1213 01:56:51.279594 3623 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38808->127.0.0.1:43559: write tcp 127.0.0.1:38808->127.0.0.1:43559: write: connection reset by peer
Dec 13 01:56:53.545236 kubelet[3623]: E1213 01:56:53.543432 3623 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47636->127.0.0.1:43559: write tcp 127.0.0.1:47636->127.0.0.1:43559: write: broken pipe
Dec 13 01:56:53.573825 sshd[5511]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:53.583117 systemd[1]: sshd@29-172.31.16.194:22-139.178.68.195:58388.service: Deactivated successfully.
Dec 13 01:56:53.585333 systemd-logind[2107]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:56:53.602909 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:56:53.612029 systemd-logind[2107]: Removed session 30.
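[Note: In the ntpd lines above, %14 in fe80::60fa:75ff:fefb:efd3%14 is the IPv6 scope zone, here the interface index of lxc_health. Go's net/netip models zoned addresses directly; a small sketch, using the interface name as the zone purely for illustration:]

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // ntpd printed the numeric zone %14; a name such as %lxc_health is the
        // equivalent form once the index is resolved (assumed here).
        addr, err := netip.ParseAddr("fe80::60fa:75ff:fefb:efd3%lxc_health")
        if err != nil {
            panic(err)
        }
        ap := netip.AddrPortFrom(addr, 123) // the NTP port from the log line
        fmt.Println(ap, "link-local:", addr.IsLinkLocalUnicast(), "zone:", addr.Zone())
    }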