Dec 13 13:14:15.152298 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 13:14:15.152344 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 13:14:15.152368 kernel: KASLR disabled due to lack of seed
Dec 13 13:14:15.152384 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:14:15.152399 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Dec 13 13:14:15.152414 kernel: secureboot: Secure boot disabled
Dec 13 13:14:15.152431 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:14:15.152446 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 13:14:15.152461 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 13:14:15.152476 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 13:14:15.152496 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 13:14:15.152511 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 13:14:15.152526 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 13:14:15.152541 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 13:14:15.152559 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 13:14:15.152579 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 13:14:15.152596 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 13:14:15.152611 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 13:14:15.152627 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 13:14:15.152642 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 13:14:15.152658 kernel: printk: bootconsole [uart0] enabled
Dec 13 13:14:15.152673 kernel: NUMA: Failed to initialise from firmware
Dec 13 13:14:15.152689 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 13:14:15.152705 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 13:14:15.152721 kernel: Zone ranges:
Dec 13 13:14:15.152736 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 13:14:15.152756 kernel: DMA32 empty
Dec 13 13:14:15.152772 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 13:14:15.152787 kernel: Movable zone start for each node
Dec 13 13:14:15.152802 kernel: Early memory node ranges
Dec 13 13:14:15.152818 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 13:14:15.152833 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 13:14:15.152849 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 13:14:15.152864 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 13:14:15.152879 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 13:14:15.152895 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 13:14:15.152910 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 13:14:15.152926 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 13:14:15.152945 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 13:14:15.152962 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 13:14:15.152984 kernel: psci: probing for conduit method from ACPI.
Dec 13 13:14:15.153001 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 13:14:15.153017 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 13:14:15.153038 kernel: psci: Trusted OS migration not required
Dec 13 13:14:15.153055 kernel: psci: SMC Calling Convention v1.1
Dec 13 13:14:15.153071 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 13:14:15.153088 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 13:14:15.153105 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 13:14:15.153121 kernel: Detected PIPT I-cache on CPU0
Dec 13 13:14:15.153137 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 13:14:15.153154 kernel: CPU features: detected: Spectre-v2
Dec 13 13:14:15.153170 kernel: CPU features: detected: Spectre-v3a
Dec 13 13:14:15.153186 kernel: CPU features: detected: Spectre-BHB
Dec 13 13:14:15.153203 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 13:14:15.154348 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 13:14:15.154382 kernel: alternatives: applying boot alternatives
Dec 13 13:14:15.154401 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:14:15.154421 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:14:15.154438 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:14:15.154455 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:14:15.154471 kernel: Fallback order for Node 0: 0
Dec 13 13:14:15.154488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 13:14:15.154525 kernel: Policy zone: Normal
Dec 13 13:14:15.154543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:14:15.154559 kernel: software IO TLB: area num 2.
Dec 13 13:14:15.154582 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 13:14:15.154600 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Dec 13 13:14:15.154617 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 13:14:15.154633 kernel: trace event string verifier disabled
Dec 13 13:14:15.154649 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:14:15.154667 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:14:15.154684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 13:14:15.154701 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:14:15.154718 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:14:15.154734 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:14:15.154751 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 13:14:15.154772 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 13:14:15.154789 kernel: GICv3: 96 SPIs implemented
Dec 13 13:14:15.154806 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 13:14:15.154822 kernel: Root IRQ handler: gic_handle_irq
Dec 13 13:14:15.154838 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 13:14:15.154854 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 13:14:15.154871 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 13:14:15.154887 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 13:14:15.154904 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 13:14:15.154921 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 13:14:15.154937 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 13:14:15.154954 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 13:14:15.154975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:14:15.154992 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 13:14:15.155008 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 13:14:15.155025 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 13:14:15.155042 kernel: Console: colour dummy device 80x25
Dec 13 13:14:15.155059 kernel: printk: console [tty1] enabled
Dec 13 13:14:15.155076 kernel: ACPI: Core revision 20230628
Dec 13 13:14:15.155093 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 13:14:15.155110 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:14:15.155127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:14:15.155149 kernel: landlock: Up and running.
Dec 13 13:14:15.155165 kernel: SELinux: Initializing.
Dec 13 13:14:15.155182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:14:15.155199 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:14:15.155236 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:14:15.155256 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:14:15.155274 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:14:15.155291 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:14:15.155308 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 13:14:15.155331 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 13:14:15.155348 kernel: Remapping and enabling EFI services.
Dec 13 13:14:15.155365 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:14:15.155382 kernel: Detected PIPT I-cache on CPU1
Dec 13 13:14:15.155399 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 13:14:15.155416 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 13:14:15.155433 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 13:14:15.155450 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:14:15.155467 kernel: SMP: Total of 2 processors activated.
Dec 13 13:14:15.156301 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 13:14:15.156329 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 13:14:15.156368 kernel: CPU features: detected: CRC32 instructions
Dec 13 13:14:15.156394 kernel: CPU: All CPU(s) started at EL1
Dec 13 13:14:15.156412 kernel: alternatives: applying system-wide alternatives
Dec 13 13:14:15.156431 kernel: devtmpfs: initialized
Dec 13 13:14:15.156451 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:14:15.156469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 13:14:15.156488 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:14:15.156512 kernel: SMBIOS 3.0.0 present.
Dec 13 13:14:15.156530 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 13:14:15.156549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:14:15.156568 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 13:14:15.156587 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 13:14:15.156606 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 13:14:15.156624 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:14:15.156646 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Dec 13 13:14:15.156666 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:14:15.156685 kernel: cpuidle: using governor menu
Dec 13 13:14:15.156704 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 13:14:15.156722 kernel: ASID allocator initialised with 65536 entries
Dec 13 13:14:15.156742 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:14:15.156760 kernel: Serial: AMBA PL011 UART driver
Dec 13 13:14:15.156779 kernel: Modules: 17360 pages in range for non-PLT usage
Dec 13 13:14:15.156796 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 13:14:15.156821 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:14:15.156844 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:14:15.156862 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 13:14:15.156882 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 13:14:15.156900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:14:15.156918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:14:15.156936 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 13:14:15.156954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 13:14:15.156972 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:14:15.156990 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:14:15.157013 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:14:15.157030 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:14:15.157048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:14:15.157066 kernel: ACPI: Interpreter enabled
Dec 13 13:14:15.157084 kernel: ACPI: Using GIC for interrupt routing
Dec 13 13:14:15.157102 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 13:14:15.157120 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 13:14:15.157516 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:14:15.157734 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:14:15.157938 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:14:15.158136 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 13:14:15.159428 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 13:14:15.159469 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 13:14:15.159489 kernel: acpiphp: Slot [1] registered
Dec 13 13:14:15.159508 kernel: acpiphp: Slot [2] registered
Dec 13 13:14:15.159526 kernel: acpiphp: Slot [3] registered
Dec 13 13:14:15.159554 kernel: acpiphp: Slot [4] registered
Dec 13 13:14:15.159571 kernel: acpiphp: Slot [5] registered
Dec 13 13:14:15.159589 kernel: acpiphp: Slot [6] registered
Dec 13 13:14:15.159607 kernel: acpiphp: Slot [7] registered
Dec 13 13:14:15.159625 kernel: acpiphp: Slot [8] registered
Dec 13 13:14:15.159642 kernel: acpiphp: Slot [9] registered
Dec 13 13:14:15.159660 kernel: acpiphp: Slot [10] registered
Dec 13 13:14:15.159678 kernel: acpiphp: Slot [11] registered
Dec 13 13:14:15.159695 kernel: acpiphp: Slot [12] registered
Dec 13 13:14:15.159718 kernel: acpiphp: Slot [13] registered
Dec 13 13:14:15.159736 kernel: acpiphp: Slot [14] registered
Dec 13 13:14:15.159754 kernel: acpiphp: Slot [15] registered
Dec 13 13:14:15.159771 kernel: acpiphp: Slot [16] registered
Dec 13 13:14:15.159789 kernel: acpiphp: Slot [17] registered
Dec 13 13:14:15.159806 kernel: acpiphp: Slot [18] registered
Dec 13 13:14:15.159823 kernel: acpiphp: Slot [19] registered
Dec 13 13:14:15.159841 kernel: acpiphp: Slot [20] registered
Dec 13 13:14:15.159858 kernel: acpiphp: Slot [21] registered
Dec 13 13:14:15.159876 kernel: acpiphp: Slot [22] registered
Dec 13 13:14:15.159899 kernel: acpiphp: Slot [23] registered
Dec 13 13:14:15.159917 kernel: acpiphp: Slot [24] registered
Dec 13 13:14:15.159935 kernel: acpiphp: Slot [25] registered
Dec 13 13:14:15.159952 kernel: acpiphp: Slot [26] registered
Dec 13 13:14:15.159970 kernel: acpiphp: Slot [27] registered
Dec 13 13:14:15.159987 kernel: acpiphp: Slot [28] registered
Dec 13 13:14:15.160005 kernel: acpiphp: Slot [29] registered
Dec 13 13:14:15.160022 kernel: acpiphp: Slot [30] registered
Dec 13 13:14:15.160040 kernel: acpiphp: Slot [31] registered
Dec 13 13:14:15.160062 kernel: PCI host bridge to bus 0000:00
Dec 13 13:14:15.161433 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 13:14:15.161641 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 13:14:15.161818 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 13:14:15.161992 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 13:14:15.162240 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 13:14:15.162485 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 13:14:15.162723 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 13:14:15.162942 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 13:14:15.163146 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 13:14:15.166459 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 13:14:15.166728 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 13:14:15.166936 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 13:14:15.167148 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 13:14:15.167380 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 13:14:15.167584 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 13:14:15.167787 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 13:14:15.167990 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 13:14:15.168192 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 13:14:15.169501 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 13:14:15.169722 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 13:14:15.169906 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 13:14:15.170086 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 13:14:15.170287 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 13:14:15.170314 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 13:14:15.170333 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 13:14:15.170352 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 13:14:15.170371 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 13:14:15.170395 kernel: iommu: Default domain type: Translated
Dec 13 13:14:15.170414 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 13:14:15.170431 kernel: efivars: Registered efivars operations
Dec 13 13:14:15.170449 kernel: vgaarb: loaded
Dec 13 13:14:15.170466 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 13:14:15.170484 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:14:15.170520 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:14:15.170540 kernel: pnp: PnP ACPI init
Dec 13 13:14:15.170758 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 13:14:15.170791 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 13:14:15.170810 kernel: NET: Registered PF_INET protocol family
Dec 13 13:14:15.170828 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:14:15.170847 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:14:15.170865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:14:15.170885 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:14:15.170905 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:14:15.170925 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:14:15.170950 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:14:15.170970 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:14:15.170988 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:14:15.171006 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:14:15.171025 kernel: kvm [1]: HYP mode not available
Dec 13 13:14:15.171043 kernel: Initialise system trusted keyrings
Dec 13 13:14:15.171061 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:14:15.171079 kernel: Key type asymmetric registered
Dec 13 13:14:15.171098 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:14:15.171120 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 13:14:15.171139 kernel: io scheduler mq-deadline registered
Dec 13 13:14:15.171157 kernel: io scheduler kyber registered
Dec 13 13:14:15.171176 kernel: io scheduler bfq registered
Dec 13 13:14:15.172520 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 13:14:15.172560 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 13:14:15.172579 kernel: ACPI: button: Power Button [PWRB]
Dec 13 13:14:15.172597 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 13:14:15.172615 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 13:14:15.172641 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:14:15.172660 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 13:14:15.172872 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 13:14:15.172899 kernel: printk: console [ttyS0] disabled
Dec 13 13:14:15.172918 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 13:14:15.172936 kernel: printk: console [ttyS0] enabled
Dec 13 13:14:15.172954 kernel: printk: bootconsole [uart0] disabled
Dec 13 13:14:15.172972 kernel: thunder_xcv, ver 1.0
Dec 13 13:14:15.172989 kernel: thunder_bgx, ver 1.0
Dec 13 13:14:15.173014 kernel: nicpf, ver 1.0
Dec 13 13:14:15.173032 kernel: nicvf, ver 1.0
Dec 13 13:14:15.173299 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 13:14:15.173594 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:14:14 UTC (1734095654)
Dec 13 13:14:15.173621 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:14:15.173640 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 13:14:15.173658 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 13:14:15.173676 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 13:14:15.173704 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:14:15.173722 kernel: Segment Routing with IPv6
Dec 13 13:14:15.173739 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:14:15.173758 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:14:15.173776 kernel: Key type dns_resolver registered
Dec 13 13:14:15.173794 kernel: registered taskstats version 1
Dec 13 13:14:15.173813 kernel: Loading compiled-in X.509 certificates
Dec 13 13:14:15.173831 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78'
Dec 13 13:14:15.173849 kernel: Key type .fscrypt registered
Dec 13 13:14:15.173871 kernel: Key type fscrypt-provisioning registered
Dec 13 13:14:15.173888 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:14:15.173906 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:14:15.173924 kernel: ima: No architecture policies found
Dec 13 13:14:15.173942 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 13:14:15.173959 kernel: clk: Disabling unused clocks
Dec 13 13:14:15.173977 kernel: Freeing unused kernel memory: 39936K
Dec 13 13:14:15.173994 kernel: Run /init as init process
Dec 13 13:14:15.174011 kernel: with arguments:
Dec 13 13:14:15.174033 kernel: /init
Dec 13 13:14:15.174051 kernel: with environment:
Dec 13 13:14:15.174068 kernel: HOME=/
Dec 13 13:14:15.174085 kernel: TERM=linux
Dec 13 13:14:15.174103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:14:15.174124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:14:15.174147 systemd[1]: Detected virtualization amazon.
Dec 13 13:14:15.174167 systemd[1]: Detected architecture arm64.
Dec 13 13:14:15.174190 systemd[1]: Running in initrd.
Dec 13 13:14:15.175245 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:14:15.175272 systemd[1]: Hostname set to .
Dec 13 13:14:15.175293 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:14:15.175312 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:14:15.175331 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:14:15.175351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:14:15.175371 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:14:15.175398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:14:15.175417 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:14:15.175437 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:14:15.175459 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:14:15.175479 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:14:15.175499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:14:15.175524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:14:15.175544 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:14:15.175563 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:14:15.175582 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:14:15.175601 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:14:15.175620 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:14:15.175639 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:14:15.175659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:14:15.175678 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:14:15.175702 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:14:15.175722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:14:15.175741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:14:15.175760 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:14:15.175779 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:14:15.175798 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:14:15.175817 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:14:15.175837 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:14:15.175856 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:14:15.175879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:14:15.175899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:14:15.175918 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:14:15.175937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:14:15.175956 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:14:15.176012 systemd-journald[250]: Collecting audit messages is disabled.
Dec 13 13:14:15.176061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:14:15.176091 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:14:15.176120 systemd-journald[250]: Journal started
Dec 13 13:14:15.176165 systemd-journald[250]: Runtime Journal (/run/log/journal/ec24b56a43a2961c24fb2b557e3fda34) is 8.0M, max 75.3M, 67.3M free.
Dec 13 13:14:15.179989 kernel: Bridge firewalling registered
Dec 13 13:14:15.143506 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 13:14:15.179189 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 13:14:15.186338 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:14:15.188233 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:14:15.191053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:14:15.197822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:14:15.218655 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:14:15.226445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:14:15.234168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:14:15.242179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:14:15.269524 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:14:15.290069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:14:15.299623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:14:15.307017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:14:15.314118 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:14:15.329652 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:14:15.355733 dracut-cmdline[290]: dracut-dracut-053
Dec 13 13:14:15.363356 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 13:14:15.390021 systemd-resolved[287]: Positive Trust Anchors:
Dec 13 13:14:15.390056 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:14:15.390121 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:14:15.528232 kernel: SCSI subsystem initialized
Dec 13 13:14:15.535250 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:14:15.547257 kernel: iscsi: registered transport (tcp)
Dec 13 13:14:15.569245 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:14:15.569319 kernel: QLogic iSCSI HBA Driver
Dec 13 13:14:15.634249 kernel: random: crng init done
Dec 13 13:14:15.634539 systemd-resolved[287]: Defaulting to hostname 'linux'.
Dec 13 13:14:15.636854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:14:15.640869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:15.665841 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:14:15.677528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:14:15.710638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:14:15.710716 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:14:15.710743 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:14:15.777274 kernel: raid6: neonx8 gen() 6654 MB/s Dec 13 13:14:15.794243 kernel: raid6: neonx4 gen() 6583 MB/s Dec 13 13:14:15.811244 kernel: raid6: neonx2 gen() 5463 MB/s Dec 13 13:14:15.828244 kernel: raid6: neonx1 gen() 3963 MB/s Dec 13 13:14:15.845244 kernel: raid6: int64x8 gen() 3632 MB/s Dec 13 13:14:15.862247 kernel: raid6: int64x4 gen() 3716 MB/s Dec 13 13:14:15.879243 kernel: raid6: int64x2 gen() 3613 MB/s Dec 13 13:14:15.896979 kernel: raid6: int64x1 gen() 2765 MB/s Dec 13 13:14:15.897017 kernel: raid6: using algorithm neonx8 gen() 6654 MB/s Dec 13 13:14:15.914969 kernel: raid6: .... xor() 4696 MB/s, rmw enabled Dec 13 13:14:15.915010 kernel: raid6: using neon recovery algorithm Dec 13 13:14:15.923016 kernel: xor: measuring software checksum speed Dec 13 13:14:15.923073 kernel: 8regs : 12945 MB/sec Dec 13 13:14:15.924242 kernel: 32regs : 11970 MB/sec Dec 13 13:14:15.926242 kernel: arm64_neon : 8950 MB/sec Dec 13 13:14:15.926286 kernel: xor: using function: 8regs (12945 MB/sec) Dec 13 13:14:16.009261 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:14:16.028101 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:14:16.038549 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:14:16.081848 systemd-udevd[471]: Using default interface naming scheme 'v255'. 
Dec 13 13:14:16.091046 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:14:16.106030 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:14:16.141358 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Dec 13 13:14:16.197273 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:16.212431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:14:16.322690 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:16.348262 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:14:16.396272 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:16.411887 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:16.416435 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:16.421430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:14:16.446543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:14:16.488955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:16.520893 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 13:14:16.520966 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 13:14:16.540959 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 13:14:16.541268 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 13:14:16.541508 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:20:2f:99:b6:b1 Dec 13 13:14:16.542968 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:14:16.551798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 13:14:16.552053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:16.558586 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:16.567409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:16.570019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:16.574412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:16.588880 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 13:14:16.588947 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 13:14:16.590717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:16.600458 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 13:14:16.607526 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:14:16.607610 kernel: GPT:9289727 != 16777215 Dec 13 13:14:16.608775 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:14:16.608820 kernel: GPT:9289727 != 16777215 Dec 13 13:14:16.608845 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:14:16.608920 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:14:16.620774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:16.635718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:16.675524 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:14:16.690251 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (528) Dec 13 13:14:16.745231 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (527) Dec 13 13:14:16.797727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 13:14:16.819663 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 13:14:16.848170 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 13:14:16.873896 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 13:14:16.874070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 13:14:16.896572 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:14:16.909054 disk-uuid[662]: Primary Header is updated. Dec 13 13:14:16.909054 disk-uuid[662]: Secondary Entries is updated. Dec 13 13:14:16.909054 disk-uuid[662]: Secondary Header is updated. Dec 13 13:14:16.918247 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:14:17.938427 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:14:17.940425 disk-uuid[663]: The operation has completed successfully. Dec 13 13:14:18.118136 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:14:18.118390 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:14:18.168511 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:14:18.177677 sh[924]: Success Dec 13 13:14:18.202336 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:14:18.330559 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Dec 13 13:14:18.339083 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:14:18.343843 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:14:18.382588 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:14:18.382660 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:18.382688 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:14:18.385465 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:14:18.385502 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:14:18.479254 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 13:14:18.492435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:14:18.496418 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:14:18.507487 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:14:18.513504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:14:18.549338 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:18.549423 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:18.551117 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:14:18.558248 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:14:18.575375 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:14:18.579536 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:18.590290 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 13:14:18.613633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:14:18.687263 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:14:18.712472 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:14:18.755809 systemd-networkd[1128]: lo: Link UP Dec 13 13:14:18.755832 systemd-networkd[1128]: lo: Gained carrier Dec 13 13:14:18.759582 systemd-networkd[1128]: Enumeration completed Dec 13 13:14:18.761186 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:14:18.765016 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:18.765122 systemd-networkd[1128]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:14:18.765459 systemd[1]: Reached target network.target - Network. Dec 13 13:14:18.776786 systemd-networkd[1128]: eth0: Link UP Dec 13 13:14:18.776799 systemd-networkd[1128]: eth0: Gained carrier Dec 13 13:14:18.776816 systemd-networkd[1128]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:18.804351 systemd-networkd[1128]: eth0: DHCPv4 address 172.31.29.1/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:14:19.026998 ignition[1067]: Ignition 2.20.0 Dec 13 13:14:19.027028 ignition[1067]: Stage: fetch-offline Dec 13 13:14:19.027516 ignition[1067]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:19.027541 ignition[1067]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:19.030190 ignition[1067]: Ignition finished successfully Dec 13 13:14:19.037348 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:14:19.045538 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:14:19.078551 ignition[1138]: Ignition 2.20.0 Dec 13 13:14:19.079061 ignition[1138]: Stage: fetch Dec 13 13:14:19.079707 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:19.079733 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:19.079944 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:19.092091 ignition[1138]: PUT result: OK Dec 13 13:14:19.095119 ignition[1138]: parsed url from cmdline: "" Dec 13 13:14:19.095142 ignition[1138]: no config URL provided Dec 13 13:14:19.095160 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:14:19.095185 ignition[1138]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:14:19.095284 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:19.098894 ignition[1138]: PUT result: OK Dec 13 13:14:19.100560 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 13:14:19.106471 ignition[1138]: GET result: OK Dec 13 13:14:19.107652 ignition[1138]: parsing config with SHA512: 4879f91d17e57afd2bad7315e5cabe64d098f0dc4c6e7dc724e810ade6ce7ba117957f46174caf87ba0b82ef7eb366569d4157e82e7d3432feaeda4786280f8f Dec 13 13:14:19.115190 unknown[1138]: fetched base config from "system" Dec 13 13:14:19.115644 unknown[1138]: fetched base config from "system" Dec 13 13:14:19.116333 ignition[1138]: fetch: fetch complete Dec 13 13:14:19.115671 unknown[1138]: fetched user config from "aws" Dec 13 13:14:19.116344 ignition[1138]: fetch: fetch passed Dec 13 13:14:19.116447 ignition[1138]: Ignition finished successfully Dec 13 13:14:19.124844 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:14:19.139500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 13:14:19.162303 ignition[1144]: Ignition 2.20.0 Dec 13 13:14:19.162329 ignition[1144]: Stage: kargs Dec 13 13:14:19.163595 ignition[1144]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:19.163630 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:19.163784 ignition[1144]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:19.164781 ignition[1144]: PUT result: OK Dec 13 13:14:19.175912 ignition[1144]: kargs: kargs passed Dec 13 13:14:19.176022 ignition[1144]: Ignition finished successfully Dec 13 13:14:19.179947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:14:19.190566 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:14:19.214666 ignition[1150]: Ignition 2.20.0 Dec 13 13:14:19.214696 ignition[1150]: Stage: disks Dec 13 13:14:19.216247 ignition[1150]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:19.216284 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:19.216679 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:19.223906 ignition[1150]: PUT result: OK Dec 13 13:14:19.228106 ignition[1150]: disks: disks passed Dec 13 13:14:19.228269 ignition[1150]: Ignition finished successfully Dec 13 13:14:19.232952 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:14:19.235335 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:19.238324 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:14:19.242137 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:14:19.244035 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:14:19.246180 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:14:19.265126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 13:14:19.306990 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:14:19.312514 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:14:19.329559 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:14:19.409233 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:14:19.410169 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:14:19.413659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:14:19.424404 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:14:19.436791 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:14:19.440792 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:14:19.440881 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:14:19.440930 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:19.460048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:14:19.475507 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1177) Dec 13 13:14:19.475573 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:19.475600 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:19.476908 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:14:19.476467 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:14:19.486674 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:14:19.488508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:14:19.821091 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:14:19.841078 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:14:19.858776 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:14:19.866753 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:14:20.154387 systemd-networkd[1128]: eth0: Gained IPv6LL Dec 13 13:14:20.264719 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:14:20.274421 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:14:20.288567 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:14:20.305379 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:20.305181 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:14:20.340670 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:14:20.356849 ignition[1290]: INFO : Ignition 2.20.0 Dec 13 13:14:20.356849 ignition[1290]: INFO : Stage: mount Dec 13 13:14:20.359999 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:20.359999 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:20.364054 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:20.366819 ignition[1290]: INFO : PUT result: OK Dec 13 13:14:20.372930 ignition[1290]: INFO : mount: mount passed Dec 13 13:14:20.374785 ignition[1290]: INFO : Ignition finished successfully Dec 13 13:14:20.380264 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:14:20.393780 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:14:20.418602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 13:14:20.444254 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1302) Dec 13 13:14:20.448107 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:20.448147 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:20.448173 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:14:20.454257 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:14:20.457414 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:14:20.490640 ignition[1319]: INFO : Ignition 2.20.0 Dec 13 13:14:20.490640 ignition[1319]: INFO : Stage: files Dec 13 13:14:20.493902 ignition[1319]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:20.493902 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:20.493902 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:20.500296 ignition[1319]: INFO : PUT result: OK Dec 13 13:14:20.504687 ignition[1319]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:14:20.508422 ignition[1319]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:14:20.508422 ignition[1319]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:14:20.527184 ignition[1319]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:14:20.530023 ignition[1319]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:14:20.532775 unknown[1319]: wrote ssh authorized keys file for user: core Dec 13 13:14:20.535045 ignition[1319]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:14:20.539596 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:20.543230 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:20.614477 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:14:20.901294 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:20.901294 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:20.908143 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:21.421026 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:14:21.597126 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:21.597126 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:21.604918 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 13:14:22.030425 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:14:22.375947 ignition[1319]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(c): op(d): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:22.380032 ignition[1319]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:22.396366 ignition[1319]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:22.396366 ignition[1319]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:22.396366 ignition[1319]: INFO : files: files passed Dec 13 13:14:22.396366 ignition[1319]: INFO : Ignition finished successfully Dec 13 13:14:22.409259 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:14:22.419518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:14:22.432581 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:14:22.442206 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:14:22.444421 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 13:14:22.461196 initrd-setup-root-after-ignition[1347]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:22.464612 initrd-setup-root-after-ignition[1347]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:22.467923 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:22.472368 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:22.477095 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:14:22.496589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:14:22.545659 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:14:22.546031 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:14:22.553250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:14:22.555155 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:14:22.555911 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:14:22.562108 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:14:22.606136 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:22.620614 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:14:22.644095 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:22.646556 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:22.652868 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:14:22.656144 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Dec 13 13:14:22.656403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:22.663302 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:14:22.667631 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:14:22.669620 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:14:22.675630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:22.677896 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:22.680138 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:14:22.682742 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:22.692935 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:14:22.694990 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:14:22.697792 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:14:22.703528 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:14:22.703755 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:22.706281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:14:22.713681 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:14:22.720026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:14:22.723470 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:14:22.725876 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:14:22.726100 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:22.735873 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 13 13:14:22.736327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:22.743866 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:14:22.744274 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:14:22.759676 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:14:22.763854 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:14:22.764253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:14:22.784691 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:14:22.787843 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:14:22.791457 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:22.798652 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:14:22.802562 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:22.816262 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:14:22.819442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:14:22.828857 ignition[1371]: INFO : Ignition 2.20.0 Dec 13 13:14:22.831423 ignition[1371]: INFO : Stage: umount Dec 13 13:14:22.831423 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:22.831423 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:14:22.836910 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:14:22.840002 ignition[1371]: INFO : PUT result: OK Dec 13 13:14:22.844770 ignition[1371]: INFO : umount: umount passed Dec 13 13:14:22.849529 ignition[1371]: INFO : Ignition finished successfully Dec 13 13:14:22.848814 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 13:14:22.849002 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:14:22.863550 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:14:22.864686 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:14:22.864842 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:14:22.870570 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:14:22.870674 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:14:22.872663 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:14:22.872744 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:14:22.874691 systemd[1]: Stopped target network.target - Network.
Dec 13 13:14:22.877385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:14:22.877498 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:14:22.881363 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:14:22.884406 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:14:22.886072 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:14:22.888412 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:14:22.891674 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:14:22.893725 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:14:22.893807 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:14:22.896930 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:14:22.897001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:14:22.898881 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:14:22.898963 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:14:22.900850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:14:22.900934 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:14:22.903238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:14:22.906114 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:14:22.909388 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:14:22.909575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:14:22.912511 systemd-networkd[1128]: eth0: DHCPv6 lease lost
Dec 13 13:14:22.915274 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:14:22.916030 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:14:22.919900 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:14:22.920110 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:14:22.927811 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:14:22.927933 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:14:22.946516 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:14:22.949279 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:14:22.949403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:14:22.952020 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:14:22.961454 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:14:22.961675 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:14:22.971325 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:14:22.971817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:14:22.989893 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:14:22.990000 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:14:22.992585 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:14:22.992665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:14:23.037839 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:14:23.038123 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:14:23.046098 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:14:23.047989 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:14:23.052834 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:14:23.053329 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:14:23.060196 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:14:23.060317 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:14:23.062582 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:14:23.062666 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:14:23.066314 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:14:23.066401 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:14:23.086607 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:14:23.089286 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:14:23.089394 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:14:23.091776 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:14:23.091867 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:14:23.095102 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:14:23.095316 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:14:23.120539 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:14:23.122963 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:14:23.129169 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:14:23.137500 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:14:23.165206 systemd[1]: Switching root.
Dec 13 13:14:23.212352 systemd-journald[250]: Journal stopped
Dec 13 13:14:25.561538 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:14:25.561675 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:14:25.561720 kernel: SELinux: policy capability open_perms=1
Dec 13 13:14:25.561751 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:14:25.561790 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:14:25.561820 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:14:25.561850 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:14:25.561886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:14:25.561916 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:14:25.561944 kernel: audit: type=1403 audit(1734095663.749:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:14:25.561988 systemd[1]: Successfully loaded SELinux policy in 65.054ms.
Dec 13 13:14:25.562031 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.880ms.
Dec 13 13:14:25.562065 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:14:25.562095 systemd[1]: Detected virtualization amazon.
Dec 13 13:14:25.562123 systemd[1]: Detected architecture arm64.
Dec 13 13:14:25.562152 systemd[1]: Detected first boot.
Dec 13 13:14:25.562187 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:14:25.562238 zram_generator::config[1414]: No configuration found.
Dec 13 13:14:25.562287 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:14:25.562316 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:14:25.562347 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:14:25.562377 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:14:25.562409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:14:25.562439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:14:25.562496 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:14:25.562533 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:14:25.562563 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:14:25.562592 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:14:25.562625 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:14:25.562655 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:14:25.562685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:14:25.562719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:14:25.562748 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:14:25.562782 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:14:25.562813 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:14:25.562842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:14:25.562871 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:14:25.562902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:14:25.562930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:14:25.562959 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:14:25.562988 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:14:25.563023 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:14:25.563051 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:14:25.563081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:14:25.563109 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:14:25.563139 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:14:25.563168 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:14:25.563196 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:14:25.564257 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:14:25.564292 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:14:25.564327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:14:25.564355 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:14:25.564392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:14:25.564423 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:14:25.564453 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:14:25.564481 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:14:25.564512 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:14:25.564542 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:14:25.564575 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:14:25.564604 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:14:25.564632 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:14:25.564664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:14:25.564692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:14:25.564721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:14:25.564750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:14:25.564778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:14:25.564808 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:14:25.564840 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:14:25.564870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:14:25.564901 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:14:25.564932 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:14:25.564961 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:14:25.564989 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:14:25.565022 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:14:25.565050 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:14:25.565082 kernel: fuse: init (API version 7.39)
Dec 13 13:14:25.565109 kernel: loop: module loaded
Dec 13 13:14:25.565138 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:14:25.565166 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:14:25.565197 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:14:25.565298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:14:25.565332 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:14:25.565360 systemd[1]: Stopped verity-setup.service.
Dec 13 13:14:25.565388 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:14:25.565422 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:14:25.565493 systemd-journald[1499]: Collecting audit messages is disabled.
Dec 13 13:14:25.565558 kernel: ACPI: bus type drm_connector registered
Dec 13 13:14:25.565591 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:14:25.565621 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:14:25.565651 systemd-journald[1499]: Journal started
Dec 13 13:14:25.565706 systemd-journald[1499]: Runtime Journal (/run/log/journal/ec24b56a43a2961c24fb2b557e3fda34) is 8.0M, max 75.3M, 67.3M free.
Dec 13 13:14:24.997286 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:14:25.052600 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 13:14:25.053390 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:14:25.573973 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:14:25.579531 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:14:25.588449 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:14:25.595327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:14:25.599203 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:14:25.599580 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:14:25.602511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:14:25.602830 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:14:25.605956 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:14:25.608371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:14:25.611073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:14:25.611542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:14:25.615124 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:14:25.615516 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:14:25.618449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:14:25.618790 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:14:25.623230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:14:25.625981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:14:25.632921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:14:25.638144 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:14:25.663109 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:14:25.675444 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:14:25.681469 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:14:25.685466 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:14:25.685543 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:14:25.689441 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:14:25.706107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:14:25.715585 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:14:25.717882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:14:25.723535 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:14:25.735537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:14:25.737897 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:14:25.743547 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:14:25.746444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:14:25.749935 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:14:25.760665 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:14:25.769530 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:14:25.777185 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:14:25.779977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:14:25.793606 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:14:25.827591 systemd-journald[1499]: Time spent on flushing to /var/log/journal/ec24b56a43a2961c24fb2b557e3fda34 is 45.408ms for 910 entries.
Dec 13 13:14:25.827591 systemd-journald[1499]: System Journal (/var/log/journal/ec24b56a43a2961c24fb2b557e3fda34) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:14:25.895992 systemd-journald[1499]: Received client request to flush runtime journal.
Dec 13 13:14:25.896257 kernel: loop0: detected capacity change from 0 to 116784
Dec 13 13:14:25.830863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:14:25.833571 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:14:25.846574 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:14:25.901873 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:14:25.919368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:14:25.922980 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:14:25.941166 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:14:25.957311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:14:25.969353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:14:25.971626 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:14:25.997555 udevadm[1560]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:14:26.004287 kernel: loop1: detected capacity change from 0 to 194096
Dec 13 13:14:26.008157 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:14:26.018597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:14:26.096039 systemd-tmpfiles[1562]: ACLs are not supported, ignoring.
Dec 13 13:14:26.096703 systemd-tmpfiles[1562]: ACLs are not supported, ignoring.
Dec 13 13:14:26.110318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:14:26.149896 kernel: loop2: detected capacity change from 0 to 53784
Dec 13 13:14:26.207293 kernel: loop3: detected capacity change from 0 to 113552
Dec 13 13:14:26.337241 kernel: loop4: detected capacity change from 0 to 116784
Dec 13 13:14:26.352114 kernel: loop5: detected capacity change from 0 to 194096
Dec 13 13:14:26.385260 kernel: loop6: detected capacity change from 0 to 53784
Dec 13 13:14:26.411650 kernel: loop7: detected capacity change from 0 to 113552
Dec 13 13:14:26.430081 (sd-merge)[1568]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 13:14:26.432593 (sd-merge)[1568]: Merged extensions into '/usr'.
Dec 13 13:14:26.445165 systemd[1]: Reloading requested from client PID 1543 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:14:26.445191 systemd[1]: Reloading...
Dec 13 13:14:26.612241 zram_generator::config[1598]: No configuration found.
Dec 13 13:14:26.966879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:14:27.078492 systemd[1]: Reloading finished in 631 ms.
Dec 13 13:14:27.128307 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:14:27.131665 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:14:27.154659 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:14:27.165108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:14:27.173610 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:14:27.199885 systemd[1]: Reloading requested from client PID 1647 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:14:27.199920 systemd[1]: Reloading...
Dec 13 13:14:27.242016 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:14:27.243764 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:14:27.249306 systemd-tmpfiles[1648]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:14:27.249917 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Dec 13 13:14:27.250067 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Dec 13 13:14:27.273574 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:14:27.274081 systemd-tmpfiles[1648]: Skipping /boot
Dec 13 13:14:27.309875 systemd-tmpfiles[1648]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:14:27.310108 systemd-tmpfiles[1648]: Skipping /boot
Dec 13 13:14:27.331791 systemd-udevd[1649]: Using default interface naming scheme 'v255'.
Dec 13 13:14:27.423075 zram_generator::config[1684]: No configuration found.
Dec 13 13:14:27.472155 ldconfig[1538]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:14:27.599261 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1701)
Dec 13 13:14:27.609781 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1701)
Dec 13 13:14:27.619935 (udev-worker)[1697]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 13:14:27.791271 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1697)
Dec 13 13:14:27.813733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:14:27.970570 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:14:27.971687 systemd[1]: Reloading finished in 771 ms.
Dec 13 13:14:28.001171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:14:28.005380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:14:28.028064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:14:28.082733 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:14:28.091555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:14:28.102586 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:14:28.117614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:14:28.126605 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:14:28.133877 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:14:28.190891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:14:28.194969 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:14:28.201649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:14:28.207840 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:14:28.209962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:14:28.216921 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:14:28.225448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:14:28.264002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:14:28.279114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:14:28.281246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:14:28.281842 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:14:28.286298 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:14:28.292322 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:14:28.303293 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:14:28.315297 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:14:28.355198 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:14:28.356108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:14:28.370605 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:14:28.372332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:14:28.377124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:14:28.381906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:14:28.385143 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:14:28.385466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:14:28.395794 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:14:28.399433 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:14:28.408433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 13:14:28.430976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:14:28.434038 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:14:28.437026 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:14:28.440945 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:14:28.461693 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:14:28.464382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:14:28.476417 augenrules[1891]: No rules
Dec 13 13:14:28.478416 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:14:28.479911 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:14:28.501083 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:14:28.519787 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:14:28.523539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:14:28.555304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:14:28.560718 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:14:28.567579 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:14:28.573316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:14:28.597546 lvm[1907]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:14:28.636363 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:14:28.673759 systemd-networkd[1848]: lo: Link UP
Dec 13 13:14:28.673774 systemd-networkd[1848]: lo: Gained carrier
Dec 13 13:14:28.677120 systemd-networkd[1848]: Enumeration completed
Dec 13 13:14:28.677526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:14:28.680835 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:14:28.682409 systemd-networkd[1848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:14:28.684554 systemd-networkd[1848]: eth0: Link UP Dec 13 13:14:28.685173 systemd-networkd[1848]: eth0: Gained carrier Dec 13 13:14:28.685229 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:28.686594 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:14:28.694374 systemd-networkd[1848]: eth0: DHCPv4 address 172.31.29.1/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:14:28.711392 systemd-resolved[1849]: Positive Trust Anchors: Dec 13 13:14:28.711434 systemd-resolved[1849]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:14:28.711498 systemd-resolved[1849]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:14:28.730568 systemd-resolved[1849]: Defaulting to hostname 'linux'. Dec 13 13:14:28.733663 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:14:28.735906 systemd[1]: Reached target network.target - Network. Dec 13 13:14:28.737596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:28.739889 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:14:28.742075 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 13 13:14:28.744504 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:14:28.747098 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:14:28.749231 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:14:28.751524 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:14:28.753763 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:14:28.753819 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:14:28.755490 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:14:28.758744 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:14:28.763563 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:14:28.774664 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:14:28.777822 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:14:28.780282 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:14:28.782126 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:14:28.784201 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:14:28.784272 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:14:28.792422 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:14:28.798598 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:14:28.803581 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:14:28.808483 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 13:14:28.815551 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:14:28.818405 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:14:28.829572 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:14:28.835417 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 13:14:28.842423 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:14:28.847572 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 13:14:28.855614 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:14:28.860347 jq[1918]: false Dec 13 13:14:28.861711 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:14:28.871801 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:14:28.884666 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:14:28.885625 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:14:28.894666 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:14:28.900457 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:14:28.906970 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:14:28.907354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:14:28.954133 jq[1931]: true Dec 13 13:14:28.985320 jq[1943]: true Dec 13 13:14:29.018010 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 13:14:29.020256 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:14:29.046876 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:14:29.047320 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:14:29.069968 dbus-daemon[1917]: [system] SELinux support is enabled Dec 13 13:14:29.089885 extend-filesystems[1919]: Found loop4 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found loop5 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found loop6 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found loop7 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found nvme0n1 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found nvme0n1p1 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found nvme0n1p2 Dec 13 13:14:29.089885 extend-filesystems[1919]: Found nvme0n1p3 Dec 13 13:14:29.089796 dbus-daemon[1917]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1848 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 13:14:29.189328 extend-filesystems[1919]: Found usr Dec 13 13:14:29.189328 extend-filesystems[1919]: Found nvme0n1p4 Dec 13 13:14:29.189328 extend-filesystems[1919]: Found nvme0n1p6 Dec 13 13:14:29.189328 extend-filesystems[1919]: Found nvme0n1p7 Dec 13 13:14:29.189328 extend-filesystems[1919]: Found nvme0n1p9 Dec 13 13:14:29.189328 extend-filesystems[1919]: Checking size of /dev/nvme0n1p9 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: ---------------------------------------------------- Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: ntp-4 is maintained by Network 
Time Foundation, Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: corporation. Support and training for ntp-4 are Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: available at https://www.nwtime.org/support Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: ---------------------------------------------------- Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: proto: precision = 0.096 usec (-23) Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: basedate set to 2024-12-01 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: gps base set to 2024-12-01 (week 2343) Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listen normally on 3 eth0 172.31.29.1:123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listen normally on 4 lo [::1]:123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: bind(21) AF_INET6 fe80::420:2fff:fe99:b6b1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: unable to create socket on eth0 (5) for fe80::420:2fff:fe99:b6b1%2#123 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: failed to init interface for address fe80::420:2fff:fe99:b6b1%2 Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: Listening on routing socket on fd #21 for interface updates Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:14:29.212439 ntpd[1921]: 13 Dec 13:14:29 ntpd[1921]: 
kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:14:29.219226 tar[1945]: linux-arm64/helm Dec 13 13:14:29.219617 update_engine[1929]: I20241213 13:14:29.105933 1929 main.cc:92] Flatcar Update Engine starting Dec 13 13:14:29.219617 update_engine[1929]: I20241213 13:14:29.121131 1929 update_check_scheduler.cc:74] Next update check in 8m31s Dec 13 13:14:29.096853 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:14:29.115329 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:14:29.105854 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:14:29.129929 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:14:29.105910 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:14:29.129974 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:14:29.108403 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:14:29.129994 ntpd[1921]: ---------------------------------------------------- Dec 13 13:14:29.108438 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:14:29.130013 ntpd[1921]: ntp-4 is maintained by Network Time Foundation, Dec 13 13:14:29.121015 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:14:29.130032 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:14:29.130801 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:14:29.130050 ntpd[1921]: corporation. 
Support and training for ntp-4 are Dec 13 13:14:29.142116 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 13:14:29.130068 ntpd[1921]: available at https://www.nwtime.org/support Dec 13 13:14:29.153514 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:14:29.130086 ntpd[1921]: ---------------------------------------------------- Dec 13 13:14:29.158764 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 13:14:29.135163 ntpd[1921]: proto: precision = 0.096 usec (-23) Dec 13 13:14:29.136154 ntpd[1921]: basedate set to 2024-12-01 Dec 13 13:14:29.136184 ntpd[1921]: gps base set to 2024-12-01 (week 2343) Dec 13 13:14:29.144421 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:14:29.144500 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:14:29.146894 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:14:29.146964 ntpd[1921]: Listen normally on 3 eth0 172.31.29.1:123 Dec 13 13:14:29.147068 ntpd[1921]: Listen normally on 4 lo [::1]:123 Dec 13 13:14:29.147179 ntpd[1921]: bind(21) AF_INET6 fe80::420:2fff:fe99:b6b1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:14:29.147276 ntpd[1921]: unable to create socket on eth0 (5) for fe80::420:2fff:fe99:b6b1%2#123 Dec 13 13:14:29.147312 ntpd[1921]: failed to init interface for address fe80::420:2fff:fe99:b6b1%2 Dec 13 13:14:29.147374 ntpd[1921]: Listening on routing socket on fd #21 for interface updates Dec 13 13:14:29.160169 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:14:29.174365 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:14:29.252374 extend-filesystems[1919]: Resized partition /dev/nvme0n1p9 Dec 13 13:14:29.267236 extend-filesystems[1985]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:14:29.274248 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 13:14:29.335994 systemd[1]: Created slice system-sshd.slice - Slice 
/system/sshd. Dec 13 13:14:29.382534 systemd-logind[1926]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:14:29.382569 systemd-logind[1926]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 13:14:29.393390 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 13:14:29.388470 systemd-logind[1926]: New seat seat0. Dec 13 13:14:29.409522 coreos-metadata[1916]: Dec 13 13:14:29.390 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:14:29.409522 coreos-metadata[1916]: Dec 13 13:14:29.398 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 13:14:29.416500 coreos-metadata[1916]: Dec 13 13:14:29.415 INFO Fetch successful Dec 13 13:14:29.416500 coreos-metadata[1916]: Dec 13 13:14:29.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 13:14:29.416663 extend-filesystems[1985]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 13:14:29.416663 extend-filesystems[1985]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:14:29.416663 extend-filesystems[1985]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 13:14:29.448404 extend-filesystems[1919]: Resized filesystem in /dev/nvme0n1p9 Dec 13 13:14:29.419093 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.417 INFO Fetch successful Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.417 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.427 INFO Fetch successful Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.440 INFO Fetch successful Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.452 INFO Fetch failed with 404: resource not found Dec 13 13:14:29.452945 coreos-metadata[1916]: Dec 13 13:14:29.452 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 13:14:29.453351 bash[1986]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:14:29.421845 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:14:29.423537 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:14:29.436297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Dec 13 13:14:29.469774 coreos-metadata[1916]: Dec 13 13:14:29.463 INFO Fetch successful Dec 13 13:14:29.469774 coreos-metadata[1916]: Dec 13 13:14:29.463 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 13:14:29.469774 coreos-metadata[1916]: Dec 13 13:14:29.467 INFO Fetch successful Dec 13 13:14:29.469774 coreos-metadata[1916]: Dec 13 13:14:29.467 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 13:14:29.470691 coreos-metadata[1916]: Dec 13 13:14:29.470 INFO Fetch successful Dec 13 13:14:29.470691 coreos-metadata[1916]: Dec 13 13:14:29.470 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 13:14:29.473821 coreos-metadata[1916]: Dec 13 13:14:29.473 INFO Fetch successful Dec 13 13:14:29.473821 coreos-metadata[1916]: Dec 13 13:14:29.473 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 13:14:29.477325 coreos-metadata[1916]: Dec 13 13:14:29.474 INFO Fetch successful Dec 13 13:14:29.539594 systemd[1]: Starting sshkeys.service... Dec 13 13:14:29.603870 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:14:29.611888 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:14:29.640430 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1694) Dec 13 13:14:29.672881 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:14:29.678534 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 13 13:14:29.723995 locksmithd[1968]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:14:29.783981 containerd[1955]: time="2024-12-13T13:14:29.777163763Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:14:29.842387 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 13:14:29.842861 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 13:14:29.854724 dbus-daemon[1917]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1967 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:14:29.866106 containerd[1955]: time="2024-12-13T13:14:29.866028240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.917923 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.946421688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.946508616Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.946545828Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.946871916Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.946916856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947048532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947077608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947456016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947498448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947531016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:14:29.949817 containerd[1955]: time="2024-12-13T13:14:29.947555412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.950442 containerd[1955]: time="2024-12-13T13:14:29.947749944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:14:29.950442 containerd[1955]: time="2024-12-13T13:14:29.948154428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:14:29.962769 containerd[1955]: time="2024-12-13T13:14:29.961927440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:14:29.962769 containerd[1955]: time="2024-12-13T13:14:29.961979748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:14:29.962769 containerd[1955]: time="2024-12-13T13:14:29.962274156Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:14:29.962769 containerd[1955]: time="2024-12-13T13:14:29.962384856Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.977685168Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.977797656Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.977833116Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.977931744Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.977965248Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:14:29.978486 containerd[1955]: time="2024-12-13T13:14:29.978296904Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.980670108Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.980954628Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.980992884Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.981026436Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.981060960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.981119088Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.983444 containerd[1955]: time="2024-12-13T13:14:29.981161328Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 13:14:29.983881 coreos-metadata[2024]: Dec 13 13:14:29.983 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:14:29.990233 coreos-metadata[2024]: Dec 13 13:14:29.986 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 13:14:29.991770 coreos-metadata[2024]: Dec 13 13:14:29.991 INFO Fetch successful Dec 13 13:14:29.991770 coreos-metadata[2024]: Dec 13 13:14:29.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.981196548Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993051348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993098340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993187356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993247536Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993296844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993331764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993369564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993400620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993431220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993461784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993489528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993519300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.997243 containerd[1955]: time="2024-12-13T13:14:29.993548472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.996384 polkitd[2089]: Started polkitd version 121 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993595908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993626088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993659664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993690096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993721872Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993768120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993799056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.993825684Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.996367344Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.996433356Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.996462036Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.996491268Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:14:29.998378 containerd[1955]: time="2024-12-13T13:14:29.996520584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Dec 13 13:14:29.998946 containerd[1955]: time="2024-12-13T13:14:29.996552444Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:14:29.998946 containerd[1955]: time="2024-12-13T13:14:29.996583944Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:14:29.998946 containerd[1955]: time="2024-12-13T13:14:29.996616320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:14:30.006036 coreos-metadata[2024]: Dec 13 13:14:29.999 INFO Fetch successful Dec 13 13:14:30.006759 unknown[2024]: wrote ssh authorized keys file for user: core Dec 13 13:14:30.011483 containerd[1955]: time="2024-12-13T13:14:29.997194888Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d 
NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:14:30.011483 containerd[1955]: time="2024-12-13T13:14:30.010360269Z" level=info msg="Connect containerd service" Dec 13 13:14:30.011483 containerd[1955]: time="2024-12-13T13:14:30.010450425Z" level=info msg="using legacy CRI server" Dec 13 13:14:30.011483 containerd[1955]: time="2024-12-13T13:14:30.010491165Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:14:30.011483 containerd[1955]: time="2024-12-13T13:14:30.010746369Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:14:30.017263 containerd[1955]: time="2024-12-13T13:14:30.013933725Z" level=error msg="failed to load cni during init, please 
check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019481085Z" level=info msg="Start subscribing containerd event" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019572357Z" level=info msg="Start recovering state" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019703013Z" level=info msg="Start event monitor" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019727985Z" level=info msg="Start snapshots syncer" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019749537Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:14:30.020322 containerd[1955]: time="2024-12-13T13:14:30.019769337Z" level=info msg="Start streaming server" Dec 13 13:14:30.037386 containerd[1955]: time="2024-12-13T13:14:30.034875945Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:14:30.037386 containerd[1955]: time="2024-12-13T13:14:30.035054865Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:14:30.037386 containerd[1955]: time="2024-12-13T13:14:30.035255601Z" level=info msg="containerd successfully booted in 0.263585s" Dec 13 13:14:30.043274 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:14:30.046711 polkitd[2089]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:14:30.046844 polkitd[2089]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:14:30.049700 polkitd[2089]: Finished loading, compiling and executing 2 rules Dec 13 13:14:30.056057 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:14:30.057188 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 13 13:14:30.062822 polkitd[2089]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:14:30.083907 update-ssh-keys[2100]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:14:30.086101 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:14:30.095082 systemd[1]: Finished sshkeys.service. Dec 13 13:14:30.112849 systemd-resolved[1849]: System hostname changed to 'ip-172-31-29-1'. Dec 13 13:14:30.112854 systemd-hostnamed[1967]: Hostname set to (transient) Dec 13 13:14:30.130691 ntpd[1921]: bind(24) AF_INET6 fe80::420:2fff:fe99:b6b1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:14:30.131912 ntpd[1921]: 13 Dec 13:14:30 ntpd[1921]: bind(24) AF_INET6 fe80::420:2fff:fe99:b6b1%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:14:30.131912 ntpd[1921]: 13 Dec 13:14:30 ntpd[1921]: unable to create socket on eth0 (6) for fe80::420:2fff:fe99:b6b1%2#123 Dec 13 13:14:30.131912 ntpd[1921]: 13 Dec 13:14:30 ntpd[1921]: failed to init interface for address fe80::420:2fff:fe99:b6b1%2 Dec 13 13:14:30.130757 ntpd[1921]: unable to create socket on eth0 (6) for fe80::420:2fff:fe99:b6b1%2#123 Dec 13 13:14:30.130785 ntpd[1921]: failed to init interface for address fe80::420:2fff:fe99:b6b1%2 Dec 13 13:14:30.202437 systemd-networkd[1848]: eth0: Gained IPv6LL Dec 13 13:14:30.209638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:14:30.215169 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:14:30.226728 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 13:14:30.241551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:30.247850 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Dec 13 13:14:30.377316 amazon-ssm-agent[2122]: Initializing new seelog logger Dec 13 13:14:30.377316 amazon-ssm-agent[2122]: New Seelog Logger Creation Complete Dec 13 13:14:30.377316 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.377316 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.378062 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 processing appconfig overrides Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 processing appconfig overrides Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 processing appconfig overrides Dec 13 13:14:30.382250 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO Proxy environment variables: Dec 13 13:14:30.383715 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:14:30.389588 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:14:30.390287 amazon-ssm-agent[2122]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 13:14:30.390544 amazon-ssm-agent[2122]: 2024/12/13 13:14:30 processing appconfig overrides Dec 13 13:14:30.482003 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO https_proxy: Dec 13 13:14:30.582437 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO http_proxy: Dec 13 13:14:30.680559 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO no_proxy: Dec 13 13:14:30.782670 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO Checking if agent identity type OnPrem can be assumed Dec 13 13:14:30.834244 sshd_keygen[1959]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:14:30.880569 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO Checking if agent identity type EC2 can be assumed Dec 13 13:14:30.937000 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:14:30.949685 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:14:30.964711 systemd[1]: Started sshd@0-172.31.29.1:22-139.178.89.65:59438.service - OpenSSH per-connection server daemon (139.178.89.65:59438). Dec 13 13:14:30.980643 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO Agent will take identity from EC2 Dec 13 13:14:30.993860 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:14:30.997336 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:14:31.011748 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:14:31.081254 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:14:31.084125 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:14:31.103836 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:14:31.112792 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:14:31.115691 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 13:14:31.170851 tar[1945]: linux-arm64/LICENSE Dec 13 13:14:31.172269 tar[1945]: linux-arm64/README.md Dec 13 13:14:31.182266 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:14:31.208336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:14:31.277865 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:14:31.300341 sshd[2150]: Accepted publickey for core from 139.178.89.65 port 59438 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:31.303456 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:31.333056 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:14:31.345751 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:14:31.354861 systemd-logind[1926]: New session 1 of user core. Dec 13 13:14:31.377674 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 13:14:31.391300 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:14:31.410049 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:14:31.430884 (systemd)[2164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:14:31.478322 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 13:14:31.577630 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 13:14:31.678083 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 13:14:31.721267 systemd[2164]: Queued start job for default target default.target. Dec 13 13:14:31.728985 systemd[2164]: Created slice app.slice - User Application Slice. 
Dec 13 13:14:31.729062 systemd[2164]: Reached target paths.target - Paths. Dec 13 13:14:31.729094 systemd[2164]: Reached target timers.target - Timers. Dec 13 13:14:31.743373 systemd[2164]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:14:31.770510 systemd[2164]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:14:31.770623 systemd[2164]: Reached target sockets.target - Sockets. Dec 13 13:14:31.770654 systemd[2164]: Reached target basic.target - Basic System. Dec 13 13:14:31.770752 systemd[2164]: Reached target default.target - Main User Target. Dec 13 13:14:31.770818 systemd[2164]: Startup finished in 323ms. Dec 13 13:14:31.770835 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:14:31.778351 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [Registrar] Starting registrar module Dec 13 13:14:31.780508 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:14:31.835975 amazon-ssm-agent[2122]: 2024-12-13 13:14:30 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 13:14:31.835975 amazon-ssm-agent[2122]: 2024-12-13 13:14:31 INFO [EC2Identity] EC2 registration was successful. Dec 13 13:14:31.836162 amazon-ssm-agent[2122]: 2024-12-13 13:14:31 INFO [CredentialRefresher] credentialRefresher has started Dec 13 13:14:31.836162 amazon-ssm-agent[2122]: 2024-12-13 13:14:31 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 13:14:31.836162 amazon-ssm-agent[2122]: 2024-12-13 13:14:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 13:14:31.879042 amazon-ssm-agent[2122]: 2024-12-13 13:14:31 INFO [CredentialRefresher] Next credential rotation will be in 32.14165892406667 minutes Dec 13 13:14:31.944746 systemd[1]: Started sshd@1-172.31.29.1:22-139.178.89.65:58044.service - OpenSSH per-connection server daemon (139.178.89.65:58044). 
Dec 13 13:14:32.131238 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 58044 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:32.134256 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:32.141147 systemd-logind[1926]: New session 2 of user core. Dec 13 13:14:32.151664 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:14:32.282760 sshd[2177]: Connection closed by 139.178.89.65 port 58044 Dec 13 13:14:32.281787 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:32.287901 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:14:32.289609 systemd[1]: sshd@1-172.31.29.1:22-139.178.89.65:58044.service: Deactivated successfully. Dec 13 13:14:32.297045 systemd-logind[1926]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:14:32.299648 systemd-logind[1926]: Removed session 2. Dec 13 13:14:32.319813 systemd[1]: Started sshd@2-172.31.29.1:22-139.178.89.65:58054.service - OpenSSH per-connection server daemon (139.178.89.65:58054). Dec 13 13:14:32.505744 sshd[2182]: Accepted publickey for core from 139.178.89.65 port 58054 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:32.508284 sshd-session[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:32.517990 systemd-logind[1926]: New session 3 of user core. Dec 13 13:14:32.532566 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:14:32.660920 sshd[2184]: Connection closed by 139.178.89.65 port 58054 Dec 13 13:14:32.663517 sshd-session[2182]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:32.669514 systemd[1]: sshd@2-172.31.29.1:22-139.178.89.65:58054.service: Deactivated successfully. Dec 13 13:14:32.673111 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:14:32.674364 systemd-logind[1926]: Session 3 logged out. 
Waiting for processes to exit. Dec 13 13:14:32.676118 systemd-logind[1926]: Removed session 3. Dec 13 13:14:32.809526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:32.813277 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:14:32.816575 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:14:32.819158 systemd[1]: Startup finished in 1.076s (kernel) + 8.950s (initrd) + 9.133s (userspace) = 19.159s. Dec 13 13:14:32.850168 agetty[2158]: failed to open credentials directory Dec 13 13:14:32.852132 agetty[2157]: failed to open credentials directory Dec 13 13:14:32.868903 amazon-ssm-agent[2122]: 2024-12-13 13:14:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 13:14:32.971002 amazon-ssm-agent[2122]: 2024-12-13 13:14:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2199) started Dec 13 13:14:33.072119 amazon-ssm-agent[2122]: 2024-12-13 13:14:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 13:14:33.130684 ntpd[1921]: Listen normally on 7 eth0 [fe80::420:2fff:fe99:b6b1%2]:123 Dec 13 13:14:33.131633 ntpd[1921]: 13 Dec 13:14:33 ntpd[1921]: Listen normally on 7 eth0 [fe80::420:2fff:fe99:b6b1%2]:123 Dec 13 13:14:34.150011 kubelet[2193]: E1213 13:14:34.149924 2193 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:14:34.155281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 
13:14:34.155634 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:14:34.156424 systemd[1]: kubelet.service: Consumed 1.354s CPU time. Dec 13 13:14:42.707725 systemd[1]: Started sshd@3-172.31.29.1:22-139.178.89.65:44170.service - OpenSSH per-connection server daemon (139.178.89.65:44170). Dec 13 13:14:42.891869 sshd[2218]: Accepted publickey for core from 139.178.89.65 port 44170 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:42.894352 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:42.903559 systemd-logind[1926]: New session 4 of user core. Dec 13 13:14:42.913512 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:14:43.043027 sshd[2220]: Connection closed by 139.178.89.65 port 44170 Dec 13 13:14:43.042751 sshd-session[2218]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:43.049131 systemd[1]: sshd@3-172.31.29.1:22-139.178.89.65:44170.service: Deactivated successfully. Dec 13 13:14:43.053800 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:14:43.055274 systemd-logind[1926]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:14:43.057129 systemd-logind[1926]: Removed session 4. Dec 13 13:14:43.081773 systemd[1]: Started sshd@4-172.31.29.1:22-139.178.89.65:44186.service - OpenSSH per-connection server daemon (139.178.89.65:44186). Dec 13 13:14:43.268197 sshd[2225]: Accepted publickey for core from 139.178.89.65 port 44186 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:43.270626 sshd-session[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:43.277763 systemd-logind[1926]: New session 5 of user core. Dec 13 13:14:43.287510 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 13:14:43.408567 sshd[2227]: Connection closed by 139.178.89.65 port 44186 Dec 13 13:14:43.409852 sshd-session[2225]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:43.414757 systemd[1]: sshd@4-172.31.29.1:22-139.178.89.65:44186.service: Deactivated successfully. Dec 13 13:14:43.418657 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:14:43.422526 systemd-logind[1926]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:14:43.424533 systemd-logind[1926]: Removed session 5. Dec 13 13:14:43.446393 systemd[1]: Started sshd@5-172.31.29.1:22-139.178.89.65:44200.service - OpenSSH per-connection server daemon (139.178.89.65:44200). Dec 13 13:14:43.635711 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 44200 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:43.638199 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:43.646032 systemd-logind[1926]: New session 6 of user core. Dec 13 13:14:43.660496 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:14:43.787308 sshd[2234]: Connection closed by 139.178.89.65 port 44200 Dec 13 13:14:43.787107 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:43.794470 systemd[1]: sshd@5-172.31.29.1:22-139.178.89.65:44200.service: Deactivated successfully. Dec 13 13:14:43.799162 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:14:43.802328 systemd-logind[1926]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:14:43.804312 systemd-logind[1926]: Removed session 6. Dec 13 13:14:43.830762 systemd[1]: Started sshd@6-172.31.29.1:22-139.178.89.65:44204.service - OpenSSH per-connection server daemon (139.178.89.65:44204). 
Dec 13 13:14:44.022047 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 44204 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:44.024404 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:44.032509 systemd-logind[1926]: New session 7 of user core. Dec 13 13:14:44.042518 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:14:44.178524 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:14:44.179152 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:14:44.180573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:14:44.186581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:44.194447 sudo[2242]: pam_unix(sudo:session): session closed for user root Dec 13 13:14:44.217958 sshd[2241]: Connection closed by 139.178.89.65 port 44204 Dec 13 13:14:44.218770 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:44.225278 systemd[1]: sshd@6-172.31.29.1:22-139.178.89.65:44204.service: Deactivated successfully. Dec 13 13:14:44.229056 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:14:44.234817 systemd-logind[1926]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:14:44.237621 systemd-logind[1926]: Removed session 7. Dec 13 13:14:44.252762 systemd[1]: Started sshd@7-172.31.29.1:22-139.178.89.65:44212.service - OpenSSH per-connection server daemon (139.178.89.65:44212). Dec 13 13:14:44.460753 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 44212 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:44.463411 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:44.474901 systemd-logind[1926]: New session 8 of user core. 
Dec 13 13:14:44.481896 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:14:44.506976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:44.517316 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:14:44.600011 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:14:44.600749 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:14:44.608531 sudo[2265]: pam_unix(sudo:session): session closed for user root Dec 13 13:14:44.616013 kubelet[2258]: E1213 13:14:44.615916 2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:14:44.621983 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:14:44.623442 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:14:44.624158 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:14:44.624509 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:14:44.649821 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:14:44.698892 augenrules[2289]: No rules Dec 13 13:14:44.701092 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:14:44.701566 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 13 13:14:44.703878 sudo[2264]: pam_unix(sudo:session): session closed for user root Dec 13 13:14:44.727717 sshd[2256]: Connection closed by 139.178.89.65 port 44212 Dec 13 13:14:44.729465 sshd-session[2250]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:44.735828 systemd[1]: sshd@7-172.31.29.1:22-139.178.89.65:44212.service: Deactivated successfully. Dec 13 13:14:44.739938 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:14:44.741202 systemd-logind[1926]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:14:44.742884 systemd-logind[1926]: Removed session 8. Dec 13 13:14:44.769751 systemd[1]: Started sshd@8-172.31.29.1:22-139.178.89.65:44214.service - OpenSSH per-connection server daemon (139.178.89.65:44214). Dec 13 13:14:44.946874 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 44214 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:14:44.949367 sshd-session[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:14:44.957729 systemd-logind[1926]: New session 9 of user core. Dec 13 13:14:44.967573 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:14:45.070153 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:14:45.071692 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:14:45.793701 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:14:45.806737 (dockerd)[2319]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:14:46.248248 dockerd[2319]: time="2024-12-13T13:14:46.246943556Z" level=info msg="Starting up" Dec 13 13:14:46.373010 systemd[1]: var-lib-docker-metacopy\x2dcheck3570279771-merged.mount: Deactivated successfully. 
Dec 13 13:14:46.385054 dockerd[2319]: time="2024-12-13T13:14:46.385003564Z" level=info msg="Loading containers: start." Dec 13 13:14:46.632279 kernel: Initializing XFRM netlink socket Dec 13 13:14:46.667344 (udev-worker)[2342]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:14:46.762167 systemd-networkd[1848]: docker0: Link UP Dec 13 13:14:46.798712 dockerd[2319]: time="2024-12-13T13:14:46.798562928Z" level=info msg="Loading containers: done." Dec 13 13:14:46.827196 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3785240458-merged.mount: Deactivated successfully. Dec 13 13:14:46.833252 dockerd[2319]: time="2024-12-13T13:14:46.832600790Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:14:46.833252 dockerd[2319]: time="2024-12-13T13:14:46.832738175Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:14:46.833252 dockerd[2319]: time="2024-12-13T13:14:46.832969002Z" level=info msg="Daemon has completed initialization" Dec 13 13:14:46.886078 dockerd[2319]: time="2024-12-13T13:14:46.885963175Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:14:46.886725 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:14:48.059776 containerd[1955]: time="2024-12-13T13:14:48.059416452Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 13:14:48.679497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175241199.mount: Deactivated successfully. 
Dec 13 13:14:50.079939 containerd[1955]: time="2024-12-13T13:14:50.079870957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:50.081994 containerd[1955]: time="2024-12-13T13:14:50.081907090Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Dec 13 13:14:50.082945 containerd[1955]: time="2024-12-13T13:14:50.082455812Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:50.088088 containerd[1955]: time="2024-12-13T13:14:50.087989366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:50.090586 containerd[1955]: time="2024-12-13T13:14:50.090337055Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.030857128s" Dec 13 13:14:50.090586 containerd[1955]: time="2024-12-13T13:14:50.090395945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 13:14:50.131595 containerd[1955]: time="2024-12-13T13:14:50.131540687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 13:14:51.652246 containerd[1955]: time="2024-12-13T13:14:51.650892470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:51.653000 containerd[1955]: time="2024-12-13T13:14:51.652931365Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Dec 13 13:14:51.653889 containerd[1955]: time="2024-12-13T13:14:51.653396597Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:51.659249 containerd[1955]: time="2024-12-13T13:14:51.658941736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:51.661514 containerd[1955]: time="2024-12-13T13:14:51.661315767Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.529713129s" Dec 13 13:14:51.661514 containerd[1955]: time="2024-12-13T13:14:51.661369025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 13:14:51.703687 containerd[1955]: time="2024-12-13T13:14:51.703619160Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 13:14:52.841345 containerd[1955]: time="2024-12-13T13:14:52.840881743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:52.843056 containerd[1955]: time="2024-12-13T13:14:52.842965817Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Dec 13 13:14:52.844844 containerd[1955]: time="2024-12-13T13:14:52.844769598Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:52.850498 containerd[1955]: time="2024-12-13T13:14:52.850415912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:52.852775 containerd[1955]: time="2024-12-13T13:14:52.852581062Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.148895628s" Dec 13 13:14:52.852775 containerd[1955]: time="2024-12-13T13:14:52.852637610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 13:14:52.894475 containerd[1955]: time="2024-12-13T13:14:52.894400518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:14:54.110826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033325812.mount: Deactivated successfully. Dec 13 13:14:54.702881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 13:14:54.704823 containerd[1955]: time="2024-12-13T13:14:54.704743249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:54.708519 containerd[1955]: time="2024-12-13T13:14:54.708444375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 13:14:54.710575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:54.714258 containerd[1955]: time="2024-12-13T13:14:54.713640296Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:54.719806 containerd[1955]: time="2024-12-13T13:14:54.719732092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:54.721714 containerd[1955]: time="2024-12-13T13:14:54.721655249Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.8271741s" Dec 13 13:14:54.721949 containerd[1955]: time="2024-12-13T13:14:54.721910617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 13:14:54.777356 containerd[1955]: time="2024-12-13T13:14:54.776719664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:14:55.022346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:14:55.038728 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:14:55.125234 kubelet[2609]: E1213 13:14:55.125134 2609 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:14:55.129989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:14:55.130505 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:14:55.349749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201169581.mount: Deactivated successfully. Dec 13 13:14:56.558276 containerd[1955]: time="2024-12-13T13:14:56.557363706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:56.560130 containerd[1955]: time="2024-12-13T13:14:56.560041020Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 13:14:56.562659 containerd[1955]: time="2024-12-13T13:14:56.562581537Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:56.568978 containerd[1955]: time="2024-12-13T13:14:56.568876907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:56.571491 containerd[1955]: time="2024-12-13T13:14:56.571285803Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.794508715s" Dec 13 13:14:56.571491 containerd[1955]: time="2024-12-13T13:14:56.571343000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:14:56.611498 containerd[1955]: time="2024-12-13T13:14:56.611431334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:14:57.168740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007580583.mount: Deactivated successfully. Dec 13 13:14:57.182395 containerd[1955]: time="2024-12-13T13:14:57.182321765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:57.184237 containerd[1955]: time="2024-12-13T13:14:57.184141598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 13:14:57.186762 containerd[1955]: time="2024-12-13T13:14:57.186677470Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:57.199253 containerd[1955]: time="2024-12-13T13:14:57.198625229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:57.200157 containerd[1955]: time="2024-12-13T13:14:57.199832180Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 588.33743ms" Dec 13 13:14:57.200157 containerd[1955]: time="2024-12-13T13:14:57.199885126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:14:57.239018 containerd[1955]: time="2024-12-13T13:14:57.238950177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:14:57.780188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1954160249.mount: Deactivated successfully. Dec 13 13:14:59.781145 containerd[1955]: time="2024-12-13T13:14:59.781065481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:59.783483 containerd[1955]: time="2024-12-13T13:14:59.783385821Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Dec 13 13:14:59.785640 containerd[1955]: time="2024-12-13T13:14:59.785562485Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:59.791896 containerd[1955]: time="2024-12-13T13:14:59.791819232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:59.794266 containerd[1955]: time="2024-12-13T13:14:59.794188412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.55517644s" Dec 13 
13:14:59.794843 containerd[1955]: time="2024-12-13T13:14:59.794433034Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Dec 13 13:15:00.123171 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 13:15:05.202875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 13:15:05.211633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:15:05.527724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:15:05.534161 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:15:05.617376 kubelet[2789]: E1213 13:15:05.617318 2789 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:15:05.622172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:15:05.622626 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:15:06.617545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:15:06.625736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:15:06.670425 systemd[1]: Reloading requested from client PID 2803 ('systemctl') (unit session-9.scope)...
Dec 13 13:15:06.670659 systemd[1]: Reloading...
Dec 13 13:15:06.894275 zram_generator::config[2843]: No configuration found.
Dec 13 13:15:07.138522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:15:07.306045 systemd[1]: Reloading finished in 634 ms.
Dec 13 13:15:07.401477 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 13:15:07.401679 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 13:15:07.402668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:15:07.412710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:15:07.689521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:15:07.710996 (kubelet)[2906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:15:07.784293 kubelet[2906]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:15:07.784293 kubelet[2906]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:15:07.784293 kubelet[2906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:15:07.786141 kubelet[2906]: I1213 13:15:07.786063 2906 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:15:08.675846 kubelet[2906]: I1213 13:15:08.675783 2906 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:15:08.675846 kubelet[2906]: I1213 13:15:08.675833 2906 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:15:08.676333 kubelet[2906]: I1213 13:15:08.676198 2906 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:15:08.700070 kubelet[2906]: E1213 13:15:08.700008 2906 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.700629 kubelet[2906]: I1213 13:15:08.700434 2906 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:15:08.716876 kubelet[2906]: I1213 13:15:08.716829 2906 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:15:08.719434 kubelet[2906]: I1213 13:15:08.719337 2906 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:15:08.719745 kubelet[2906]: I1213 13:15:08.719427 2906 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:15:08.719941 kubelet[2906]: I1213 13:15:08.719776 2906 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
13:15:08.719941 kubelet[2906]: I1213 13:15:08.719798 2906 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:15:08.720081 kubelet[2906]: I1213 13:15:08.720043 2906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:08.721659 kubelet[2906]: I1213 13:15:08.721605 2906 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:15:08.721659 kubelet[2906]: I1213 13:15:08.721645 2906 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:15:08.721826 kubelet[2906]: I1213 13:15:08.721762 2906 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:15:08.721826 kubelet[2906]: I1213 13:15:08.721812 2906 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:15:08.724025 kubelet[2906]: W1213 13:15:08.723678 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.724025 kubelet[2906]: E1213 13:15:08.723794 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.724025 kubelet[2906]: W1213 13:15:08.723930 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.724025 kubelet[2906]: E1213 13:15:08.723986 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: 
connect: connection refused Dec 13 13:15:08.724892 kubelet[2906]: I1213 13:15:08.724861 2906 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:15:08.727001 kubelet[2906]: I1213 13:15:08.725425 2906 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:15:08.727001 kubelet[2906]: W1213 13:15:08.725531 2906 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:15:08.727400 kubelet[2906]: I1213 13:15:08.727368 2906 server.go:1264] "Started kubelet" Dec 13 13:15:08.734898 kubelet[2906]: I1213 13:15:08.734859 2906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:15:08.736105 kubelet[2906]: E1213 13:15:08.735785 2906 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.1:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.1:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-1.1810bedc05384a1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-1,UID:ip-172-31-29-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-1,},FirstTimestamp:2024-12-13 13:15:08.727298589 +0000 UTC m=+1.009984336,LastTimestamp:2024-12-13 13:15:08.727298589 +0000 UTC m=+1.009984336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-1,}" Dec 13 13:15:08.739690 kubelet[2906]: I1213 13:15:08.739601 2906 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:15:08.741362 kubelet[2906]: I1213 13:15:08.741311 2906 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:15:08.742943 kubelet[2906]: I1213 
13:15:08.742842 2906 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:15:08.743284 kubelet[2906]: I1213 13:15:08.743248 2906 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:15:08.745830 kubelet[2906]: I1213 13:15:08.745786 2906 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:15:08.745998 kubelet[2906]: I1213 13:15:08.745967 2906 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:15:08.746165 kubelet[2906]: I1213 13:15:08.746131 2906 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:15:08.749077 kubelet[2906]: W1213 13:15:08.746985 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.749077 kubelet[2906]: E1213 13:15:08.747104 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.749077 kubelet[2906]: E1213 13:15:08.748392 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="200ms" Dec 13 13:15:08.749985 kubelet[2906]: I1213 13:15:08.749940 2906 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:15:08.750151 kubelet[2906]: I1213 13:15:08.750111 2906 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:15:08.751849 kubelet[2906]: E1213 13:15:08.751798 2906 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:15:08.753967 kubelet[2906]: I1213 13:15:08.753932 2906 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:15:08.788570 kubelet[2906]: I1213 13:15:08.788500 2906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:15:08.794766 kubelet[2906]: I1213 13:15:08.794697 2906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:15:08.794895 kubelet[2906]: I1213 13:15:08.794814 2906 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:15:08.794895 kubelet[2906]: I1213 13:15:08.794857 2906 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:15:08.795020 kubelet[2906]: E1213 13:15:08.794943 2906 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:15:08.800544 kubelet[2906]: I1213 13:15:08.800501 2906 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:15:08.800704 kubelet[2906]: I1213 13:15:08.800682 2906 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:15:08.800859 kubelet[2906]: I1213 13:15:08.800838 2906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:08.803724 kubelet[2906]: W1213 13:15:08.803630 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.803851 kubelet[2906]: E1213 13:15:08.803730 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:08.808456 kubelet[2906]: I1213 13:15:08.808285 2906 policy_none.go:49] "None policy: Start" Dec 13 13:15:08.810233 kubelet[2906]: I1213 13:15:08.809636 2906 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:15:08.810233 kubelet[2906]: I1213 13:15:08.809675 2906 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:15:08.821588 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:15:08.844322 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:15:08.848976 kubelet[2906]: I1213 13:15:08.848935 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1" Dec 13 13:15:08.849960 kubelet[2906]: E1213 13:15:08.849884 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Dec 13 13:15:08.855192 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 13:15:08.866948 kubelet[2906]: I1213 13:15:08.865843 2906 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 13:15:08.866948 kubelet[2906]: I1213 13:15:08.866193 2906 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 13:15:08.866948 kubelet[2906]: I1213 13:15:08.866455 2906 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 13:15:08.870432 kubelet[2906]: E1213 13:15:08.870390 2906 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-1\" not found"
Dec 13 13:15:08.895633 kubelet[2906]: I1213 13:15:08.895562 2906 topology_manager.go:215] "Topology Admit Handler" podUID="963b5446064aefb95a20a169924014fc" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-1"
Dec 13 13:15:08.897920 kubelet[2906]: I1213 13:15:08.897854 2906 topology_manager.go:215] "Topology Admit Handler" podUID="7964d3e55267dee149f7bfc09237350b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-1"
Dec 13 13:15:08.899969 kubelet[2906]: I1213 13:15:08.899924 2906 topology_manager.go:215] "Topology Admit Handler" podUID="28d2fb0d864dae9de8ce2679792733fa" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-1"
Dec 13 13:15:08.914662 systemd[1]: Created slice kubepods-burstable-pod963b5446064aefb95a20a169924014fc.slice - libcontainer container kubepods-burstable-pod963b5446064aefb95a20a169924014fc.slice.
Dec 13 13:15:08.945370 systemd[1]: Created slice kubepods-burstable-pod7964d3e55267dee149f7bfc09237350b.slice - libcontainer container kubepods-burstable-pod7964d3e55267dee149f7bfc09237350b.slice.
Dec 13 13:15:08.949638 kubelet[2906]: I1213 13:15:08.949576 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:08.949774 kubelet[2906]: I1213 13:15:08.949643 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:08.949774 kubelet[2906]: I1213 13:15:08.949688 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:08.949774 kubelet[2906]: I1213 13:15:08.949731 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 13:15:08.949774 kubelet[2906]: I1213 13:15:08.949767 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: 
\"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:08.950055 kubelet[2906]: I1213 13:15:08.949802 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:08.950055 kubelet[2906]: I1213 13:15:08.949837 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/963b5446064aefb95a20a169924014fc-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-1\" (UID: \"963b5446064aefb95a20a169924014fc\") " pod="kube-system/kube-scheduler-ip-172-31-29-1" Dec 13 13:15:08.950055 kubelet[2906]: I1213 13:15:08.949871 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 13:15:08.950055 kubelet[2906]: I1213 13:15:08.949907 2906 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 13:15:08.951485 kubelet[2906]: E1213 13:15:08.951262 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: 
connect: connection refused" interval="400ms"
Dec 13 13:15:08.957526 systemd[1]: Created slice kubepods-burstable-pod28d2fb0d864dae9de8ce2679792733fa.slice - libcontainer container kubepods-burstable-pod28d2fb0d864dae9de8ce2679792733fa.slice.
Dec 13 13:15:09.052621 kubelet[2906]: I1213 13:15:09.052557 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1"
Dec 13 13:15:09.053278 kubelet[2906]: E1213 13:15:09.053189 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1"
Dec 13 13:15:09.238982 containerd[1955]: time="2024-12-13T13:15:09.238877146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-1,Uid:963b5446064aefb95a20a169924014fc,Namespace:kube-system,Attempt:0,}"
Dec 13 13:15:09.255264 containerd[1955]: time="2024-12-13T13:15:09.254891265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-1,Uid:7964d3e55267dee149f7bfc09237350b,Namespace:kube-system,Attempt:0,}"
Dec 13 13:15:09.263450 containerd[1955]: time="2024-12-13T13:15:09.263384453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-1,Uid:28d2fb0d864dae9de8ce2679792733fa,Namespace:kube-system,Attempt:0,}"
Dec 13 13:15:09.352457 kubelet[2906]: E1213 13:15:09.352296 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="800ms"
Dec 13 13:15:09.455464 kubelet[2906]: I1213 13:15:09.455421 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1"
Dec 13 13:15:09.456003 kubelet[2906]: E1213 13:15:09.455958 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post
\"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Dec 13 13:15:09.609970 kubelet[2906]: W1213 13:15:09.609753 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.609970 kubelet[2906]: E1213 13:15:09.609848 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.1:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-1&limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.613312 kubelet[2906]: W1213 13:15:09.613236 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.613312 kubelet[2906]: E1213 13:15:09.613326 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.1:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.743096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2143831069.mount: Deactivated successfully. 
Dec 13 13:15:09.756563 containerd[1955]: time="2024-12-13T13:15:09.756495873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:09.762978 containerd[1955]: time="2024-12-13T13:15:09.762884782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 13:15:09.764823 containerd[1955]: time="2024-12-13T13:15:09.764767383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:09.767718 containerd[1955]: time="2024-12-13T13:15:09.767490512Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:09.771140 containerd[1955]: time="2024-12-13T13:15:09.771071998Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:09.773359 containerd[1955]: time="2024-12-13T13:15:09.773279925Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:15:09.775409 containerd[1955]: time="2024-12-13T13:15:09.774963227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:15:09.777783 containerd[1955]: time="2024-12-13T13:15:09.777734103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:15:09.781199 
containerd[1955]: time="2024-12-13T13:15:09.781151587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.656678ms" Dec 13 13:15:09.785479 containerd[1955]: time="2024-12-13T13:15:09.785404833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.415419ms" Dec 13 13:15:09.831203 containerd[1955]: time="2024-12-13T13:15:09.831095731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.094479ms" Dec 13 13:15:09.915296 kubelet[2906]: W1213 13:15:09.914350 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.915296 kubelet[2906]: E1213 13:15:09.914451 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.1:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.916160 kubelet[2906]: W1213 13:15:09.916113 2906 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:09.916339 kubelet[2906]: E1213 13:15:09.916317 2906 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.1:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.1:6443: connect: connection refused Dec 13 13:15:10.005308 containerd[1955]: time="2024-12-13T13:15:10.003736598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:10.005308 containerd[1955]: time="2024-12-13T13:15:10.003848734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:10.005308 containerd[1955]: time="2024-12-13T13:15:10.003884800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.005308 containerd[1955]: time="2024-12-13T13:15:10.004026495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.016610 containerd[1955]: time="2024-12-13T13:15:10.016444493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:10.016955 containerd[1955]: time="2024-12-13T13:15:10.016881211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:10.017138 containerd[1955]: time="2024-12-13T13:15:10.017079622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.020005 containerd[1955]: time="2024-12-13T13:15:10.019913710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.029776 containerd[1955]: time="2024-12-13T13:15:10.029308201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:10.029776 containerd[1955]: time="2024-12-13T13:15:10.029420313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:10.029776 containerd[1955]: time="2024-12-13T13:15:10.029458889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.029776 containerd[1955]: time="2024-12-13T13:15:10.029610861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:10.047648 systemd[1]: Started cri-containerd-04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e.scope - libcontainer container 04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e. Dec 13 13:15:10.080587 systemd[1]: Started cri-containerd-d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5.scope - libcontainer container d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5. Dec 13 13:15:10.093573 systemd[1]: Started cri-containerd-7cd6014afeeab3c015469fc52ddcc24018992590ed58fab064ad89b191521253.scope - libcontainer container 7cd6014afeeab3c015469fc52ddcc24018992590ed58fab064ad89b191521253. 
Dec 13 13:15:10.154107 kubelet[2906]: E1213 13:15:10.153973 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": dial tcp 172.31.29.1:6443: connect: connection refused" interval="1.6s" Dec 13 13:15:10.181397 containerd[1955]: time="2024-12-13T13:15:10.181251876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-1,Uid:28d2fb0d864dae9de8ce2679792733fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5\"" Dec 13 13:15:10.201377 containerd[1955]: time="2024-12-13T13:15:10.201227423Z" level=info msg="CreateContainer within sandbox \"d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:15:10.214247 containerd[1955]: time="2024-12-13T13:15:10.213929447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-1,Uid:7964d3e55267dee149f7bfc09237350b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cd6014afeeab3c015469fc52ddcc24018992590ed58fab064ad89b191521253\"" Dec 13 13:15:10.215546 containerd[1955]: time="2024-12-13T13:15:10.215201231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-1,Uid:963b5446064aefb95a20a169924014fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e\"" Dec 13 13:15:10.224181 containerd[1955]: time="2024-12-13T13:15:10.224116442Z" level=info msg="CreateContainer within sandbox \"04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:15:10.224581 containerd[1955]: time="2024-12-13T13:15:10.224123177Z" level=info msg="CreateContainer within sandbox 
\"7cd6014afeeab3c015469fc52ddcc24018992590ed58fab064ad89b191521253\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:15:10.256689 containerd[1955]: time="2024-12-13T13:15:10.256619242Z" level=info msg="CreateContainer within sandbox \"d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8\"" Dec 13 13:15:10.258919 containerd[1955]: time="2024-12-13T13:15:10.258338753Z" level=info msg="StartContainer for \"7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8\"" Dec 13 13:15:10.273741 kubelet[2906]: I1213 13:15:10.273358 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1" Dec 13 13:15:10.274056 kubelet[2906]: E1213 13:15:10.273915 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.1:6443/api/v1/nodes\": dial tcp 172.31.29.1:6443: connect: connection refused" node="ip-172-31-29-1" Dec 13 13:15:10.279928 containerd[1955]: time="2024-12-13T13:15:10.279848390Z" level=info msg="CreateContainer within sandbox \"7cd6014afeeab3c015469fc52ddcc24018992590ed58fab064ad89b191521253\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c097d019a066492e815f796ab30e62c5390b2d34195dcf5a54ec9e507bebf1c\"" Dec 13 13:15:10.281326 containerd[1955]: time="2024-12-13T13:15:10.280930731Z" level=info msg="StartContainer for \"3c097d019a066492e815f796ab30e62c5390b2d34195dcf5a54ec9e507bebf1c\"" Dec 13 13:15:10.285544 containerd[1955]: time="2024-12-13T13:15:10.285466070Z" level=info msg="CreateContainer within sandbox \"04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d\"" Dec 13 13:15:10.287284 containerd[1955]: 
time="2024-12-13T13:15:10.287067118Z" level=info msg="StartContainer for \"47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d\"" Dec 13 13:15:10.315879 systemd[1]: Started cri-containerd-7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8.scope - libcontainer container 7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8. Dec 13 13:15:10.366500 systemd[1]: Started cri-containerd-3c097d019a066492e815f796ab30e62c5390b2d34195dcf5a54ec9e507bebf1c.scope - libcontainer container 3c097d019a066492e815f796ab30e62c5390b2d34195dcf5a54ec9e507bebf1c. Dec 13 13:15:10.397544 systemd[1]: Started cri-containerd-47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d.scope - libcontainer container 47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d. Dec 13 13:15:10.445887 containerd[1955]: time="2024-12-13T13:15:10.445750613Z" level=info msg="StartContainer for \"7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8\" returns successfully" Dec 13 13:15:10.485093 containerd[1955]: time="2024-12-13T13:15:10.484988611Z" level=info msg="StartContainer for \"3c097d019a066492e815f796ab30e62c5390b2d34195dcf5a54ec9e507bebf1c\" returns successfully" Dec 13 13:15:10.552275 containerd[1955]: time="2024-12-13T13:15:10.552056672Z" level=info msg="StartContainer for \"47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d\" returns successfully" Dec 13 13:15:11.877245 kubelet[2906]: I1213 13:15:11.877160 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1" Dec 13 13:15:14.663333 update_engine[1929]: I20241213 13:15:14.663254 1929 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:15:14.829494 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3199) Dec 13 13:15:15.337448 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3202) Dec 13 13:15:15.491700 kubelet[2906]: E1213 13:15:15.491652 2906 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-1\" not found" node="ip-172-31-29-1" Dec 13 13:15:15.508161 kubelet[2906]: I1213 13:15:15.507814 2906 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-1" Dec 13 13:15:15.539586 kubelet[2906]: E1213 13:15:15.539445 2906 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-1.1810bedc05384a1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-1,UID:ip-172-31-29-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-1,},FirstTimestamp:2024-12-13 13:15:08.727298589 +0000 UTC m=+1.009984336,LastTimestamp:2024-12-13 13:15:08.727298589 +0000 UTC m=+1.009984336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-1,}" Dec 13 13:15:15.676781 kubelet[2906]: E1213 13:15:15.676027 2906 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-1.1810bedc06adcdee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-1,UID:ip-172-31-29-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-29-1,},FirstTimestamp:2024-12-13 13:15:08.751777262 +0000 UTC m=+1.034463033,LastTimestamp:2024-12-13 13:15:08.751777262 +0000 UTC m=+1.034463033,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-1,}" Dec 13 13:15:15.727083 kubelet[2906]: I1213 13:15:15.726731 2906 apiserver.go:52] "Watching apiserver" Dec 13 13:15:15.757840 kubelet[2906]: E1213 13:15:15.757436 2906 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-1.1810bedc08eb21a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-1,UID:ip-172-31-29-1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-29-1 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-29-1,},FirstTimestamp:2024-12-13 13:15:08.78935082 +0000 UTC m=+1.072036567,LastTimestamp:2024-12-13 13:15:08.78935082 +0000 UTC m=+1.072036567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-1,}" Dec 13 13:15:15.846697 kubelet[2906]: I1213 13:15:15.846591 2906 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:15:17.654964 systemd[1]: Reloading requested from client PID 3368 ('systemctl') (unit session-9.scope)... Dec 13 13:15:17.654991 systemd[1]: Reloading... Dec 13 13:15:17.824402 zram_generator::config[3411]: No configuration found. Dec 13 13:15:18.054584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 13:15:18.253052 systemd[1]: Reloading finished in 597 ms. Dec 13 13:15:18.326000 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:18.343752 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:15:18.344203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:18.344316 systemd[1]: kubelet.service: Consumed 1.771s CPU time, 112.7M memory peak, 0B memory swap peak. Dec 13 13:15:18.359380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:18.651202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:18.668840 (kubelet)[3468]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:15:18.768090 kubelet[3468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:15:18.768090 kubelet[3468]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:15:18.768090 kubelet[3468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:15:18.768090 kubelet[3468]: I1213 13:15:18.766065 3468 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:15:18.786928 kubelet[3468]: I1213 13:15:18.786878 3468 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:15:18.786928 kubelet[3468]: I1213 13:15:18.786922 3468 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:15:18.788321 kubelet[3468]: I1213 13:15:18.787403 3468 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:15:18.790427 kubelet[3468]: I1213 13:15:18.790366 3468 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:15:18.793057 kubelet[3468]: I1213 13:15:18.792993 3468 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:15:18.805181 sudo[3481]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:15:18.806444 sudo[3481]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:15:18.806536 kubelet[3468]: I1213 13:15:18.805890 3468 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:15:18.806831 kubelet[3468]: I1213 13:15:18.806782 3468 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:15:18.807349 kubelet[3468]: I1213 13:15:18.806960 3468 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:15:18.807818 kubelet[3468]: I1213 13:15:18.807557 3468 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
13:15:18.807818 kubelet[3468]: I1213 13:15:18.807587 3468 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:15:18.807818 kubelet[3468]: I1213 13:15:18.807652 3468 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:18.811299 kubelet[3468]: I1213 13:15:18.809252 3468 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:15:18.811299 kubelet[3468]: I1213 13:15:18.809312 3468 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:15:18.811299 kubelet[3468]: I1213 13:15:18.809379 3468 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:15:18.811299 kubelet[3468]: I1213 13:15:18.809408 3468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:15:18.812726 kubelet[3468]: I1213 13:15:18.812675 3468 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:15:18.813009 kubelet[3468]: I1213 13:15:18.812973 3468 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:15:18.821406 kubelet[3468]: I1213 13:15:18.819164 3468 server.go:1264] "Started kubelet" Dec 13 13:15:18.829033 kubelet[3468]: I1213 13:15:18.828984 3468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:15:18.843328 kubelet[3468]: I1213 13:15:18.842323 3468 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:15:18.861265 kubelet[3468]: I1213 13:15:18.859371 3468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:15:18.861265 kubelet[3468]: I1213 13:15:18.859816 3468 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:15:18.865979 kubelet[3468]: I1213 13:15:18.865304 3468 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:15:18.872261 kubelet[3468]: I1213 13:15:18.871515 3468 
server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:15:18.877274 kubelet[3468]: I1213 13:15:18.876078 3468 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:15:18.878285 kubelet[3468]: I1213 13:15:18.878129 3468 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:15:18.891674 kubelet[3468]: E1213 13:15:18.887852 3468 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:15:18.891674 kubelet[3468]: I1213 13:15:18.888350 3468 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:15:18.891674 kubelet[3468]: I1213 13:15:18.888525 3468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:15:18.904990 kubelet[3468]: I1213 13:15:18.904466 3468 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:15:18.927733 kubelet[3468]: I1213 13:15:18.925597 3468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:15:18.933375 kubelet[3468]: I1213 13:15:18.931830 3468 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:15:18.933375 kubelet[3468]: I1213 13:15:18.931903 3468 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:15:18.933375 kubelet[3468]: I1213 13:15:18.931941 3468 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:15:18.933375 kubelet[3468]: E1213 13:15:18.932073 3468 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:15:18.978084 kubelet[3468]: I1213 13:15:18.978023 3468 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-1" Dec 13 13:15:19.001699 kubelet[3468]: I1213 13:15:18.998662 3468 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-1" Dec 13 13:15:19.001699 kubelet[3468]: I1213 13:15:18.999021 3468 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-1" Dec 13 13:15:19.034880 kubelet[3468]: E1213 13:15:19.034795 3468 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:15:19.060958 kubelet[3468]: I1213 13:15:19.060920 3468 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:15:19.061839 kubelet[3468]: I1213 13:15:19.061322 3468 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:15:19.061839 kubelet[3468]: I1213 13:15:19.061394 3468 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:15:19.061839 kubelet[3468]: I1213 13:15:19.061656 3468 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:15:19.061839 kubelet[3468]: I1213 13:15:19.061677 3468 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:15:19.061839 kubelet[3468]: I1213 13:15:19.061713 3468 policy_none.go:49] "None policy: Start" Dec 13 13:15:19.064242 kubelet[3468]: I1213 13:15:19.063371 3468 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:15:19.064242 kubelet[3468]: I1213 
13:15:19.063424 3468 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:15:19.064242 kubelet[3468]: I1213 13:15:19.063787 3468 state_mem.go:75] "Updated machine memory state" Dec 13 13:15:19.074193 kubelet[3468]: I1213 13:15:19.074155 3468 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:15:19.078287 kubelet[3468]: I1213 13:15:19.078166 3468 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:15:19.078942 kubelet[3468]: I1213 13:15:19.078905 3468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:15:19.236239 kubelet[3468]: I1213 13:15:19.235229 3468 topology_manager.go:215] "Topology Admit Handler" podUID="7964d3e55267dee149f7bfc09237350b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-1" Dec 13 13:15:19.236239 kubelet[3468]: I1213 13:15:19.235412 3468 topology_manager.go:215] "Topology Admit Handler" podUID="28d2fb0d864dae9de8ce2679792733fa" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.236239 kubelet[3468]: I1213 13:15:19.235675 3468 topology_manager.go:215] "Topology Admit Handler" podUID="963b5446064aefb95a20a169924014fc" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-1" Dec 13 13:15:19.251935 kubelet[3468]: E1213 13:15:19.251245 3468 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-1\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.281935 kubelet[3468]: I1213 13:15:19.281294 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-ca-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 
13:15:19.281935 kubelet[3468]: I1213 13:15:19.281362 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 13:15:19.281935 kubelet[3468]: I1213 13:15:19.281408 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.281935 kubelet[3468]: I1213 13:15:19.281446 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.281935 kubelet[3468]: I1213 13:15:19.281482 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/963b5446064aefb95a20a169924014fc-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-1\" (UID: \"963b5446064aefb95a20a169924014fc\") " pod="kube-system/kube-scheduler-ip-172-31-29-1" Dec 13 13:15:19.282334 kubelet[3468]: I1213 13:15:19.281516 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7964d3e55267dee149f7bfc09237350b-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-1\" (UID: \"7964d3e55267dee149f7bfc09237350b\") " 
pod="kube-system/kube-apiserver-ip-172-31-29-1" Dec 13 13:15:19.282334 kubelet[3468]: I1213 13:15:19.281564 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.282334 kubelet[3468]: I1213 13:15:19.281598 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.282334 kubelet[3468]: I1213 13:15:19.281633 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/28d2fb0d864dae9de8ce2679792733fa-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-1\" (UID: \"28d2fb0d864dae9de8ce2679792733fa\") " pod="kube-system/kube-controller-manager-ip-172-31-29-1" Dec 13 13:15:19.745823 sudo[3481]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:19.809977 kubelet[3468]: I1213 13:15:19.809910 3468 apiserver.go:52] "Watching apiserver" Dec 13 13:15:19.876388 kubelet[3468]: I1213 13:15:19.876305 3468 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:15:19.929018 kubelet[3468]: I1213 13:15:19.928019 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-1" podStartSLOduration=0.927994704 podStartE2EDuration="927.994704ms" podCreationTimestamp="2024-12-13 13:15:19 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:19.912310486 +0000 UTC m=+1.235152870" watchObservedRunningTime="2024-12-13 13:15:19.927994704 +0000 UTC m=+1.250837064" Dec 13 13:15:19.929246 kubelet[3468]: I1213 13:15:19.928668 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-1" podStartSLOduration=0.928649427 podStartE2EDuration="928.649427ms" podCreationTimestamp="2024-12-13 13:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:19.928589937 +0000 UTC m=+1.251432309" watchObservedRunningTime="2024-12-13 13:15:19.928649427 +0000 UTC m=+1.251491799" Dec 13 13:15:20.006200 kubelet[3468]: I1213 13:15:20.004359 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-1" podStartSLOduration=3.004340246 podStartE2EDuration="3.004340246s" podCreationTimestamp="2024-12-13 13:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:19.954682001 +0000 UTC m=+1.277524349" watchObservedRunningTime="2024-12-13 13:15:20.004340246 +0000 UTC m=+1.327182618" Dec 13 13:15:22.668415 sudo[2300]: pam_unix(sudo:session): session closed for user root Dec 13 13:15:22.692085 sshd[2299]: Connection closed by 139.178.89.65 port 44214 Dec 13 13:15:22.693074 sshd-session[2297]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:22.699364 systemd[1]: sshd@8-172.31.29.1:22-139.178.89.65:44214.service: Deactivated successfully. Dec 13 13:15:22.704465 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:15:22.705184 systemd[1]: session-9.scope: Consumed 10.949s CPU time, 187.9M memory peak, 0B memory swap peak. 
Dec 13 13:15:22.708399 systemd-logind[1926]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:15:22.711115 systemd-logind[1926]: Removed session 9. Dec 13 13:15:33.062941 kubelet[3468]: I1213 13:15:33.062693 3468 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:15:33.065575 containerd[1955]: time="2024-12-13T13:15:33.065507061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:15:33.066528 kubelet[3468]: I1213 13:15:33.066424 3468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:15:34.068116 kubelet[3468]: I1213 13:15:34.067949 3468 topology_manager.go:215] "Topology Admit Handler" podUID="96616e09-0e4e-4b60-886f-873373a0fd0b" podNamespace="kube-system" podName="kube-proxy-b6sqp" Dec 13 13:15:34.090175 systemd[1]: Created slice kubepods-besteffort-pod96616e09_0e4e_4b60_886f_873373a0fd0b.slice - libcontainer container kubepods-besteffort-pod96616e09_0e4e_4b60_886f_873373a0fd0b.slice. Dec 13 13:15:34.126384 kubelet[3468]: I1213 13:15:34.124049 3468 topology_manager.go:215] "Topology Admit Handler" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" podNamespace="kube-system" podName="cilium-bfxz7" Dec 13 13:15:34.141364 systemd[1]: Created slice kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice - libcontainer container kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice. 
Dec 13 13:15:34.147154 kubelet[3468]: W1213 13:15:34.147100 3468 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.147406 kubelet[3468]: E1213 13:15:34.147382 3468 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.148763 kubelet[3468]: W1213 13:15:34.148612 3468 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.148763 kubelet[3468]: E1213 13:15:34.148678 3468 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.148763 kubelet[3468]: W1213 13:15:34.148612 3468 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.148763 kubelet[3468]: E1213 
13:15:34.148719 3468 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object Dec 13 13:15:34.176082 kubelet[3468]: I1213 13:15:34.175815 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96616e09-0e4e-4b60-886f-873373a0fd0b-xtables-lock\") pod \"kube-proxy-b6sqp\" (UID: \"96616e09-0e4e-4b60-886f-873373a0fd0b\") " pod="kube-system/kube-proxy-b6sqp" Dec 13 13:15:34.176082 kubelet[3468]: I1213 13:15:34.175887 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-cgroup\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.176082 kubelet[3468]: I1213 13:15:34.175925 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-config-path\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.176082 kubelet[3468]: I1213 13:15:34.175965 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96616e09-0e4e-4b60-886f-873373a0fd0b-kube-proxy\") pod \"kube-proxy-b6sqp\" (UID: \"96616e09-0e4e-4b60-886f-873373a0fd0b\") " pod="kube-system/kube-proxy-b6sqp" Dec 13 13:15:34.176082 kubelet[3468]: I1213 13:15:34.176004 3468 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96616e09-0e4e-4b60-886f-873373a0fd0b-lib-modules\") pod \"kube-proxy-b6sqp\" (UID: \"96616e09-0e4e-4b60-886f-873373a0fd0b\") " pod="kube-system/kube-proxy-b6sqp" Dec 13 13:15:34.176500 kubelet[3468]: I1213 13:15:34.176040 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-kernel\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.176942 kubelet[3468]: I1213 13:15:34.176648 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqccb\" (UniqueName: \"kubernetes.io/projected/96616e09-0e4e-4b60-886f-873373a0fd0b-kube-api-access-rqccb\") pod \"kube-proxy-b6sqp\" (UID: \"96616e09-0e4e-4b60-886f-873373a0fd0b\") " pod="kube-system/kube-proxy-b6sqp" Dec 13 13:15:34.176942 kubelet[3468]: I1213 13:15:34.176739 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-hostproc\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.176942 kubelet[3468]: I1213 13:15:34.176807 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-etc-cni-netd\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.176942 kubelet[3468]: I1213 13:15:34.176874 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-bpf-maps\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177467 kubelet[3468]: I1213 13:15:34.176914 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cni-path\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177467 kubelet[3468]: I1213 13:15:34.177309 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-lib-modules\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177467 kubelet[3468]: I1213 13:15:34.177354 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-run\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177467 kubelet[3468]: I1213 13:15:34.177415 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-xtables-lock\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177986 kubelet[3468]: I1213 13:15:34.177739 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87a88278-f98c-4be1-a66d-0af03149fc84-clustermesh-secrets\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " 
pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177986 kubelet[3468]: I1213 13:15:34.177828 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-net\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177986 kubelet[3468]: I1213 13:15:34.177866 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-hubble-tls\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.177986 kubelet[3468]: I1213 13:15:34.177930 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx64w\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-kube-api-access-lx64w\") pod \"cilium-bfxz7\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") " pod="kube-system/cilium-bfxz7" Dec 13 13:15:34.214255 kubelet[3468]: I1213 13:15:34.214174 3468 topology_manager.go:215] "Topology Admit Handler" podUID="522cc9b2-fe6e-4e03-8233-adbdcb02a303" podNamespace="kube-system" podName="cilium-operator-599987898-26dx2" Dec 13 13:15:34.232025 systemd[1]: Created slice kubepods-besteffort-pod522cc9b2_fe6e_4e03_8233_adbdcb02a303.slice - libcontainer container kubepods-besteffort-pod522cc9b2_fe6e_4e03_8233_adbdcb02a303.slice. 
Dec 13 13:15:34.279179 kubelet[3468]: I1213 13:15:34.278997 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/522cc9b2-fe6e-4e03-8233-adbdcb02a303-cilium-config-path\") pod \"cilium-operator-599987898-26dx2\" (UID: \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\") " pod="kube-system/cilium-operator-599987898-26dx2" Dec 13 13:15:34.279359 kubelet[3468]: I1213 13:15:34.279189 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfpb\" (UniqueName: \"kubernetes.io/projected/522cc9b2-fe6e-4e03-8233-adbdcb02a303-kube-api-access-9kfpb\") pod \"cilium-operator-599987898-26dx2\" (UID: \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\") " pod="kube-system/cilium-operator-599987898-26dx2" Dec 13 13:15:34.410714 containerd[1955]: time="2024-12-13T13:15:34.409493600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6sqp,Uid:96616e09-0e4e-4b60-886f-873373a0fd0b,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:34.491033 containerd[1955]: time="2024-12-13T13:15:34.490583829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:34.491033 containerd[1955]: time="2024-12-13T13:15:34.490673994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:34.491033 containerd[1955]: time="2024-12-13T13:15:34.490699219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:34.491033 containerd[1955]: time="2024-12-13T13:15:34.490834094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:34.542541 systemd[1]: Started cri-containerd-9de85aaebab2eb2ccd633dfb59a8f248e3ee7ff517328e5ca6e3cfd2fc2d3209.scope - libcontainer container 9de85aaebab2eb2ccd633dfb59a8f248e3ee7ff517328e5ca6e3cfd2fc2d3209. Dec 13 13:15:34.581658 containerd[1955]: time="2024-12-13T13:15:34.581477546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b6sqp,Uid:96616e09-0e4e-4b60-886f-873373a0fd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de85aaebab2eb2ccd633dfb59a8f248e3ee7ff517328e5ca6e3cfd2fc2d3209\"" Dec 13 13:15:34.589596 containerd[1955]: time="2024-12-13T13:15:34.589500747Z" level=info msg="CreateContainer within sandbox \"9de85aaebab2eb2ccd633dfb59a8f248e3ee7ff517328e5ca6e3cfd2fc2d3209\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:15:34.621640 containerd[1955]: time="2024-12-13T13:15:34.621463277Z" level=info msg="CreateContainer within sandbox \"9de85aaebab2eb2ccd633dfb59a8f248e3ee7ff517328e5ca6e3cfd2fc2d3209\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f12572ef42ecf58a66186d1087b6e2da703603f7b96964f67aa2cbab8d7db044\"" Dec 13 13:15:34.623842 containerd[1955]: time="2024-12-13T13:15:34.622385398Z" level=info msg="StartContainer for \"f12572ef42ecf58a66186d1087b6e2da703603f7b96964f67aa2cbab8d7db044\"" Dec 13 13:15:34.675545 systemd[1]: Started cri-containerd-f12572ef42ecf58a66186d1087b6e2da703603f7b96964f67aa2cbab8d7db044.scope - libcontainer container f12572ef42ecf58a66186d1087b6e2da703603f7b96964f67aa2cbab8d7db044. 
Dec 13 13:15:34.733850 containerd[1955]: time="2024-12-13T13:15:34.733795700Z" level=info msg="StartContainer for \"f12572ef42ecf58a66186d1087b6e2da703603f7b96964f67aa2cbab8d7db044\" returns successfully" Dec 13 13:15:35.348484 containerd[1955]: time="2024-12-13T13:15:35.348426822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfxz7,Uid:87a88278-f98c-4be1-a66d-0af03149fc84,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:35.404680 containerd[1955]: time="2024-12-13T13:15:35.404545916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:35.404850 containerd[1955]: time="2024-12-13T13:15:35.404712535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:35.404850 containerd[1955]: time="2024-12-13T13:15:35.404798906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:35.405281 containerd[1955]: time="2024-12-13T13:15:35.405017199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:35.441578 systemd[1]: Started cri-containerd-b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a.scope - libcontainer container b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a. 
Dec 13 13:15:35.445633 containerd[1955]: time="2024-12-13T13:15:35.445416045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-26dx2,Uid:522cc9b2-fe6e-4e03-8233-adbdcb02a303,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:35.513205 containerd[1955]: time="2024-12-13T13:15:35.512825798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfxz7,Uid:87a88278-f98c-4be1-a66d-0af03149fc84,Namespace:kube-system,Attempt:0,} returns sandbox id \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\"" Dec 13 13:15:35.517676 containerd[1955]: time="2024-12-13T13:15:35.516428942Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:15:35.527378 containerd[1955]: time="2024-12-13T13:15:35.527187315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:35.527800 containerd[1955]: time="2024-12-13T13:15:35.527738270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:35.527989 containerd[1955]: time="2024-12-13T13:15:35.527944329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:35.528458 containerd[1955]: time="2024-12-13T13:15:35.528386558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:35.567545 systemd[1]: Started cri-containerd-6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d.scope - libcontainer container 6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d. 
Dec 13 13:15:35.630885 containerd[1955]: time="2024-12-13T13:15:35.630498537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-26dx2,Uid:522cc9b2-fe6e-4e03-8233-adbdcb02a303,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\"" Dec 13 13:15:41.186172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231301162.mount: Deactivated successfully. Dec 13 13:15:43.773478 containerd[1955]: time="2024-12-13T13:15:43.773395220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:15:43.775584 containerd[1955]: time="2024-12-13T13:15:43.775244816Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651530" Dec 13 13:15:43.777818 containerd[1955]: time="2024-12-13T13:15:43.777742592Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:15:43.781286 containerd[1955]: time="2024-12-13T13:15:43.781113901Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.264617269s" Dec 13 13:15:43.781651 containerd[1955]: time="2024-12-13T13:15:43.781173631Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 13:15:43.784624 containerd[1955]: time="2024-12-13T13:15:43.784393484Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:15:43.793437 containerd[1955]: time="2024-12-13T13:15:43.792728277Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:15:43.822427 containerd[1955]: time="2024-12-13T13:15:43.822366253Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\"" Dec 13 13:15:43.825001 containerd[1955]: time="2024-12-13T13:15:43.824911921Z" level=info msg="StartContainer for \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\"" Dec 13 13:15:43.881589 systemd[1]: run-containerd-runc-k8s.io-358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d-runc.ifTr04.mount: Deactivated successfully. Dec 13 13:15:43.895546 systemd[1]: Started cri-containerd-358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d.scope - libcontainer container 358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d. Dec 13 13:15:43.946882 containerd[1955]: time="2024-12-13T13:15:43.946812178Z" level=info msg="StartContainer for \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\" returns successfully" Dec 13 13:15:43.963742 systemd[1]: cri-containerd-358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d.scope: Deactivated successfully. 
Dec 13 13:15:44.098756 kubelet[3468]: I1213 13:15:44.096676 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b6sqp" podStartSLOduration=10.096655355 podStartE2EDuration="10.096655355s" podCreationTimestamp="2024-12-13 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:35.031031521 +0000 UTC m=+16.353873894" watchObservedRunningTime="2024-12-13 13:15:44.096655355 +0000 UTC m=+25.419497715" Dec 13 13:15:44.813174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d-rootfs.mount: Deactivated successfully. Dec 13 13:15:45.003425 containerd[1955]: time="2024-12-13T13:15:45.003315669Z" level=info msg="shim disconnected" id=358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d namespace=k8s.io Dec 13 13:15:45.003425 containerd[1955]: time="2024-12-13T13:15:45.003392880Z" level=warning msg="cleaning up after shim disconnected" id=358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d namespace=k8s.io Dec 13 13:15:45.003425 containerd[1955]: time="2024-12-13T13:15:45.003415583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:15:45.026092 containerd[1955]: time="2024-12-13T13:15:45.026006517Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:15:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:15:45.080718 containerd[1955]: time="2024-12-13T13:15:45.079606140Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:15:45.126441 containerd[1955]: time="2024-12-13T13:15:45.126309084Z" level=info msg="CreateContainer 
within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\"" Dec 13 13:15:45.128314 containerd[1955]: time="2024-12-13T13:15:45.127497294Z" level=info msg="StartContainer for \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\"" Dec 13 13:15:45.187559 systemd[1]: Started cri-containerd-52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50.scope - libcontainer container 52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50. Dec 13 13:15:45.234567 containerd[1955]: time="2024-12-13T13:15:45.234490288Z" level=info msg="StartContainer for \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\" returns successfully" Dec 13 13:15:45.257066 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:15:45.257919 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:15:45.258595 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:15:45.269652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:15:45.270128 systemd[1]: cri-containerd-52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50.scope: Deactivated successfully. Dec 13 13:15:45.316879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 13:15:45.319373 containerd[1955]: time="2024-12-13T13:15:45.319011136Z" level=info msg="shim disconnected" id=52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50 namespace=k8s.io Dec 13 13:15:45.319373 containerd[1955]: time="2024-12-13T13:15:45.319085381Z" level=warning msg="cleaning up after shim disconnected" id=52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50 namespace=k8s.io Dec 13 13:15:45.319373 containerd[1955]: time="2024-12-13T13:15:45.319107244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:15:45.812645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50-rootfs.mount: Deactivated successfully. Dec 13 13:15:46.089019 containerd[1955]: time="2024-12-13T13:15:46.088709246Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:15:46.124462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404988125.mount: Deactivated successfully. Dec 13 13:15:46.130447 containerd[1955]: time="2024-12-13T13:15:46.130384558Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\"" Dec 13 13:15:46.131298 containerd[1955]: time="2024-12-13T13:15:46.131238172Z" level=info msg="StartContainer for \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\"" Dec 13 13:15:46.203544 systemd[1]: Started cri-containerd-3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4.scope - libcontainer container 3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4. 
Dec 13 13:15:46.260816 systemd[1]: cri-containerd-3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4.scope: Deactivated successfully. Dec 13 13:15:46.261892 containerd[1955]: time="2024-12-13T13:15:46.260908088Z" level=info msg="StartContainer for \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\" returns successfully" Dec 13 13:15:46.311071 containerd[1955]: time="2024-12-13T13:15:46.310991501Z" level=info msg="shim disconnected" id=3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4 namespace=k8s.io Dec 13 13:15:46.311734 containerd[1955]: time="2024-12-13T13:15:46.311354131Z" level=warning msg="cleaning up after shim disconnected" id=3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4 namespace=k8s.io Dec 13 13:15:46.311734 containerd[1955]: time="2024-12-13T13:15:46.311383797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:15:46.813045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4-rootfs.mount: Deactivated successfully. 
Dec 13 13:15:47.097026 containerd[1955]: time="2024-12-13T13:15:47.096178864Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:15:47.134917 containerd[1955]: time="2024-12-13T13:15:47.133874211Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\"" Dec 13 13:15:47.136964 containerd[1955]: time="2024-12-13T13:15:47.135659527Z" level=info msg="StartContainer for \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\"" Dec 13 13:15:47.192631 systemd[1]: Started cri-containerd-36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f.scope - libcontainer container 36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f. Dec 13 13:15:47.238606 systemd[1]: cri-containerd-36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f.scope: Deactivated successfully. 
Dec 13 13:15:47.244612 containerd[1955]: time="2024-12-13T13:15:47.244418568Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice/cri-containerd-36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f.scope/memory.events\": no such file or directory"
Dec 13 13:15:47.245776 containerd[1955]: time="2024-12-13T13:15:47.245566066Z" level=info msg="StartContainer for \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\" returns successfully"
Dec 13 13:15:47.287534 containerd[1955]: time="2024-12-13T13:15:47.287448625Z" level=info msg="shim disconnected" id=36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f namespace=k8s.io
Dec 13 13:15:47.287534 containerd[1955]: time="2024-12-13T13:15:47.287526063Z" level=warning msg="cleaning up after shim disconnected" id=36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f namespace=k8s.io
Dec 13 13:15:47.287903 containerd[1955]: time="2024-12-13T13:15:47.287547062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:15:47.814336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f-rootfs.mount: Deactivated successfully.
Dec 13 13:15:48.102681 containerd[1955]: time="2024-12-13T13:15:48.101722708Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 13:15:48.145929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348410906.mount: Deactivated successfully.
Dec 13 13:15:48.150607 containerd[1955]: time="2024-12-13T13:15:48.150530232Z" level=info msg="CreateContainer within sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\""
Dec 13 13:15:48.152259 containerd[1955]: time="2024-12-13T13:15:48.152160719Z" level=info msg="StartContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\""
Dec 13 13:15:48.214558 systemd[1]: Started cri-containerd-412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91.scope - libcontainer container 412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91.
Dec 13 13:15:48.293992 containerd[1955]: time="2024-12-13T13:15:48.293814567Z" level=info msg="StartContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" returns successfully"
Dec 13 13:15:48.582128 kubelet[3468]: I1213 13:15:48.582039 3468 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 13:15:48.645859 kubelet[3468]: I1213 13:15:48.645752 3468 topology_manager.go:215] "Topology Admit Handler" podUID="c064206f-5bf7-4ac7-b466-b629bbd658a2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w6jzx"
Dec 13 13:15:48.678593 systemd[1]: Created slice kubepods-burstable-podc064206f_5bf7_4ac7_b466_b629bbd658a2.slice - libcontainer container kubepods-burstable-podc064206f_5bf7_4ac7_b466_b629bbd658a2.slice.
Dec 13 13:15:48.685387 kubelet[3468]: I1213 13:15:48.685324 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c064206f-5bf7-4ac7-b466-b629bbd658a2-config-volume\") pod \"coredns-7db6d8ff4d-w6jzx\" (UID: \"c064206f-5bf7-4ac7-b466-b629bbd658a2\") " pod="kube-system/coredns-7db6d8ff4d-w6jzx"
Dec 13 13:15:48.685552 kubelet[3468]: I1213 13:15:48.685393 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l7k9\" (UniqueName: \"kubernetes.io/projected/c064206f-5bf7-4ac7-b466-b629bbd658a2-kube-api-access-5l7k9\") pod \"coredns-7db6d8ff4d-w6jzx\" (UID: \"c064206f-5bf7-4ac7-b466-b629bbd658a2\") " pod="kube-system/coredns-7db6d8ff4d-w6jzx"
Dec 13 13:15:48.697113 kubelet[3468]: W1213 13:15:48.697041 3468 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object
Dec 13 13:15:48.697113 kubelet[3468]: E1213 13:15:48.697100 3468 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-29-1" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-1' and this object
Dec 13 13:15:48.701708 kubelet[3468]: I1213 13:15:48.699473 3468 topology_manager.go:215] "Topology Admit Handler" podUID="23e931cd-8beb-4bf0-9182-60ad20c0ac53" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5hnzk"
Dec 13 13:15:48.739655 systemd[1]: Created slice kubepods-burstable-pod23e931cd_8beb_4bf0_9182_60ad20c0ac53.slice - libcontainer container kubepods-burstable-pod23e931cd_8beb_4bf0_9182_60ad20c0ac53.slice.
Dec 13 13:15:48.786889 kubelet[3468]: I1213 13:15:48.786773 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e931cd-8beb-4bf0-9182-60ad20c0ac53-config-volume\") pod \"coredns-7db6d8ff4d-5hnzk\" (UID: \"23e931cd-8beb-4bf0-9182-60ad20c0ac53\") " pod="kube-system/coredns-7db6d8ff4d-5hnzk"
Dec 13 13:15:48.786889 kubelet[3468]: I1213 13:15:48.786854 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqw27\" (UniqueName: \"kubernetes.io/projected/23e931cd-8beb-4bf0-9182-60ad20c0ac53-kube-api-access-rqw27\") pod \"coredns-7db6d8ff4d-5hnzk\" (UID: \"23e931cd-8beb-4bf0-9182-60ad20c0ac53\") " pod="kube-system/coredns-7db6d8ff4d-5hnzk"
Dec 13 13:15:48.816761 systemd[1]: run-containerd-runc-k8s.io-412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91-runc.nXP7my.mount: Deactivated successfully.
Dec 13 13:15:49.808320 containerd[1955]: time="2024-12-13T13:15:49.807924614Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:15:49.810888 containerd[1955]: time="2024-12-13T13:15:49.810389855Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138310"
Dec 13 13:15:49.813501 containerd[1955]: time="2024-12-13T13:15:49.813406254Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:15:49.818674 containerd[1955]: time="2024-12-13T13:15:49.818474107Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.034019608s"
Dec 13 13:15:49.818674 containerd[1955]: time="2024-12-13T13:15:49.818531015Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Dec 13 13:15:49.827400 containerd[1955]: time="2024-12-13T13:15:49.826178981Z" level=info msg="CreateContainer within sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 13:15:49.860173 containerd[1955]: time="2024-12-13T13:15:49.859975571Z" level=info msg="CreateContainer within sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\""
Dec 13 13:15:49.861963 containerd[1955]: time="2024-12-13T13:15:49.861819981Z" level=info msg="StartContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\""
Dec 13 13:15:49.916965 containerd[1955]: time="2024-12-13T13:15:49.916713358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6jzx,Uid:c064206f-5bf7-4ac7-b466-b629bbd658a2,Namespace:kube-system,Attempt:0,}"
Dec 13 13:15:49.932534 systemd[1]: Started cri-containerd-188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f.scope - libcontainer container 188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f.
Dec 13 13:15:49.950187 containerd[1955]: time="2024-12-13T13:15:49.949666131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hnzk,Uid:23e931cd-8beb-4bf0-9182-60ad20c0ac53,Namespace:kube-system,Attempt:0,}"
Dec 13 13:15:50.066634 containerd[1955]: time="2024-12-13T13:15:50.066479061Z" level=info msg="StartContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" returns successfully"
Dec 13 13:15:50.141847 kubelet[3468]: I1213 13:15:50.141733 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bfxz7" podStartSLOduration=7.873357186 podStartE2EDuration="16.141710518s" podCreationTimestamp="2024-12-13 13:15:34 +0000 UTC" firstStartedPulling="2024-12-13 13:15:35.515685951 +0000 UTC m=+16.838528311" lastFinishedPulling="2024-12-13 13:15:43.784039235 +0000 UTC m=+25.106881643" observedRunningTime="2024-12-13 13:15:49.177177298 +0000 UTC m=+30.500019682" watchObservedRunningTime="2024-12-13 13:15:50.141710518 +0000 UTC m=+31.464552878"
Dec 13 13:15:53.689303 systemd-networkd[1848]: cilium_host: Link UP
Dec 13 13:15:53.689733 systemd-networkd[1848]: cilium_net: Link UP
Dec 13 13:15:53.692283 systemd-networkd[1848]: cilium_net: Gained carrier
Dec 13 13:15:53.692779 systemd-networkd[1848]: cilium_host: Gained carrier
Dec 13 13:15:53.702637 (udev-worker)[4303]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 13:15:53.703608 (udev-worker)[4302]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 13:15:53.859539 systemd-networkd[1848]: cilium_host: Gained IPv6LL
Dec 13 13:15:53.860763 (udev-worker)[4310]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 13:15:53.872309 systemd-networkd[1848]: cilium_vxlan: Link UP
Dec 13 13:15:53.872339 systemd-networkd[1848]: cilium_vxlan: Gained carrier
Dec 13 13:15:54.355256 kernel: NET: Registered PF_ALG protocol family
Dec 13 13:15:54.426882 systemd-networkd[1848]: cilium_net: Gained IPv6LL
Dec 13 13:15:55.515043 systemd-networkd[1848]: cilium_vxlan: Gained IPv6LL
Dec 13 13:15:55.681148 (udev-worker)[4312]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 13:15:55.684903 systemd-networkd[1848]: lxc_health: Link UP
Dec 13 13:15:55.703630 systemd-networkd[1848]: lxc_health: Gained carrier
Dec 13 13:15:56.034058 systemd-networkd[1848]: lxca575df6e3162: Link UP
Dec 13 13:15:56.039371 kernel: eth0: renamed from tmp38e90
Dec 13 13:15:56.045953 systemd-networkd[1848]: lxca575df6e3162: Gained carrier
Dec 13 13:15:56.110431 systemd-networkd[1848]: lxcac1097e08296: Link UP
Dec 13 13:15:56.122286 kernel: eth0: renamed from tmp08353
Dec 13 13:15:56.141247 systemd-networkd[1848]: lxcac1097e08296: Gained carrier
Dec 13 13:15:56.511769 systemd[1]: Started sshd@9-172.31.29.1:22-139.178.89.65:49282.service - OpenSSH per-connection server daemon (139.178.89.65:49282).
Dec 13 13:15:56.738269 sshd[4646]: Accepted publickey for core from 139.178.89.65 port 49282 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:15:56.740165 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:15:56.755199 systemd-logind[1926]: New session 10 of user core.
Dec 13 13:15:56.759629 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:15:57.117195 sshd[4653]: Connection closed by 139.178.89.65 port 49282
Dec 13 13:15:57.118149 sshd-session[4646]: pam_unix(sshd:session): session closed for user core
Dec 13 13:15:57.125730 systemd-logind[1926]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:15:57.126511 systemd[1]: sshd@9-172.31.29.1:22-139.178.89.65:49282.service: Deactivated successfully.
Dec 13 13:15:57.133101 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:15:57.138737 systemd-logind[1926]: Removed session 10.
Dec 13 13:15:57.394863 kubelet[3468]: I1213 13:15:57.394082 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-26dx2" podStartSLOduration=9.207533646 podStartE2EDuration="23.394061496s" podCreationTimestamp="2024-12-13 13:15:34 +0000 UTC" firstStartedPulling="2024-12-13 13:15:35.633609556 +0000 UTC m=+16.956451916" lastFinishedPulling="2024-12-13 13:15:49.820137406 +0000 UTC m=+31.142979766" observedRunningTime="2024-12-13 13:15:50.143753555 +0000 UTC m=+31.466595915" watchObservedRunningTime="2024-12-13 13:15:57.394061496 +0000 UTC m=+38.716903856"
Dec 13 13:15:57.434508 systemd-networkd[1848]: lxc_health: Gained IPv6LL
Dec 13 13:15:57.498459 systemd-networkd[1848]: lxcac1097e08296: Gained IPv6LL
Dec 13 13:15:57.946497 systemd-networkd[1848]: lxca575df6e3162: Gained IPv6LL
Dec 13 13:16:00.130804 ntpd[1921]: Listen normally on 8 cilium_host 192.168.0.181:123
Dec 13 13:16:00.130942 ntpd[1921]: Listen normally on 9 cilium_net [fe80::847e:1cff:fe90:c95a%4]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 8 cilium_host 192.168.0.181:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 9 cilium_net [fe80::847e:1cff:fe90:c95a%4]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 10 cilium_host [fe80::30c8:e9ff:fe17:fbaf%5]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 11 cilium_vxlan [fe80::7c07:81ff:fe98:f57b%6]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 12 lxc_health [fe80::980b:b9ff:fecc:76e2%8]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 13 lxca575df6e3162 [fe80::a8dc:ebff:fe02:224a%10]:123
Dec 13 13:16:00.131450 ntpd[1921]: 13 Dec 13:16:00 ntpd[1921]: Listen normally on 14 lxcac1097e08296 [fe80::cced:94ff:feff:5d6f%12]:123
Dec 13 13:16:00.131025 ntpd[1921]: Listen normally on 10 cilium_host [fe80::30c8:e9ff:fe17:fbaf%5]:123
Dec 13 13:16:00.131091 ntpd[1921]: Listen normally on 11 cilium_vxlan [fe80::7c07:81ff:fe98:f57b%6]:123
Dec 13 13:16:00.131166 ntpd[1921]: Listen normally on 12 lxc_health [fe80::980b:b9ff:fecc:76e2%8]:123
Dec 13 13:16:00.131329 ntpd[1921]: Listen normally on 13 lxca575df6e3162 [fe80::a8dc:ebff:fe02:224a%10]:123
Dec 13 13:16:00.131409 ntpd[1921]: Listen normally on 14 lxcac1097e08296 [fe80::cced:94ff:feff:5d6f%12]:123
Dec 13 13:16:02.161655 systemd[1]: Started sshd@10-172.31.29.1:22-139.178.89.65:54906.service - OpenSSH per-connection server daemon (139.178.89.65:54906).
Dec 13 13:16:02.366869 sshd[4679]: Accepted publickey for core from 139.178.89.65 port 54906 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:02.369793 sshd-session[4679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:02.380922 systemd-logind[1926]: New session 11 of user core.
Dec 13 13:16:02.386834 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:16:02.670939 sshd[4681]: Connection closed by 139.178.89.65 port 54906
Dec 13 13:16:02.673578 sshd-session[4679]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:02.680150 systemd-logind[1926]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:16:02.686755 systemd[1]: sshd@10-172.31.29.1:22-139.178.89.65:54906.service: Deactivated successfully.
Dec 13 13:16:02.696923 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:16:02.701184 systemd-logind[1926]: Removed session 11.
Dec 13 13:16:04.870318 containerd[1955]: time="2024-12-13T13:16:04.869650983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:16:04.870318 containerd[1955]: time="2024-12-13T13:16:04.869809462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:16:04.870318 containerd[1955]: time="2024-12-13T13:16:04.869839513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:16:04.870318 containerd[1955]: time="2024-12-13T13:16:04.870015149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:16:04.946974 systemd[1]: Started cri-containerd-38e90c8ac7c5d86f9770e69f42a41f23693fdbd0e0863df879972fcd917e3520.scope - libcontainer container 38e90c8ac7c5d86f9770e69f42a41f23693fdbd0e0863df879972fcd917e3520.
Dec 13 13:16:04.984229 containerd[1955]: time="2024-12-13T13:16:04.982490779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:16:04.984229 containerd[1955]: time="2024-12-13T13:16:04.982616314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:16:04.984229 containerd[1955]: time="2024-12-13T13:16:04.982652692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:16:04.985436 containerd[1955]: time="2024-12-13T13:16:04.984778607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:16:05.063592 systemd[1]: Started cri-containerd-08353e3f69c5c159659e69a9d0cb1ba64903e15f063d83425ea40d8111afb506.scope - libcontainer container 08353e3f69c5c159659e69a9d0cb1ba64903e15f063d83425ea40d8111afb506.
Dec 13 13:16:05.096954 containerd[1955]: time="2024-12-13T13:16:05.096848098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w6jzx,Uid:c064206f-5bf7-4ac7-b466-b629bbd658a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e90c8ac7c5d86f9770e69f42a41f23693fdbd0e0863df879972fcd917e3520\""
Dec 13 13:16:05.104053 containerd[1955]: time="2024-12-13T13:16:05.103572935Z" level=info msg="CreateContainer within sandbox \"38e90c8ac7c5d86f9770e69f42a41f23693fdbd0e0863df879972fcd917e3520\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:16:05.140143 containerd[1955]: time="2024-12-13T13:16:05.139726831Z" level=info msg="CreateContainer within sandbox \"38e90c8ac7c5d86f9770e69f42a41f23693fdbd0e0863df879972fcd917e3520\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ae4ef702727b59211efd66bf7f1033b54455dd3564049b19b15db4546239ac3\""
Dec 13 13:16:05.143303 containerd[1955]: time="2024-12-13T13:16:05.143014530Z" level=info msg="StartContainer for \"5ae4ef702727b59211efd66bf7f1033b54455dd3564049b19b15db4546239ac3\""
Dec 13 13:16:05.217527 systemd[1]: Started cri-containerd-5ae4ef702727b59211efd66bf7f1033b54455dd3564049b19b15db4546239ac3.scope - libcontainer container 5ae4ef702727b59211efd66bf7f1033b54455dd3564049b19b15db4546239ac3.
Dec 13 13:16:05.230036 containerd[1955]: time="2024-12-13T13:16:05.229961947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5hnzk,Uid:23e931cd-8beb-4bf0-9182-60ad20c0ac53,Namespace:kube-system,Attempt:0,} returns sandbox id \"08353e3f69c5c159659e69a9d0cb1ba64903e15f063d83425ea40d8111afb506\""
Dec 13 13:16:05.241813 containerd[1955]: time="2024-12-13T13:16:05.241742582Z" level=info msg="CreateContainer within sandbox \"08353e3f69c5c159659e69a9d0cb1ba64903e15f063d83425ea40d8111afb506\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:16:05.288152 containerd[1955]: time="2024-12-13T13:16:05.288039051Z" level=info msg="CreateContainer within sandbox \"08353e3f69c5c159659e69a9d0cb1ba64903e15f063d83425ea40d8111afb506\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1004527ac78eea2d0c893dd12baecfb697abede854d4ab875f8db778b242d792\""
Dec 13 13:16:05.290282 containerd[1955]: time="2024-12-13T13:16:05.290003929Z" level=info msg="StartContainer for \"1004527ac78eea2d0c893dd12baecfb697abede854d4ab875f8db778b242d792\""
Dec 13 13:16:05.344096 containerd[1955]: time="2024-12-13T13:16:05.343425263Z" level=info msg="StartContainer for \"5ae4ef702727b59211efd66bf7f1033b54455dd3564049b19b15db4546239ac3\" returns successfully"
Dec 13 13:16:05.370562 systemd[1]: Started cri-containerd-1004527ac78eea2d0c893dd12baecfb697abede854d4ab875f8db778b242d792.scope - libcontainer container 1004527ac78eea2d0c893dd12baecfb697abede854d4ab875f8db778b242d792.
Dec 13 13:16:05.441682 containerd[1955]: time="2024-12-13T13:16:05.440854326Z" level=info msg="StartContainer for \"1004527ac78eea2d0c893dd12baecfb697abede854d4ab875f8db778b242d792\" returns successfully"
Dec 13 13:16:06.216667 kubelet[3468]: I1213 13:16:06.215388 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w6jzx" podStartSLOduration=32.215365906 podStartE2EDuration="32.215365906s" podCreationTimestamp="2024-12-13 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:16:06.21399428 +0000 UTC m=+47.536836628" watchObservedRunningTime="2024-12-13 13:16:06.215365906 +0000 UTC m=+47.538208266"
Dec 13 13:16:06.241595 kubelet[3468]: I1213 13:16:06.241495 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5hnzk" podStartSLOduration=32.241469555 podStartE2EDuration="32.241469555s" podCreationTimestamp="2024-12-13 13:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:16:06.238337382 +0000 UTC m=+47.561179754" watchObservedRunningTime="2024-12-13 13:16:06.241469555 +0000 UTC m=+47.564311915"
Dec 13 13:16:07.719743 systemd[1]: Started sshd@11-172.31.29.1:22-139.178.89.65:54914.service - OpenSSH per-connection server daemon (139.178.89.65:54914).
Dec 13 13:16:07.901781 sshd[4868]: Accepted publickey for core from 139.178.89.65 port 54914 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:07.904541 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:07.913800 systemd-logind[1926]: New session 12 of user core.
Dec 13 13:16:07.921548 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:16:08.172935 sshd[4870]: Connection closed by 139.178.89.65 port 54914
Dec 13 13:16:08.174179 sshd-session[4868]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:08.181566 systemd[1]: sshd@11-172.31.29.1:22-139.178.89.65:54914.service: Deactivated successfully.
Dec 13 13:16:08.185988 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:16:08.188061 systemd-logind[1926]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:16:08.190590 systemd-logind[1926]: Removed session 12.
Dec 13 13:16:13.220947 systemd[1]: Started sshd@12-172.31.29.1:22-139.178.89.65:58714.service - OpenSSH per-connection server daemon (139.178.89.65:58714).
Dec 13 13:16:13.409948 sshd[4885]: Accepted publickey for core from 139.178.89.65 port 58714 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:13.415728 sshd-session[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:13.427064 systemd-logind[1926]: New session 13 of user core.
Dec 13 13:16:13.432487 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:16:13.681477 sshd[4887]: Connection closed by 139.178.89.65 port 58714
Dec 13 13:16:13.682578 sshd-session[4885]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:13.689108 systemd[1]: sshd@12-172.31.29.1:22-139.178.89.65:58714.service: Deactivated successfully.
Dec 13 13:16:13.694083 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:16:13.697318 systemd-logind[1926]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:16:13.699138 systemd-logind[1926]: Removed session 13.
Dec 13 13:16:18.728702 systemd[1]: Started sshd@13-172.31.29.1:22-139.178.89.65:40564.service - OpenSSH per-connection server daemon (139.178.89.65:40564).
Dec 13 13:16:18.923759 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 40564 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:18.926393 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:18.936741 systemd-logind[1926]: New session 14 of user core.
Dec 13 13:16:18.945536 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:16:19.196984 sshd[4901]: Connection closed by 139.178.89.65 port 40564
Dec 13 13:16:19.198020 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:19.203931 systemd[1]: sshd@13-172.31.29.1:22-139.178.89.65:40564.service: Deactivated successfully.
Dec 13 13:16:19.207507 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:16:19.210985 systemd-logind[1926]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:16:19.214114 systemd-logind[1926]: Removed session 14.
Dec 13 13:16:19.237737 systemd[1]: Started sshd@14-172.31.29.1:22-139.178.89.65:40580.service - OpenSSH per-connection server daemon (139.178.89.65:40580).
Dec 13 13:16:19.437794 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 40580 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:19.440276 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:19.448321 systemd-logind[1926]: New session 15 of user core.
Dec 13 13:16:19.455786 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:16:19.786913 sshd[4917]: Connection closed by 139.178.89.65 port 40580
Dec 13 13:16:19.788337 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:19.799049 systemd[1]: sshd@14-172.31.29.1:22-139.178.89.65:40580.service: Deactivated successfully.
Dec 13 13:16:19.806687 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:16:19.812242 systemd-logind[1926]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:16:19.846768 systemd[1]: Started sshd@15-172.31.29.1:22-139.178.89.65:40592.service - OpenSSH per-connection server daemon (139.178.89.65:40592).
Dec 13 13:16:19.849511 systemd-logind[1926]: Removed session 15.
Dec 13 13:16:20.067080 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 40592 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:20.070460 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:20.079059 systemd-logind[1926]: New session 16 of user core.
Dec 13 13:16:20.089514 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:16:20.347275 sshd[4928]: Connection closed by 139.178.89.65 port 40592
Dec 13 13:16:20.348759 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:20.356140 systemd[1]: sshd@15-172.31.29.1:22-139.178.89.65:40592.service: Deactivated successfully.
Dec 13 13:16:20.359934 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:16:20.362764 systemd-logind[1926]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:16:20.364514 systemd-logind[1926]: Removed session 16.
Dec 13 13:16:25.390750 systemd[1]: Started sshd@16-172.31.29.1:22-139.178.89.65:40604.service - OpenSSH per-connection server daemon (139.178.89.65:40604).
Dec 13 13:16:25.582474 sshd[4939]: Accepted publickey for core from 139.178.89.65 port 40604 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:25.585049 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:25.593367 systemd-logind[1926]: New session 17 of user core.
Dec 13 13:16:25.600496 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:16:25.843301 sshd[4941]: Connection closed by 139.178.89.65 port 40604
Dec 13 13:16:25.844132 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:25.849462 systemd[1]: sshd@16-172.31.29.1:22-139.178.89.65:40604.service: Deactivated successfully.
Dec 13 13:16:25.853657 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:16:25.858423 systemd-logind[1926]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:16:25.860910 systemd-logind[1926]: Removed session 17.
Dec 13 13:16:30.884736 systemd[1]: Started sshd@17-172.31.29.1:22-139.178.89.65:56082.service - OpenSSH per-connection server daemon (139.178.89.65:56082).
Dec 13 13:16:31.073986 sshd[4952]: Accepted publickey for core from 139.178.89.65 port 56082 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:31.076676 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:31.087099 systemd-logind[1926]: New session 18 of user core.
Dec 13 13:16:31.095466 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:16:31.342108 sshd[4954]: Connection closed by 139.178.89.65 port 56082
Dec 13 13:16:31.341983 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:31.347867 systemd[1]: sshd@17-172.31.29.1:22-139.178.89.65:56082.service: Deactivated successfully.
Dec 13 13:16:31.352605 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:16:31.356459 systemd-logind[1926]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:16:31.359998 systemd-logind[1926]: Removed session 18.
Dec 13 13:16:36.390723 systemd[1]: Started sshd@18-172.31.29.1:22-139.178.89.65:56092.service - OpenSSH per-connection server daemon (139.178.89.65:56092).
Dec 13 13:16:36.582256 sshd[4968]: Accepted publickey for core from 139.178.89.65 port 56092 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:36.584984 sshd-session[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:36.594848 systemd-logind[1926]: New session 19 of user core.
Dec 13 13:16:36.601518 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:16:36.844569 sshd[4970]: Connection closed by 139.178.89.65 port 56092
Dec 13 13:16:36.845783 sshd-session[4968]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:36.850616 systemd[1]: sshd@18-172.31.29.1:22-139.178.89.65:56092.service: Deactivated successfully.
Dec 13 13:16:36.855720 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:16:36.859044 systemd-logind[1926]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:16:36.861011 systemd-logind[1926]: Removed session 19.
Dec 13 13:16:36.887748 systemd[1]: Started sshd@19-172.31.29.1:22-139.178.89.65:56098.service - OpenSSH per-connection server daemon (139.178.89.65:56098).
Dec 13 13:16:37.071170 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 56098 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:37.073675 sshd-session[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:37.082635 systemd-logind[1926]: New session 20 of user core.
Dec 13 13:16:37.087464 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:16:37.385235 sshd[4983]: Connection closed by 139.178.89.65 port 56098
Dec 13 13:16:37.386178 sshd-session[4981]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:37.391478 systemd[1]: sshd@19-172.31.29.1:22-139.178.89.65:56098.service: Deactivated successfully.
Dec 13 13:16:37.394834 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:16:37.399177 systemd-logind[1926]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:16:37.401785 systemd-logind[1926]: Removed session 20.
Dec 13 13:16:37.426472 systemd[1]: Started sshd@20-172.31.29.1:22-139.178.89.65:56108.service - OpenSSH per-connection server daemon (139.178.89.65:56108).
Dec 13 13:16:37.610602 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 56108 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:37.613493 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:37.622265 systemd-logind[1926]: New session 21 of user core.
Dec 13 13:16:37.629621 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:16:40.237438 sshd[4993]: Connection closed by 139.178.89.65 port 56108
Dec 13 13:16:40.238049 sshd-session[4991]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:40.246667 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:16:40.250562 systemd[1]: sshd@20-172.31.29.1:22-139.178.89.65:56108.service: Deactivated successfully.
Dec 13 13:16:40.263283 systemd-logind[1926]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:16:40.284729 systemd[1]: Started sshd@21-172.31.29.1:22-139.178.89.65:40918.service - OpenSSH per-connection server daemon (139.178.89.65:40918).
Dec 13 13:16:40.287449 systemd-logind[1926]: Removed session 21.
Dec 13 13:16:40.479597 sshd[5009]: Accepted publickey for core from 139.178.89.65 port 40918 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:40.482105 sshd-session[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:40.495892 systemd-logind[1926]: New session 22 of user core.
Dec 13 13:16:40.505531 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:16:40.991930 sshd[5011]: Connection closed by 139.178.89.65 port 40918
Dec 13 13:16:40.992526 sshd-session[5009]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:41.000123 systemd-logind[1926]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:16:41.001055 systemd[1]: sshd@21-172.31.29.1:22-139.178.89.65:40918.service: Deactivated successfully.
Dec 13 13:16:41.004873 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:16:41.008304 systemd-logind[1926]: Removed session 22.
Dec 13 13:16:41.033781 systemd[1]: Started sshd@22-172.31.29.1:22-139.178.89.65:40922.service - OpenSSH per-connection server daemon (139.178.89.65:40922).
Dec 13 13:16:41.221841 sshd[5019]: Accepted publickey for core from 139.178.89.65 port 40922 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:41.224482 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:41.232519 systemd-logind[1926]: New session 23 of user core.
Dec 13 13:16:41.239486 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:16:41.482463 sshd[5021]: Connection closed by 139.178.89.65 port 40922
Dec 13 13:16:41.482329 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:41.489765 systemd[1]: sshd@22-172.31.29.1:22-139.178.89.65:40922.service: Deactivated successfully.
Dec 13 13:16:41.498415 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:16:41.502664 systemd-logind[1926]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:16:41.506744 systemd-logind[1926]: Removed session 23.
Dec 13 13:16:46.523756 systemd[1]: Started sshd@23-172.31.29.1:22-139.178.89.65:40938.service - OpenSSH per-connection server daemon (139.178.89.65:40938).
Dec 13 13:16:46.716614 sshd[5032]: Accepted publickey for core from 139.178.89.65 port 40938 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:46.719130 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:46.726315 systemd-logind[1926]: New session 24 of user core.
Dec 13 13:16:46.736500 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:16:46.974950 sshd[5034]: Connection closed by 139.178.89.65 port 40938
Dec 13 13:16:46.975842 sshd-session[5032]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:46.981594 systemd[1]: sshd@23-172.31.29.1:22-139.178.89.65:40938.service: Deactivated successfully.
Dec 13 13:16:46.985568 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:16:46.988800 systemd-logind[1926]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:16:46.991487 systemd-logind[1926]: Removed session 24.
Dec 13 13:16:52.022771 systemd[1]: Started sshd@24-172.31.29.1:22-139.178.89.65:35324.service - OpenSSH per-connection server daemon (139.178.89.65:35324).
Dec 13 13:16:52.211774 sshd[5048]: Accepted publickey for core from 139.178.89.65 port 35324 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:52.214482 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:52.227020 systemd-logind[1926]: New session 25 of user core.
Dec 13 13:16:52.233971 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 13:16:52.475495 sshd[5050]: Connection closed by 139.178.89.65 port 35324
Dec 13 13:16:52.476547 sshd-session[5048]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:52.482497 systemd-logind[1926]: Session 25 logged out. Waiting for processes to exit.
Dec 13 13:16:52.482910 systemd[1]: sshd@24-172.31.29.1:22-139.178.89.65:35324.service: Deactivated successfully.
Dec 13 13:16:52.489556 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 13:16:52.498009 systemd-logind[1926]: Removed session 25.
Dec 13 13:16:57.521377 systemd[1]: Started sshd@25-172.31.29.1:22-139.178.89.65:35328.service - OpenSSH per-connection server daemon (139.178.89.65:35328).
Dec 13 13:16:57.705398 sshd[5060]: Accepted publickey for core from 139.178.89.65 port 35328 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:16:57.707937 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:16:57.716717 systemd-logind[1926]: New session 26 of user core.
Dec 13 13:16:57.721483 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 13:16:57.963326 sshd[5062]: Connection closed by 139.178.89.65 port 35328
Dec 13 13:16:57.964139 sshd-session[5060]: pam_unix(sshd:session): session closed for user core
Dec 13 13:16:57.971125 systemd-logind[1926]: Session 26 logged out. Waiting for processes to exit.
Dec 13 13:16:57.971587 systemd[1]: sshd@25-172.31.29.1:22-139.178.89.65:35328.service: Deactivated successfully.
Dec 13 13:16:57.976949 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 13:16:57.979897 systemd-logind[1926]: Removed session 26.
Dec 13 13:17:03.003767 systemd[1]: Started sshd@26-172.31.29.1:22-139.178.89.65:43188.service - OpenSSH per-connection server daemon (139.178.89.65:43188).
Dec 13 13:17:03.191171 sshd[5073]: Accepted publickey for core from 139.178.89.65 port 43188 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:17:03.194416 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:17:03.201738 systemd-logind[1926]: New session 27 of user core.
Dec 13 13:17:03.208472 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 13:17:03.454163 sshd[5075]: Connection closed by 139.178.89.65 port 43188
Dec 13 13:17:03.455142 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
Dec 13 13:17:03.462197 systemd[1]: sshd@26-172.31.29.1:22-139.178.89.65:43188.service: Deactivated successfully.
Dec 13 13:17:03.466771 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 13:17:03.468743 systemd-logind[1926]: Session 27 logged out. Waiting for processes to exit.
Dec 13 13:17:03.470713 systemd-logind[1926]: Removed session 27.
Dec 13 13:17:03.493413 systemd[1]: Started sshd@27-172.31.29.1:22-139.178.89.65:43196.service - OpenSSH per-connection server daemon (139.178.89.65:43196).
Dec 13 13:17:03.680133 sshd[5086]: Accepted publickey for core from 139.178.89.65 port 43196 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:17:03.682982 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:17:03.691202 systemd-logind[1926]: New session 28 of user core.
Dec 13 13:17:03.697454 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 13:17:05.892993 containerd[1955]: time="2024-12-13T13:17:05.891900586Z" level=info msg="StopContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" with timeout 30 (s)"
Dec 13 13:17:05.895249 containerd[1955]: time="2024-12-13T13:17:05.894884770Z" level=info msg="Stop container \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" with signal terminated"
Dec 13 13:17:05.925481 systemd[1]: cri-containerd-188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f.scope: Deactivated successfully.
Dec 13 13:17:05.937808 containerd[1955]: time="2024-12-13T13:17:05.937694950Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:17:05.954811 containerd[1955]: time="2024-12-13T13:17:05.954634954Z" level=info msg="StopContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" with timeout 2 (s)"
Dec 13 13:17:05.955626 containerd[1955]: time="2024-12-13T13:17:05.955389502Z" level=info msg="Stop container \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" with signal terminated"
Dec 13 13:17:05.975446 systemd-networkd[1848]: lxc_health: Link DOWN
Dec 13 13:17:05.975462 systemd-networkd[1848]: lxc_health: Lost carrier
Dec 13 13:17:05.985552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f-rootfs.mount: Deactivated successfully.
Dec 13 13:17:06.002785 containerd[1955]: time="2024-12-13T13:17:06.002056302Z" level=info msg="shim disconnected" id=188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f namespace=k8s.io
Dec 13 13:17:06.002785 containerd[1955]: time="2024-12-13T13:17:06.002139378Z" level=warning msg="cleaning up after shim disconnected" id=188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f namespace=k8s.io
Dec 13 13:17:06.002785 containerd[1955]: time="2024-12-13T13:17:06.002161710Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:06.014511 systemd[1]: cri-containerd-412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91.scope: Deactivated successfully.
Dec 13 13:17:06.014973 systemd[1]: cri-containerd-412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91.scope: Consumed 14.607s CPU time.
Dec 13 13:17:06.043949 containerd[1955]: time="2024-12-13T13:17:06.043867482Z" level=info msg="StopContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" returns successfully"
Dec 13 13:17:06.045316 containerd[1955]: time="2024-12-13T13:17:06.045024066Z" level=info msg="StopPodSandbox for \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\""
Dec 13 13:17:06.045316 containerd[1955]: time="2024-12-13T13:17:06.045106650Z" level=info msg="Container to stop \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.049522 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d-shm.mount: Deactivated successfully.
Dec 13 13:17:06.070001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91-rootfs.mount: Deactivated successfully.
Dec 13 13:17:06.072281 systemd[1]: cri-containerd-6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d.scope: Deactivated successfully.
Dec 13 13:17:06.086778 containerd[1955]: time="2024-12-13T13:17:06.086407255Z" level=info msg="shim disconnected" id=412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91 namespace=k8s.io
Dec 13 13:17:06.087140 containerd[1955]: time="2024-12-13T13:17:06.086860603Z" level=warning msg="cleaning up after shim disconnected" id=412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91 namespace=k8s.io
Dec 13 13:17:06.087140 containerd[1955]: time="2024-12-13T13:17:06.086886103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:06.111272 containerd[1955]: time="2024-12-13T13:17:06.111150775Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:17:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 13:17:06.117283 containerd[1955]: time="2024-12-13T13:17:06.117154963Z" level=info msg="StopContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" returns successfully"
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118024303Z" level=info msg="StopPodSandbox for \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\""
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118097719Z" level=info msg="Container to stop \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118122127Z" level=info msg="Container to stop \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118142539Z" level=info msg="Container to stop \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118164031Z" level=info msg="Container to stop \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.118514 containerd[1955]: time="2024-12-13T13:17:06.118184263Z" level=info msg="Container to stop \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:17:06.123425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a-shm.mount: Deactivated successfully.
Dec 13 13:17:06.136426 systemd[1]: cri-containerd-b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a.scope: Deactivated successfully.
Dec 13 13:17:06.147385 containerd[1955]: time="2024-12-13T13:17:06.147142099Z" level=info msg="shim disconnected" id=6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d namespace=k8s.io
Dec 13 13:17:06.147385 containerd[1955]: time="2024-12-13T13:17:06.147272203Z" level=warning msg="cleaning up after shim disconnected" id=6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d namespace=k8s.io
Dec 13 13:17:06.147385 containerd[1955]: time="2024-12-13T13:17:06.147294247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:06.185354 containerd[1955]: time="2024-12-13T13:17:06.185300683Z" level=info msg="TearDown network for sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" successfully"
Dec 13 13:17:06.185630 containerd[1955]: time="2024-12-13T13:17:06.185601907Z" level=info msg="StopPodSandbox for \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" returns successfully"
Dec 13 13:17:06.193832 containerd[1955]: time="2024-12-13T13:17:06.193734607Z" level=info msg="shim disconnected" id=b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a namespace=k8s.io
Dec 13 13:17:06.194467 containerd[1955]: time="2024-12-13T13:17:06.194406391Z" level=warning msg="cleaning up after shim disconnected" id=b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a namespace=k8s.io
Dec 13 13:17:06.194834 containerd[1955]: time="2024-12-13T13:17:06.194644447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:06.226833 containerd[1955]: time="2024-12-13T13:17:06.226760143Z" level=info msg="TearDown network for sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" successfully"
Dec 13 13:17:06.226833 containerd[1955]: time="2024-12-13T13:17:06.226815463Z" level=info msg="StopPodSandbox for \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" returns successfully"
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.301669 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kfpb\" (UniqueName: \"kubernetes.io/projected/522cc9b2-fe6e-4e03-8233-adbdcb02a303-kube-api-access-9kfpb\") pod \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\" (UID: \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\") "
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.301813 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.301874 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-kernel\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.301914 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-cgroup\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.301954 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-config-path\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.302283 kubelet[3468]: I1213 13:17:06.302000 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-hubble-tls\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302039 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-hostproc\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302070 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-xtables-lock\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302128 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx64w\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-kube-api-access-lx64w\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302167 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cni-path\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302199 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-lib-modules\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303118 kubelet[3468]: I1213 13:17:06.302319 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87a88278-f98c-4be1-a66d-0af03149fc84-clustermesh-secrets\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302359 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-bpf-maps\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302400 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/522cc9b2-fe6e-4e03-8233-adbdcb02a303-cilium-config-path\") pod \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\" (UID: \"522cc9b2-fe6e-4e03-8233-adbdcb02a303\") "
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302436 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-run\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302501 3468 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-kernel\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302541 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.303474 kubelet[3468]: I1213 13:17:06.302582 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.307522 kubelet[3468]: I1213 13:17:06.306542 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cni-path" (OuterVolumeSpecName: "cni-path") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.307522 kubelet[3468]: I1213 13:17:06.306606 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.307522 kubelet[3468]: I1213 13:17:06.307302 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-hostproc" (OuterVolumeSpecName: "hostproc") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.307522 kubelet[3468]: I1213 13:17:06.307363 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.310292 kubelet[3468]: I1213 13:17:06.309947 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.315571 kubelet[3468]: I1213 13:17:06.315491 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/522cc9b2-fe6e-4e03-8233-adbdcb02a303-kube-api-access-9kfpb" (OuterVolumeSpecName: "kube-api-access-9kfpb") pod "522cc9b2-fe6e-4e03-8233-adbdcb02a303" (UID: "522cc9b2-fe6e-4e03-8233-adbdcb02a303"). InnerVolumeSpecName "kube-api-access-9kfpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:17:06.318672 kubelet[3468]: I1213 13:17:06.318450 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a88278-f98c-4be1-a66d-0af03149fc84-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 13:17:06.318672 kubelet[3468]: I1213 13:17:06.318461 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:17:06.319062 kubelet[3468]: I1213 13:17:06.318922 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 13:17:06.320818 kubelet[3468]: I1213 13:17:06.320744 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-kube-api-access-lx64w" (OuterVolumeSpecName: "kube-api-access-lx64w") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "kube-api-access-lx64w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:17:06.323432 kubelet[3468]: I1213 13:17:06.323345 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/522cc9b2-fe6e-4e03-8233-adbdcb02a303-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "522cc9b2-fe6e-4e03-8233-adbdcb02a303" (UID: "522cc9b2-fe6e-4e03-8233-adbdcb02a303"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 13:17:06.356231 kubelet[3468]: I1213 13:17:06.356173 3468 scope.go:117] "RemoveContainer" containerID="188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f"
Dec 13 13:17:06.363245 containerd[1955]: time="2024-12-13T13:17:06.361914476Z" level=info msg="RemoveContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\""
Dec 13 13:17:06.374181 systemd[1]: Removed slice kubepods-besteffort-pod522cc9b2_fe6e_4e03_8233_adbdcb02a303.slice - libcontainer container kubepods-besteffort-pod522cc9b2_fe6e_4e03_8233_adbdcb02a303.slice.
Dec 13 13:17:06.378786 containerd[1955]: time="2024-12-13T13:17:06.377847632Z" level=info msg="RemoveContainer for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" returns successfully"
Dec 13 13:17:06.379837 kubelet[3468]: I1213 13:17:06.379799 3468 scope.go:117] "RemoveContainer" containerID="188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f"
Dec 13 13:17:06.380902 containerd[1955]: time="2024-12-13T13:17:06.380834552Z" level=error msg="ContainerStatus for \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\": not found"
Dec 13 13:17:06.381114 kubelet[3468]: E1213 13:17:06.381072 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\": not found" containerID="188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f"
Dec 13 13:17:06.382540 kubelet[3468]: I1213 13:17:06.381127 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f"} err="failed to get container status \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\": rpc error: code = NotFound desc = an error occurred when try to find container \"188e63a5c0ab34aa3be991dfbe383317229f9494f9b85d1260f4a5f92b0c229f\": not found"
Dec 13 13:17:06.382540 kubelet[3468]: I1213 13:17:06.381395 3468 scope.go:117] "RemoveContainer" containerID="412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91"
Dec 13 13:17:06.385820 containerd[1955]: time="2024-12-13T13:17:06.385707920Z" level=info msg="RemoveContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\""
Dec 13 13:17:06.394143 containerd[1955]: time="2024-12-13T13:17:06.393997316Z" level=info msg="RemoveContainer for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" returns successfully"
Dec 13 13:17:06.395495 kubelet[3468]: I1213 13:17:06.395327 3468 scope.go:117] "RemoveContainer" containerID="36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f"
Dec 13 13:17:06.402373 containerd[1955]: time="2024-12-13T13:17:06.400103684Z" level=info msg="RemoveContainer for \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\""
Dec 13 13:17:06.402816 kubelet[3468]: I1213 13:17:06.402681 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-etc-cni-netd\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.402816 kubelet[3468]: I1213 13:17:06.402754 3468 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-net\") pod \"87a88278-f98c-4be1-a66d-0af03149fc84\" (UID: \"87a88278-f98c-4be1-a66d-0af03149fc84\") "
Dec 13 13:17:06.402816 kubelet[3468]: I1213 13:17:06.402763 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.302834 3468 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cni-path\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402858 3468 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-lib-modules\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402879 3468 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87a88278-f98c-4be1-a66d-0af03149fc84-clustermesh-secrets\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402905 3468 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-bpf-maps\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402903 3468 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "87a88278-f98c-4be1-a66d-0af03149fc84" (UID: "87a88278-f98c-4be1-a66d-0af03149fc84"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402924 3468 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/522cc9b2-fe6e-4e03-8233-adbdcb02a303-cilium-config-path\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404017 kubelet[3468]: I1213 13:17:06.402945 3468 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-run\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.402967 3468 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9kfpb\" (UniqueName: \"kubernetes.io/projected/522cc9b2-fe6e-4e03-8233-adbdcb02a303-kube-api-access-9kfpb\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.402988 3468 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-config-path\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.403008 3468 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-cilium-cgroup\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.403027 3468 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-hostproc\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.403045 3468 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-hubble-tls\") on node \"ip-172-31-29-1\" DevicePath \"\""
Dec 13
13:17:06.404408 kubelet[3468]: I1213 13:17:06.403064 3468 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lx64w\" (UniqueName: \"kubernetes.io/projected/87a88278-f98c-4be1-a66d-0af03149fc84-kube-api-access-lx64w\") on node \"ip-172-31-29-1\" DevicePath \"\"" Dec 13 13:17:06.404408 kubelet[3468]: I1213 13:17:06.403083 3468 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-xtables-lock\") on node \"ip-172-31-29-1\" DevicePath \"\"" Dec 13 13:17:06.408222 containerd[1955]: time="2024-12-13T13:17:06.408150044Z" level=info msg="RemoveContainer for \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\" returns successfully" Dec 13 13:17:06.408883 kubelet[3468]: I1213 13:17:06.408829 3468 scope.go:117] "RemoveContainer" containerID="3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4" Dec 13 13:17:06.410997 containerd[1955]: time="2024-12-13T13:17:06.410805884Z" level=info msg="RemoveContainer for \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\"" Dec 13 13:17:06.416680 containerd[1955]: time="2024-12-13T13:17:06.416624336Z" level=info msg="RemoveContainer for \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\" returns successfully" Dec 13 13:17:06.417247 kubelet[3468]: I1213 13:17:06.417076 3468 scope.go:117] "RemoveContainer" containerID="52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50" Dec 13 13:17:06.418984 containerd[1955]: time="2024-12-13T13:17:06.418924568Z" level=info msg="RemoveContainer for \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\"" Dec 13 13:17:06.425087 containerd[1955]: time="2024-12-13T13:17:06.425029556Z" level=info msg="RemoveContainer for \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\" returns successfully" Dec 13 13:17:06.425603 kubelet[3468]: I1213 13:17:06.425433 3468 scope.go:117] "RemoveContainer" 
containerID="358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d" Dec 13 13:17:06.427918 containerd[1955]: time="2024-12-13T13:17:06.427808984Z" level=info msg="RemoveContainer for \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\"" Dec 13 13:17:06.434302 containerd[1955]: time="2024-12-13T13:17:06.434131580Z" level=info msg="RemoveContainer for \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\" returns successfully" Dec 13 13:17:06.435307 kubelet[3468]: I1213 13:17:06.434688 3468 scope.go:117] "RemoveContainer" containerID="412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91" Dec 13 13:17:06.435307 kubelet[3468]: E1213 13:17:06.435196 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\": not found" containerID="412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91" Dec 13 13:17:06.435307 kubelet[3468]: I1213 13:17:06.435277 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91"} err="failed to get container status \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\": rpc error: code = NotFound desc = an error occurred when try to find container \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\": not found" Dec 13 13:17:06.435307 kubelet[3468]: I1213 13:17:06.435313 3468 scope.go:117] "RemoveContainer" containerID="36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f" Dec 13 13:17:06.436477 containerd[1955]: time="2024-12-13T13:17:06.435008600Z" level=error msg="ContainerStatus for \"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"412ec8dcba9ab94572b4b2f32b5d08c00fb6aa8c08c48d83751244abc026fa91\": not found" Dec 13 13:17:06.436477 containerd[1955]: time="2024-12-13T13:17:06.435669464Z" level=error msg="ContainerStatus for \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\": not found" Dec 13 13:17:06.436606 kubelet[3468]: E1213 13:17:06.435893 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\": not found" containerID="36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f" Dec 13 13:17:06.436606 kubelet[3468]: I1213 13:17:06.435932 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f"} err="failed to get container status \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"36489ecc293f01cfbf16518abc0bbf9cb20b40dac2f04d6a9d79e16bdf332e6f\": not found" Dec 13 13:17:06.436606 kubelet[3468]: I1213 13:17:06.435961 3468 scope.go:117] "RemoveContainer" containerID="3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4" Dec 13 13:17:06.436944 containerd[1955]: time="2024-12-13T13:17:06.436276424Z" level=error msg="ContainerStatus for \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\": not found" Dec 13 13:17:06.437161 kubelet[3468]: E1213 13:17:06.437111 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\": not found" containerID="3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4" Dec 13 13:17:06.437308 kubelet[3468]: I1213 13:17:06.437157 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4"} err="failed to get container status \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e9bcf8985b820e15171d94ca8b514be81f37d15b44193a5180611a0a2c6f6c4\": not found" Dec 13 13:17:06.437308 kubelet[3468]: I1213 13:17:06.437191 3468 scope.go:117] "RemoveContainer" containerID="52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50" Dec 13 13:17:06.438040 containerd[1955]: time="2024-12-13T13:17:06.437856548Z" level=error msg="ContainerStatus for \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\": not found" Dec 13 13:17:06.438333 kubelet[3468]: E1213 13:17:06.438277 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\": not found" containerID="52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50" Dec 13 13:17:06.438444 kubelet[3468]: I1213 13:17:06.438329 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50"} err="failed to get container status \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"52a3c908dd17af7de06404d71e3ec0f931a856c1187a331af367f2f096d45d50\": not found" Dec 13 13:17:06.438444 kubelet[3468]: I1213 13:17:06.438364 3468 scope.go:117] "RemoveContainer" containerID="358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d" Dec 13 13:17:06.439355 containerd[1955]: time="2024-12-13T13:17:06.439138520Z" level=error msg="ContainerStatus for \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\": not found" Dec 13 13:17:06.439648 kubelet[3468]: E1213 13:17:06.439609 3468 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\": not found" containerID="358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d" Dec 13 13:17:06.439791 kubelet[3468]: I1213 13:17:06.439756 3468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d"} err="failed to get container status \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\": rpc error: code = NotFound desc = an error occurred when try to find container \"358a6a8384c08b1da143c2e1b08e94ed7f7df5dca68b4467d179be703c280b7d\": not found" Dec 13 13:17:06.503752 kubelet[3468]: I1213 13:17:06.503667 3468 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-etc-cni-netd\") on node \"ip-172-31-29-1\" DevicePath \"\"" Dec 13 13:17:06.503752 kubelet[3468]: I1213 13:17:06.503713 3468 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/87a88278-f98c-4be1-a66d-0af03149fc84-host-proc-sys-net\") on node \"ip-172-31-29-1\" DevicePath \"\"" Dec 13 13:17:06.680090 systemd[1]: Removed slice kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice - libcontainer container kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice. Dec 13 13:17:06.680336 systemd[1]: kubepods-burstable-pod87a88278_f98c_4be1_a66d_0af03149fc84.slice: Consumed 14.753s CPU time. Dec 13 13:17:06.886562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d-rootfs.mount: Deactivated successfully. Dec 13 13:17:06.886725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a-rootfs.mount: Deactivated successfully. Dec 13 13:17:06.886852 systemd[1]: var-lib-kubelet-pods-87a88278\x2df98c\x2d4be1\x2da66d\x2d0af03149fc84-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 13:17:06.887402 systemd[1]: var-lib-kubelet-pods-87a88278\x2df98c\x2d4be1\x2da66d\x2d0af03149fc84-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:17:06.887713 systemd[1]: var-lib-kubelet-pods-522cc9b2\x2dfe6e\x2d4e03\x2d8233\x2dadbdcb02a303-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9kfpb.mount: Deactivated successfully. Dec 13 13:17:06.887858 systemd[1]: var-lib-kubelet-pods-87a88278\x2df98c\x2d4be1\x2da66d\x2d0af03149fc84-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlx64w.mount: Deactivated successfully. 
Dec 13 13:17:06.938649 kubelet[3468]: I1213 13:17:06.938319 3468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="522cc9b2-fe6e-4e03-8233-adbdcb02a303" path="/var/lib/kubelet/pods/522cc9b2-fe6e-4e03-8233-adbdcb02a303/volumes"
Dec 13 13:17:06.941371 kubelet[3468]: I1213 13:17:06.940726 3468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" path="/var/lib/kubelet/pods/87a88278-f98c-4be1-a66d-0af03149fc84/volumes"
Dec 13 13:17:07.820176 sshd[5088]: Connection closed by 139.178.89.65 port 43196
Dec 13 13:17:07.821180 sshd-session[5086]: pam_unix(sshd:session): session closed for user core
Dec 13 13:17:07.828489 systemd[1]: sshd@27-172.31.29.1:22-139.178.89.65:43196.service: Deactivated successfully.
Dec 13 13:17:07.832247 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 13:17:07.833310 systemd[1]: session-28.scope: Consumed 1.434s CPU time.
Dec 13 13:17:07.835175 systemd-logind[1926]: Session 28 logged out. Waiting for processes to exit.
Dec 13 13:17:07.837657 systemd-logind[1926]: Removed session 28.
Dec 13 13:17:07.870788 systemd[1]: Started sshd@28-172.31.29.1:22-139.178.89.65:43198.service - OpenSSH per-connection server daemon (139.178.89.65:43198).
Dec 13 13:17:08.046865 sshd[5248]: Accepted publickey for core from 139.178.89.65 port 43198 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:17:08.050037 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:17:08.057988 systemd-logind[1926]: New session 29 of user core.
Dec 13 13:17:08.068483 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 13:17:08.130728 ntpd[1921]: Deleting interface #12 lxc_health, fe80::980b:b9ff:fecc:76e2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs
Dec 13 13:17:08.131371 ntpd[1921]: 13 Dec 13:17:08 ntpd[1921]: Deleting interface #12 lxc_health, fe80::980b:b9ff:fecc:76e2%8#123, interface stats: received=0, sent=0, dropped=0, active_time=68 secs
Dec 13 13:17:09.117772 kubelet[3468]: E1213 13:17:09.117703 3468 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 13:17:09.774743 sshd[5250]: Connection closed by 139.178.89.65 port 43198
Dec 13 13:17:09.775547 sshd-session[5248]: pam_unix(sshd:session): session closed for user core
Dec 13 13:17:09.783814 systemd[1]: sshd@28-172.31.29.1:22-139.178.89.65:43198.service: Deactivated successfully.
Dec 13 13:17:09.789131 kubelet[3468]: I1213 13:17:09.786507 3468 topology_manager.go:215] "Topology Admit Handler" podUID="7160a3a0-4b85-415c-9361-fc3b2342d046" podNamespace="kube-system" podName="cilium-cdxlv"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786619 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="cilium-agent"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786638 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="522cc9b2-fe6e-4e03-8233-adbdcb02a303" containerName="cilium-operator"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786656 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="mount-cgroup"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786671 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="apply-sysctl-overwrites"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786686 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="clean-cilium-state"
Dec 13 13:17:09.789131 kubelet[3468]: E1213 13:17:09.786702 3468 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="mount-bpf-fs"
Dec 13 13:17:09.789131 kubelet[3468]: I1213 13:17:09.786742 3468 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a88278-f98c-4be1-a66d-0af03149fc84" containerName="cilium-agent"
Dec 13 13:17:09.789131 kubelet[3468]: I1213 13:17:09.786759 3468 memory_manager.go:354] "RemoveStaleState removing state" podUID="522cc9b2-fe6e-4e03-8233-adbdcb02a303" containerName="cilium-operator"
Dec 13 13:17:09.789776 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 13:17:09.791375 systemd[1]: session-29.scope: Consumed 1.512s CPU time.
Dec 13 13:17:09.797327 systemd-logind[1926]: Session 29 logged out. Waiting for processes to exit.
Dec 13 13:17:09.819104 systemd[1]: Started sshd@29-172.31.29.1:22-139.178.89.65:39668.service - OpenSSH per-connection server daemon (139.178.89.65:39668).
Dec 13 13:17:09.822501 systemd-logind[1926]: Removed session 29.
Dec 13 13:17:09.849258 systemd[1]: Created slice kubepods-burstable-pod7160a3a0_4b85_415c_9361_fc3b2342d046.slice - libcontainer container kubepods-burstable-pod7160a3a0_4b85_415c_9361_fc3b2342d046.slice.
Dec 13 13:17:09.927333 kubelet[3468]: I1213 13:17:09.927164 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7160a3a0-4b85-415c-9361-fc3b2342d046-clustermesh-secrets\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928423 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-host-proc-sys-kernel\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928601 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7160a3a0-4b85-415c-9361-fc3b2342d046-cilium-config-path\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928696 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7160a3a0-4b85-415c-9361-fc3b2342d046-hubble-tls\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928812 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7160a3a0-4b85-415c-9361-fc3b2342d046-cilium-ipsec-secrets\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928955 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-cilium-run\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.929138 kubelet[3468]: I1213 13:17:09.928993 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-bpf-maps\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.930341 kubelet[3468]: I1213 13:17:09.929425 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-host-proc-sys-net\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.930341 kubelet[3468]: I1213 13:17:09.929703 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29vd\" (UniqueName: \"kubernetes.io/projected/7160a3a0-4b85-415c-9361-fc3b2342d046-kube-api-access-q29vd\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.930341 kubelet[3468]: I1213 13:17:09.930006 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-hostproc\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.931106 kubelet[3468]: I1213 13:17:09.930649 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-etc-cni-netd\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.931625 kubelet[3468]: I1213 13:17:09.931381 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-lib-modules\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.932075 kubelet[3468]: I1213 13:17:09.931741 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-cilium-cgroup\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.932397 kubelet[3468]: I1213 13:17:09.932014 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-cni-path\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:09.932397 kubelet[3468]: I1213 13:17:09.932187 3468 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7160a3a0-4b85-415c-9361-fc3b2342d046-xtables-lock\") pod \"cilium-cdxlv\" (UID: \"7160a3a0-4b85-415c-9361-fc3b2342d046\") " pod="kube-system/cilium-cdxlv"
Dec 13 13:17:10.049342 sshd[5259]: Accepted publickey for core from 139.178.89.65 port 39668 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:17:10.056491 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:17:10.128789 systemd-logind[1926]: New session 30 of user core.
Dec 13 13:17:10.131506 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 13:17:10.158307 containerd[1955]: time="2024-12-13T13:17:10.158123603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdxlv,Uid:7160a3a0-4b85-415c-9361-fc3b2342d046,Namespace:kube-system,Attempt:0,}"
Dec 13 13:17:10.200435 containerd[1955]: time="2024-12-13T13:17:10.200237939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:17:10.200603 containerd[1955]: time="2024-12-13T13:17:10.200468639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:17:10.201023 containerd[1955]: time="2024-12-13T13:17:10.200582075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:17:10.201023 containerd[1955]: time="2024-12-13T13:17:10.200892011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:17:10.243540 systemd[1]: Started cri-containerd-041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6.scope - libcontainer container 041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6.
Dec 13 13:17:10.259709 sshd[5265]: Connection closed by 139.178.89.65 port 39668
Dec 13 13:17:10.260598 sshd-session[5259]: pam_unix(sshd:session): session closed for user core
Dec 13 13:17:10.273165 systemd[1]: sshd@29-172.31.29.1:22-139.178.89.65:39668.service: Deactivated successfully.
Dec 13 13:17:10.283242 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 13:17:10.288558 systemd-logind[1926]: Session 30 logged out. Waiting for processes to exit.
Dec 13 13:17:10.310676 systemd[1]: Started sshd@30-172.31.29.1:22-139.178.89.65:39684.service - OpenSSH per-connection server daemon (139.178.89.65:39684).
Dec 13 13:17:10.311925 systemd-logind[1926]: Removed session 30.
Dec 13 13:17:10.316606 containerd[1955]: time="2024-12-13T13:17:10.316516728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cdxlv,Uid:7160a3a0-4b85-415c-9361-fc3b2342d046,Namespace:kube-system,Attempt:0,} returns sandbox id \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\""
Dec 13 13:17:10.325915 containerd[1955]: time="2024-12-13T13:17:10.325646040Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 13:17:10.349689 containerd[1955]: time="2024-12-13T13:17:10.349626276Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d\""
Dec 13 13:17:10.352039 containerd[1955]: time="2024-12-13T13:17:10.351807708Z" level=info msg="StartContainer for \"b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d\""
Dec 13 13:17:10.420446 systemd[1]: Started cri-containerd-b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d.scope - libcontainer container b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d.
Dec 13 13:17:10.478382 containerd[1955]: time="2024-12-13T13:17:10.478301412Z" level=info msg="StartContainer for \"b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d\" returns successfully"
Dec 13 13:17:10.494255 systemd[1]: cri-containerd-b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d.scope: Deactivated successfully.
Dec 13 13:17:10.516259 sshd[5312]: Accepted publickey for core from 139.178.89.65 port 39684 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw
Dec 13 13:17:10.517954 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:17:10.528109 systemd-logind[1926]: New session 31 of user core.
Dec 13 13:17:10.534491 systemd[1]: Started session-31.scope - Session 31 of User core.
Dec 13 13:17:10.563890 containerd[1955]: time="2024-12-13T13:17:10.563307685Z" level=info msg="shim disconnected" id=b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d namespace=k8s.io
Dec 13 13:17:10.564404 containerd[1955]: time="2024-12-13T13:17:10.563734465Z" level=warning msg="cleaning up after shim disconnected" id=b630e7afd240923b68dfc63e921aa73c90fe5cb75cb99c552ae4123dbb9ad79d namespace=k8s.io
Dec 13 13:17:10.564404 containerd[1955]: time="2024-12-13T13:17:10.564147421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:11.404693 containerd[1955]: time="2024-12-13T13:17:11.404620129Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 13:17:11.435766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1259691231.mount: Deactivated successfully.
Dec 13 13:17:11.439913 containerd[1955]: time="2024-12-13T13:17:11.439858633Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c\""
Dec 13 13:17:11.440954 containerd[1955]: time="2024-12-13T13:17:11.440837029Z" level=info msg="StartContainer for \"d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c\""
Dec 13 13:17:11.508515 systemd[1]: Started cri-containerd-d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c.scope - libcontainer container d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c.
Dec 13 13:17:11.558517 containerd[1955]: time="2024-12-13T13:17:11.558406010Z" level=info msg="StartContainer for \"d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c\" returns successfully"
Dec 13 13:17:11.570967 systemd[1]: cri-containerd-d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c.scope: Deactivated successfully.
Dec 13 13:17:11.615252 containerd[1955]: time="2024-12-13T13:17:11.615076430Z" level=info msg="shim disconnected" id=d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c namespace=k8s.io
Dec 13 13:17:11.615252 containerd[1955]: time="2024-12-13T13:17:11.615152606Z" level=warning msg="cleaning up after shim disconnected" id=d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c namespace=k8s.io
Dec 13 13:17:11.615252 containerd[1955]: time="2024-12-13T13:17:11.615172238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:17:11.826398 kubelet[3468]: I1213 13:17:11.826281 3468 setters.go:580] "Node became not ready" node="ip-172-31-29-1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:17:11Z","lastTransitionTime":"2024-12-13T13:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 13:17:12.061393 systemd[1]: run-containerd-runc-k8s.io-d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c-runc.fbT9lM.mount: Deactivated successfully.
Dec 13 13:17:12.061577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6b1bd81d626e3bff38aae25381e981123bc7639eb3e0892f9b29938d84d6e6c-rootfs.mount: Deactivated successfully.
Dec 13 13:17:12.416794 containerd[1955]: time="2024-12-13T13:17:12.416685518Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 13:17:12.454712 containerd[1955]: time="2024-12-13T13:17:12.454462694Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e\""
Dec 13 13:17:12.460685 containerd[1955]: time="2024-12-13T13:17:12.458662802Z" level=info msg="StartContainer for \"38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e\""
Dec 13 13:17:12.521592 systemd[1]: Started cri-containerd-38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e.scope - libcontainer container 38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e.
Dec 13 13:17:12.577555 containerd[1955]: time="2024-12-13T13:17:12.577376859Z" level=info msg="StartContainer for \"38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e\" returns successfully"
Dec 13 13:17:12.587935 systemd[1]: cri-containerd-38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e.scope: Deactivated successfully.
Dec 13 13:17:12.641961 containerd[1955]: time="2024-12-13T13:17:12.641643015Z" level=info msg="shim disconnected" id=38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e namespace=k8s.io Dec 13 13:17:12.641961 containerd[1955]: time="2024-12-13T13:17:12.641719539Z" level=warning msg="cleaning up after shim disconnected" id=38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e namespace=k8s.io Dec 13 13:17:12.641961 containerd[1955]: time="2024-12-13T13:17:12.641758455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:13.061465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ee2cb8639b3d67e1dcb5fd2c538addc9983cce09a46b73beb0460892e86a3e-rootfs.mount: Deactivated successfully. Dec 13 13:17:13.423837 containerd[1955]: time="2024-12-13T13:17:13.423464895Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:17:13.456069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779681692.mount: Deactivated successfully. Dec 13 13:17:13.458003 containerd[1955]: time="2024-12-13T13:17:13.457624647Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a\"" Dec 13 13:17:13.461344 containerd[1955]: time="2024-12-13T13:17:13.459282699Z" level=info msg="StartContainer for \"7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a\"" Dec 13 13:17:13.523535 systemd[1]: Started cri-containerd-7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a.scope - libcontainer container 7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a. 
Dec 13 13:17:13.570895 systemd[1]: cri-containerd-7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a.scope: Deactivated successfully. Dec 13 13:17:13.576728 containerd[1955]: time="2024-12-13T13:17:13.576666628Z" level=info msg="StartContainer for \"7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a\" returns successfully" Dec 13 13:17:13.638303 containerd[1955]: time="2024-12-13T13:17:13.638177368Z" level=info msg="shim disconnected" id=7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a namespace=k8s.io Dec 13 13:17:13.638579 containerd[1955]: time="2024-12-13T13:17:13.638302888Z" level=warning msg="cleaning up after shim disconnected" id=7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a namespace=k8s.io Dec 13 13:17:13.638579 containerd[1955]: time="2024-12-13T13:17:13.638347408Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:14.061528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c3a019e137c50639e968965e8cc02741e821fc1e845cbdb746c9370b4808c5a-rootfs.mount: Deactivated successfully. 
Dec 13 13:17:14.119823 kubelet[3468]: E1213 13:17:14.119756 3468 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:17:14.429512 containerd[1955]: time="2024-12-13T13:17:14.429282796Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:17:14.462287 containerd[1955]: time="2024-12-13T13:17:14.462195160Z" level=info msg="CreateContainer within sandbox \"041110ce52364724fdeaf84dbd6edba248aba8d10c29e4a86b6b22d7a767d7e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac\"" Dec 13 13:17:14.466184 containerd[1955]: time="2024-12-13T13:17:14.465397240Z" level=info msg="StartContainer for \"46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac\"" Dec 13 13:17:14.528519 systemd[1]: Started cri-containerd-46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac.scope - libcontainer container 46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac. 
Dec 13 13:17:14.588500 containerd[1955]: time="2024-12-13T13:17:14.588424385Z" level=info msg="StartContainer for \"46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac\" returns successfully" Dec 13 13:17:15.411256 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 13:17:15.469419 kubelet[3468]: I1213 13:17:15.469304 3468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cdxlv" podStartSLOduration=6.469280477 podStartE2EDuration="6.469280477s" podCreationTimestamp="2024-12-13 13:17:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:17:15.468646433 +0000 UTC m=+116.791488889" watchObservedRunningTime="2024-12-13 13:17:15.469280477 +0000 UTC m=+116.792122873" Dec 13 13:17:17.019452 systemd[1]: run-containerd-runc-k8s.io-46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac-runc.utr1RL.mount: Deactivated successfully. 
Dec 13 13:17:18.891816 containerd[1955]: time="2024-12-13T13:17:18.891740206Z" level=info msg="StopPodSandbox for \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\"" Dec 13 13:17:18.892405 containerd[1955]: time="2024-12-13T13:17:18.891892090Z" level=info msg="TearDown network for sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" successfully" Dec 13 13:17:18.892405 containerd[1955]: time="2024-12-13T13:17:18.891916654Z" level=info msg="StopPodSandbox for \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" returns successfully" Dec 13 13:17:18.895427 containerd[1955]: time="2024-12-13T13:17:18.893036218Z" level=info msg="RemovePodSandbox for \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\"" Dec 13 13:17:18.895427 containerd[1955]: time="2024-12-13T13:17:18.893092930Z" level=info msg="Forcibly stopping sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\"" Dec 13 13:17:18.895427 containerd[1955]: time="2024-12-13T13:17:18.893196226Z" level=info msg="TearDown network for sandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" successfully" Dec 13 13:17:18.899730 containerd[1955]: time="2024-12-13T13:17:18.899517154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:17:18.899730 containerd[1955]: time="2024-12-13T13:17:18.899610490Z" level=info msg="RemovePodSandbox \"b162718f6f8b6dc759713774e1d159ad0f98b39331ad0b13e709f3e2d830ac8a\" returns successfully" Dec 13 13:17:18.900973 containerd[1955]: time="2024-12-13T13:17:18.900625234Z" level=info msg="StopPodSandbox for \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\"" Dec 13 13:17:18.900973 containerd[1955]: time="2024-12-13T13:17:18.900763438Z" level=info msg="TearDown network for sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" successfully" Dec 13 13:17:18.900973 containerd[1955]: time="2024-12-13T13:17:18.900787174Z" level=info msg="StopPodSandbox for \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" returns successfully" Dec 13 13:17:18.903173 containerd[1955]: time="2024-12-13T13:17:18.902154802Z" level=info msg="RemovePodSandbox for \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\"" Dec 13 13:17:18.903173 containerd[1955]: time="2024-12-13T13:17:18.902245714Z" level=info msg="Forcibly stopping sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\"" Dec 13 13:17:18.903173 containerd[1955]: time="2024-12-13T13:17:18.902351578Z" level=info msg="TearDown network for sandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" successfully" Dec 13 13:17:18.910387 containerd[1955]: time="2024-12-13T13:17:18.910327150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:17:18.910630 containerd[1955]: time="2024-12-13T13:17:18.910599010Z" level=info msg="RemovePodSandbox \"6bac3af5df4b882ca8f4bd660cd7cb4ce3e2d551aad62a126b891bd05c9c250d\" returns successfully" Dec 13 13:17:19.766320 (udev-worker)[6121]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:17:19.769859 systemd-networkd[1848]: lxc_health: Link UP Dec 13 13:17:19.780508 (udev-worker)[6122]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:17:19.795427 systemd-networkd[1848]: lxc_health: Gained carrier Dec 13 13:17:20.955459 systemd-networkd[1848]: lxc_health: Gained IPv6LL Dec 13 13:17:21.645856 systemd[1]: run-containerd-runc-k8s.io-46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac-runc.0BI99H.mount: Deactivated successfully. Dec 13 13:17:23.130928 ntpd[1921]: Listen normally on 15 lxc_health [fe80::30f6:28ff:fe7c:168c%14]:123 Dec 13 13:17:23.131514 ntpd[1921]: 13 Dec 13:17:23 ntpd[1921]: Listen normally on 15 lxc_health [fe80::30f6:28ff:fe7c:168c%14]:123 Dec 13 13:17:24.004171 systemd[1]: run-containerd-runc-k8s.io-46c0d3527e1b11c734e051ae5b2100bd160f0fee82a22620814b5bc285d33cac-runc.aMONGP.mount: Deactivated successfully. Dec 13 13:17:26.389447 sshd[5367]: Connection closed by 139.178.89.65 port 39684 Dec 13 13:17:26.390295 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Dec 13 13:17:26.397838 systemd-logind[1926]: Session 31 logged out. Waiting for processes to exit. Dec 13 13:17:26.399896 systemd[1]: sshd@30-172.31.29.1:22-139.178.89.65:39684.service: Deactivated successfully. Dec 13 13:17:26.408193 systemd[1]: session-31.scope: Deactivated successfully. Dec 13 13:17:26.415890 systemd-logind[1926]: Removed session 31. Dec 13 13:17:40.192037 systemd[1]: cri-containerd-7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8.scope: Deactivated successfully. 
Dec 13 13:17:40.192582 systemd[1]: cri-containerd-7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8.scope: Consumed 5.354s CPU time, 22.3M memory peak, 0B memory swap peak. Dec 13 13:17:40.231760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8-rootfs.mount: Deactivated successfully. Dec 13 13:17:40.249477 containerd[1955]: time="2024-12-13T13:17:40.249179932Z" level=info msg="shim disconnected" id=7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8 namespace=k8s.io Dec 13 13:17:40.249477 containerd[1955]: time="2024-12-13T13:17:40.249538360Z" level=warning msg="cleaning up after shim disconnected" id=7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8 namespace=k8s.io Dec 13 13:17:40.250849 containerd[1955]: time="2024-12-13T13:17:40.249561076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:40.503313 kubelet[3468]: I1213 13:17:40.502606 3468 scope.go:117] "RemoveContainer" containerID="7190c598e4740c76c8cd41610c5e2549629cfeddff591518b03e7c0bfb78f1b8" Dec 13 13:17:40.508851 containerd[1955]: time="2024-12-13T13:17:40.508783350Z" level=info msg="CreateContainer within sandbox \"d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 13:17:40.534747 containerd[1955]: time="2024-12-13T13:17:40.534571302Z" level=info msg="CreateContainer within sandbox \"d2ce003e2bf93b4fb0352b72192771fd79e19ad7474b5fdd36d70aa750b0c5c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f7549cdd4236b4c26c02e53f1c9ce6da6d383854a99e60744cf062313eb87740\"" Dec 13 13:17:40.535328 containerd[1955]: time="2024-12-13T13:17:40.535269726Z" level=info msg="StartContainer for \"f7549cdd4236b4c26c02e53f1c9ce6da6d383854a99e60744cf062313eb87740\"" Dec 13 13:17:40.589562 systemd[1]: Started 
cri-containerd-f7549cdd4236b4c26c02e53f1c9ce6da6d383854a99e60744cf062313eb87740.scope - libcontainer container f7549cdd4236b4c26c02e53f1c9ce6da6d383854a99e60744cf062313eb87740. Dec 13 13:17:40.658773 containerd[1955]: time="2024-12-13T13:17:40.658686822Z" level=info msg="StartContainer for \"f7549cdd4236b4c26c02e53f1c9ce6da6d383854a99e60744cf062313eb87740\" returns successfully" Dec 13 13:17:41.181240 kubelet[3468]: E1213 13:17:41.179809 3468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": context deadline exceeded" Dec 13 13:17:46.276914 systemd[1]: cri-containerd-47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d.scope: Deactivated successfully. Dec 13 13:17:46.278850 systemd[1]: cri-containerd-47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d.scope: Consumed 3.115s CPU time, 18.1M memory peak, 0B memory swap peak. Dec 13 13:17:46.318066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d-rootfs.mount: Deactivated successfully. 
Dec 13 13:17:46.331887 containerd[1955]: time="2024-12-13T13:17:46.331812562Z" level=info msg="shim disconnected" id=47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d namespace=k8s.io Dec 13 13:17:46.332549 containerd[1955]: time="2024-12-13T13:17:46.332441158Z" level=warning msg="cleaning up after shim disconnected" id=47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d namespace=k8s.io Dec 13 13:17:46.332549 containerd[1955]: time="2024-12-13T13:17:46.332472994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:46.527853 kubelet[3468]: I1213 13:17:46.526762 3468 scope.go:117] "RemoveContainer" containerID="47994ff390da6fde9569c7630d1a3543a82121b657f8d7b615efac30f6d8ef8d" Dec 13 13:17:46.530747 containerd[1955]: time="2024-12-13T13:17:46.530604683Z" level=info msg="CreateContainer within sandbox \"04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 13:17:46.566542 containerd[1955]: time="2024-12-13T13:17:46.566408784Z" level=info msg="CreateContainer within sandbox \"04c174a6d359aa8bf82d68af15f76fcb7f10de30bc5b9d49781e1947b99a8b0e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"48c472ffd5bc8a81993ede363b300298f0d928c543d6fd6843f802c7d96b4f6f\"" Dec 13 13:17:46.568228 containerd[1955]: time="2024-12-13T13:17:46.567037776Z" level=info msg="StartContainer for \"48c472ffd5bc8a81993ede363b300298f0d928c543d6fd6843f802c7d96b4f6f\"" Dec 13 13:17:46.619516 systemd[1]: Started cri-containerd-48c472ffd5bc8a81993ede363b300298f0d928c543d6fd6843f802c7d96b4f6f.scope - libcontainer container 48c472ffd5bc8a81993ede363b300298f0d928c543d6fd6843f802c7d96b4f6f. 
Dec 13 13:17:46.685757 containerd[1955]: time="2024-12-13T13:17:46.685593252Z" level=info msg="StartContainer for \"48c472ffd5bc8a81993ede363b300298f0d928c543d6fd6843f802c7d96b4f6f\" returns successfully" Dec 13 13:17:51.180195 kubelet[3468]: E1213 13:17:51.180070 3468 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-1?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"