Feb 13 19:48:40.238168 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:48:40.238212 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:48:40.238236 kernel: KASLR disabled due to lack of seed
Feb 13 19:48:40.238252 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:48:40.238268 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:48:40.238283 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:48:40.238349 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:48:40.238366 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:48:40.238382 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:48:40.238398 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:48:40.238420 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:48:40.238435 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:48:40.238450 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:48:40.238466 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:48:40.238484 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:48:40.238505 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:48:40.238522 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:48:40.238538 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:48:40.238555 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:48:40.238571 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:48:40.238587 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:48:40.238603 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:40.238620 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:48:40.238636 kernel: Zone ranges:
Feb 13 19:48:40.238652 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:48:40.238668 kernel: DMA32 empty
Feb 13 19:48:40.238688 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:48:40.238705 kernel: Movable zone start for each node
Feb 13 19:48:40.238721 kernel: Early memory node ranges
Feb 13 19:48:40.238737 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:48:40.238753 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:48:40.238769 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:48:40.238785 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:48:40.238801 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:48:40.238817 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:48:40.238833 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:48:40.238849 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:48:40.238866 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:40.238886 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:48:40.238903 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:48:40.238926 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:48:40.238943 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:48:40.238961 kernel: psci: Trusted OS migration not required
Feb 13 19:48:40.238981 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:48:40.238999 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:48:40.239016 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:48:40.239033 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:48:40.239051 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:48:40.239068 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:48:40.239085 kernel: CPU features: detected: Spectre-v2
Feb 13 19:48:40.239102 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:48:40.239119 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:48:40.239136 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:48:40.239153 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:48:40.239175 kernel: alternatives: applying boot alternatives
Feb 13 19:48:40.239195 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:40.239213 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:48:40.239230 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:48:40.239247 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:48:40.239264 kernel: Fallback order for Node 0: 0
Feb 13 19:48:40.239281 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:48:40.239323 kernel: Policy zone: Normal
Feb 13 19:48:40.239342 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:48:40.239359 kernel: software IO TLB: area num 2.
Feb 13 19:48:40.239376 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:48:40.239400 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:48:40.239418 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:48:40.239435 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:48:40.239453 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:48:40.239470 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:48:40.239488 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:48:40.239505 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:48:40.239522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:48:40.239539 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:48:40.239556 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:48:40.239573 kernel: GICv3: 96 SPIs implemented
Feb 13 19:48:40.239595 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:48:40.239612 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:48:40.239629 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:48:40.239646 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:48:40.239663 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:48:40.239680 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:48:40.239697 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:48:40.239714 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:48:40.239731 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:48:40.239749 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:48:40.239766 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:48:40.239783 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:48:40.239805 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:48:40.239822 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:48:40.239858 kernel: Console: colour dummy device 80x25
Feb 13 19:48:40.239877 kernel: printk: console [tty1] enabled
Feb 13 19:48:40.239894 kernel: ACPI: Core revision 20230628
Feb 13 19:48:40.239912 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:48:40.239929 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:48:40.239947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:48:40.239964 kernel: landlock: Up and running.
Feb 13 19:48:40.239988 kernel: SELinux: Initializing.
Feb 13 19:48:40.240005 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:40.240023 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:40.240041 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:40.240058 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:40.240076 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:48:40.240093 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:48:40.240110 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:48:40.240128 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:48:40.240149 kernel: Remapping and enabling EFI services.
Feb 13 19:48:40.240167 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:48:40.240184 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:48:40.240202 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:48:40.240219 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:48:40.240237 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:48:40.240254 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:48:40.240271 kernel: SMP: Total of 2 processors activated.
Feb 13 19:48:40.242975 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:48:40.243022 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:48:40.243041 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:48:40.243060 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:48:40.243090 kernel: alternatives: applying system-wide alternatives
Feb 13 19:48:40.243112 kernel: devtmpfs: initialized
Feb 13 19:48:40.243131 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:48:40.243150 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:48:40.243168 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:48:40.243186 kernel: SMBIOS 3.0.0 present.
Feb 13 19:48:40.243204 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:48:40.243227 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:48:40.243245 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:48:40.243264 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:48:40.243282 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:48:40.243325 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:48:40.243345 kernel: audit: type=2000 audit(0.302:1): state=initialized audit_enabled=0 res=1
Feb 13 19:48:40.243364 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:48:40.243389 kernel: cpuidle: using governor menu
Feb 13 19:48:40.243407 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:48:40.243425 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:48:40.243443 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:48:40.243461 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:48:40.243480 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:48:40.243498 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:48:40.243516 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:48:40.243534 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:48:40.243557 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:48:40.243575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:48:40.243593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:48:40.243612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:48:40.243630 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:48:40.243648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:48:40.243666 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:48:40.243684 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:48:40.243702 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:48:40.243725 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:48:40.243743 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:48:40.243761 kernel: ACPI: Interpreter enabled
Feb 13 19:48:40.243779 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:48:40.243797 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:48:40.243816 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:48:40.244146 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:48:40.244390 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:48:40.244678 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:48:40.244889 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:48:40.245095 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:48:40.245121 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:48:40.245140 kernel: acpiphp: Slot [1] registered
Feb 13 19:48:40.245158 kernel: acpiphp: Slot [2] registered
Feb 13 19:48:40.245176 kernel: acpiphp: Slot [3] registered
Feb 13 19:48:40.245194 kernel: acpiphp: Slot [4] registered
Feb 13 19:48:40.245219 kernel: acpiphp: Slot [5] registered
Feb 13 19:48:40.245238 kernel: acpiphp: Slot [6] registered
Feb 13 19:48:40.245256 kernel: acpiphp: Slot [7] registered
Feb 13 19:48:40.245274 kernel: acpiphp: Slot [8] registered
Feb 13 19:48:40.245420 kernel: acpiphp: Slot [9] registered
Feb 13 19:48:40.245445 kernel: acpiphp: Slot [10] registered
Feb 13 19:48:40.245464 kernel: acpiphp: Slot [11] registered
Feb 13 19:48:40.245484 kernel: acpiphp: Slot [12] registered
Feb 13 19:48:40.245502 kernel: acpiphp: Slot [13] registered
Feb 13 19:48:40.245520 kernel: acpiphp: Slot [14] registered
Feb 13 19:48:40.245546 kernel: acpiphp: Slot [15] registered
Feb 13 19:48:40.245564 kernel: acpiphp: Slot [16] registered
Feb 13 19:48:40.245582 kernel: acpiphp: Slot [17] registered
Feb 13 19:48:40.245601 kernel: acpiphp: Slot [18] registered
Feb 13 19:48:40.245619 kernel: acpiphp: Slot [19] registered
Feb 13 19:48:40.245637 kernel: acpiphp: Slot [20] registered
Feb 13 19:48:40.245655 kernel: acpiphp: Slot [21] registered
Feb 13 19:48:40.245673 kernel: acpiphp: Slot [22] registered
Feb 13 19:48:40.245691 kernel: acpiphp: Slot [23] registered
Feb 13 19:48:40.245714 kernel: acpiphp: Slot [24] registered
Feb 13 19:48:40.245733 kernel: acpiphp: Slot [25] registered
Feb 13 19:48:40.245751 kernel: acpiphp: Slot [26] registered
Feb 13 19:48:40.245769 kernel: acpiphp: Slot [27] registered
Feb 13 19:48:40.245787 kernel: acpiphp: Slot [28] registered
Feb 13 19:48:40.245806 kernel: acpiphp: Slot [29] registered
Feb 13 19:48:40.245825 kernel: acpiphp: Slot [30] registered
Feb 13 19:48:40.245843 kernel: acpiphp: Slot [31] registered
Feb 13 19:48:40.245861 kernel: PCI host bridge to bus 0000:00
Feb 13 19:48:40.246104 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:48:40.246389 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:48:40.246589 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:40.246806 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:48:40.247044 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:48:40.247270 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:48:40.247520 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:48:40.247764 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:48:40.247999 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:48:40.248210 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:40.248470 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:48:40.248695 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:48:40.248906 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:40.249124 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:48:40.249836 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:40.250061 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:40.250272 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:48:40.250517 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:48:40.250727 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:48:40.250941 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:48:40.251147 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:48:40.251357 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:48:40.251543 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:40.251569 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:48:40.251588 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:48:40.251607 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:48:40.251626 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:48:40.251644 kernel: iommu: Default domain type: Translated
Feb 13 19:48:40.251663 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:48:40.251688 kernel: efivars: Registered efivars operations
Feb 13 19:48:40.251706 kernel: vgaarb: loaded
Feb 13 19:48:40.251724 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:48:40.251742 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:48:40.251760 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:48:40.251779 kernel: pnp: PnP ACPI init
Feb 13 19:48:40.252034 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:48:40.252063 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:48:40.252087 kernel: NET: Registered PF_INET protocol family
Feb 13 19:48:40.252107 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:48:40.252126 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:48:40.252144 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:48:40.252163 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:48:40.252181 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:48:40.252200 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:48:40.252218 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:40.252237 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:40.252260 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:48:40.252278 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:48:40.252314 kernel: kvm [1]: HYP mode not available
Feb 13 19:48:40.252336 kernel: Initialise system trusted keyrings
Feb 13 19:48:40.252355 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:48:40.252373 kernel: Key type asymmetric registered
Feb 13 19:48:40.252392 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:48:40.252410 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:48:40.252429 kernel: io scheduler mq-deadline registered
Feb 13 19:48:40.252454 kernel: io scheduler kyber registered
Feb 13 19:48:40.252473 kernel: io scheduler bfq registered
Feb 13 19:48:40.252702 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:48:40.252731 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:48:40.252750 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:48:40.252768 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:48:40.252787 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:48:40.252805 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:48:40.252831 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:48:40.253053 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:48:40.253080 kernel: printk: console [ttyS0] disabled
Feb 13 19:48:40.253100 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:48:40.253118 kernel: printk: console [ttyS0] enabled
Feb 13 19:48:40.253136 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:48:40.253155 kernel: thunder_xcv, ver 1.0
Feb 13 19:48:40.253173 kernel: thunder_bgx, ver 1.0
Feb 13 19:48:40.253191 kernel: nicpf, ver 1.0
Feb 13 19:48:40.253214 kernel: nicvf, ver 1.0
Feb 13 19:48:40.253484 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:48:40.253690 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:48:39 UTC (1739476119)
Feb 13 19:48:40.253719 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:48:40.253738 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:48:40.253757 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:48:40.253776 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:48:40.253794 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:48:40.253820 kernel: Segment Routing with IPv6
Feb 13 19:48:40.253839 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:48:40.253857 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:48:40.253875 kernel: Key type dns_resolver registered
Feb 13 19:48:40.253894 kernel: registered taskstats version 1
Feb 13 19:48:40.253912 kernel: Loading compiled-in X.509 certificates
Feb 13 19:48:40.253931 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:48:40.253949 kernel: Key type .fscrypt registered
Feb 13 19:48:40.253968 kernel: Key type fscrypt-provisioning registered
Feb 13 19:48:40.253991 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:48:40.254010 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:48:40.254028 kernel: ima: No architecture policies found
Feb 13 19:48:40.254046 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:48:40.254064 kernel: clk: Disabling unused clocks
Feb 13 19:48:40.254083 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:48:40.254101 kernel: Run /init as init process
Feb 13 19:48:40.254119 kernel: with arguments:
Feb 13 19:48:40.254137 kernel: /init
Feb 13 19:48:40.254155 kernel: with environment:
Feb 13 19:48:40.254177 kernel: HOME=/
Feb 13 19:48:40.254196 kernel: TERM=linux
Feb 13 19:48:40.254214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:48:40.254237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:40.254262 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:40.254282 systemd[1]: Detected architecture arm64.
Feb 13 19:48:40.254399 systemd[1]: Running in initrd.
Feb 13 19:48:40.254427 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:48:40.254448 systemd[1]: Hostname set to .
Feb 13 19:48:40.254469 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:40.254490 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:48:40.254510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:40.254581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:40.254791 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:48:40.254817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:40.254844 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:48:40.254866 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:48:40.254889 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:48:40.254910 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:48:40.254930 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:40.254950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:40.254970 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:48:40.254994 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:40.255015 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:40.255034 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:48:40.255055 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:40.255075 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:40.255095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:40.255116 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:40.255136 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:40.255156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:40.255181 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:40.255201 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:48:40.255221 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:48:40.255241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:40.255261 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:48:40.255281 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:48:40.255834 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:40.256143 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:40.256487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:40.256533 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:40.256559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:40.256579 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:48:40.256648 systemd-journald[249]: Collecting audit messages is disabled.
Feb 13 19:48:40.256700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:40.256721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:48:40.256740 kernel: Bridge firewalling registered
Feb 13 19:48:40.256765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:40.256785 systemd-journald[249]: Journal started
Feb 13 19:48:40.256823 systemd-journald[249]: Runtime Journal (/run/log/journal/ec2cdd2c7fe25a71ff47b886571ab5ae) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:40.201399 systemd-modules-load[251]: Inserted module 'overlay'
Feb 13 19:48:40.246657 systemd-modules-load[251]: Inserted module 'br_netfilter'
Feb 13 19:48:40.264449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:40.267899 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:40.279655 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:40.292264 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:40.295739 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:40.297264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:40.301443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:40.331816 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:40.342965 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:40.357636 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:48:40.366884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:40.381774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:40.389391 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:40.437329 dracut-cmdline[283]: dracut-dracut-053
Feb 13 19:48:40.441382 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:40.480972 systemd-resolved[285]: Positive Trust Anchors:
Feb 13 19:48:40.483452 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:40.486532 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:40.602342 kernel: SCSI subsystem initialized
Feb 13 19:48:40.610348 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:48:40.624415 kernel: iscsi: registered transport (tcp)
Feb 13 19:48:40.649124 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:48:40.649233 kernel: QLogic iSCSI HBA Driver
Feb 13 19:48:40.727337 kernel: random: crng init done
Feb 13 19:48:40.727761 systemd-resolved[285]: Defaulting to hostname 'linux'.
Feb 13 19:48:40.731751 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:40.736160 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:40.756478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:40.766735 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:48:40.808892 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:48:40.808968 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:48:40.810698 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:48:40.876336 kernel: raid6: neonx8 gen() 6689 MB/s
Feb 13 19:48:40.893345 kernel: raid6: neonx4 gen() 6509 MB/s
Feb 13 19:48:40.910352 kernel: raid6: neonx2 gen() 5337 MB/s
Feb 13 19:48:40.927365 kernel: raid6: neonx1 gen() 3900 MB/s
Feb 13 19:48:40.944358 kernel: raid6: int64x8 gen() 3767 MB/s
Feb 13 19:48:40.961356 kernel: raid6: int64x4 gen() 3651 MB/s
Feb 13 19:48:40.978360 kernel: raid6: int64x2 gen() 3534 MB/s
Feb 13 19:48:40.996163 kernel: raid6: int64x1 gen() 2740 MB/s
Feb 13 19:48:40.996235 kernel: raid6: using algorithm neonx8 gen() 6689 MB/s
Feb 13 19:48:41.014157 kernel: raid6: .... xor() 4847 MB/s, rmw enabled
Feb 13 19:48:41.014248 kernel: raid6: using neon recovery algorithm
Feb 13 19:48:41.023510 kernel: xor: measuring software checksum speed
Feb 13 19:48:41.023621 kernel: 8regs : 11021 MB/sec
Feb 13 19:48:41.024649 kernel: 32regs : 11923 MB/sec
Feb 13 19:48:41.025886 kernel: arm64_neon : 9552 MB/sec
Feb 13 19:48:41.025961 kernel: xor: using function: 32regs (11923 MB/sec)
Feb 13 19:48:41.117346 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:48:41.136333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:41.148686 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:41.182002 systemd-udevd[468]: Using default interface naming scheme 'v255'.
Feb 13 19:48:41.191871 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:41.203573 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:48:41.242902 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Feb 13 19:48:41.316284 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:41.326707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:41.460852 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:41.475624 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:48:41.519421 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:41.522084 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:41.525105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:41.527513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:41.545506 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:48:41.597012 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:41.642222 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:48:41.642313 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:48:41.690204 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:48:41.690533 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:48:41.691226 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:70:b2:f7:d8:77 Feb 13 19:48:41.696275 (udev-worker)[513]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:48:41.703960 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:48:41.704043 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:48:41.715332 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:48:41.721328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:41.721579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:41.726241 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:41.736381 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:48:41.736443 kernel: GPT:9289727 != 16777215 Feb 13 19:48:41.736472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:41.741486 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:48:41.744532 kernel: GPT:9289727 != 16777215 Feb 13 19:48:41.744579 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:48:41.744627 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:41.737632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:41.744648 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:41.757971 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:41.786148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:41.800226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:41.845549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:48:41.914863 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (525) Feb 13 19:48:41.944363 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (536) Feb 13 19:48:42.006522 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:48:42.036335 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:48:42.066447 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:48:42.071416 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:48:42.096091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:48:42.115660 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:48:42.128550 disk-uuid[660]: Primary Header is updated. Feb 13 19:48:42.128550 disk-uuid[660]: Secondary Entries is updated. Feb 13 19:48:42.128550 disk-uuid[660]: Secondary Header is updated. Feb 13 19:48:42.138403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:42.148335 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:42.158312 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:43.157335 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:43.158830 disk-uuid[661]: The operation has completed successfully. Feb 13 19:48:43.347119 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:48:43.347393 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:48:43.402615 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Feb 13 19:48:43.420487 sh[1005]: Success Feb 13 19:48:43.449364 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:48:43.595157 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:48:43.600327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:48:43.611618 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:48:43.655256 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:48:43.655368 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:43.655398 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:48:43.658458 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:48:43.658538 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:48:43.774354 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:48:43.802721 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:48:43.807430 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:48:43.819599 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:48:43.829699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:48:43.866444 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:43.866578 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:43.867880 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:43.876577 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:43.901238 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 19:48:43.903843 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:43.917110 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:48:43.931851 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:48:44.083498 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:44.097740 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:44.169731 systemd-networkd[1210]: lo: Link UP Feb 13 19:48:44.169764 systemd-networkd[1210]: lo: Gained carrier Feb 13 19:48:44.175751 systemd-networkd[1210]: Enumeration completed Feb 13 19:48:44.177479 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:44.182166 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:44.182188 systemd-networkd[1210]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:44.183232 systemd[1]: Reached target network.target - Network. Feb 13 19:48:44.196002 systemd-networkd[1210]: eth0: Link UP Feb 13 19:48:44.196021 systemd-networkd[1210]: eth0: Gained carrier Feb 13 19:48:44.196039 systemd-networkd[1210]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:48:44.209463 systemd-networkd[1210]: eth0: DHCPv4 address 172.31.28.108/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:48:44.367752 ignition[1124]: Ignition 2.19.0 Feb 13 19:48:44.367802 ignition[1124]: Stage: fetch-offline Feb 13 19:48:44.368710 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:44.369851 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:44.376755 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:44.371868 ignition[1124]: Ignition finished successfully Feb 13 19:48:44.401931 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:48:44.428406 ignition[1220]: Ignition 2.19.0 Feb 13 19:48:44.428997 ignition[1220]: Stage: fetch Feb 13 19:48:44.429996 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:44.430029 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:44.430212 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:44.444215 ignition[1220]: PUT result: OK Feb 13 19:48:44.447989 ignition[1220]: parsed url from cmdline: "" Feb 13 19:48:44.448010 ignition[1220]: no config URL provided Feb 13 19:48:44.448029 ignition[1220]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:48:44.448062 ignition[1220]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:48:44.448106 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:44.450504 ignition[1220]: PUT result: OK Feb 13 19:48:44.450609 ignition[1220]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:48:44.454746 ignition[1220]: GET result: OK Feb 13 19:48:44.454957 ignition[1220]: parsing config with SHA512: fe7e7e4c859e8f5be48f809bb6a918402d2f143e583cade9233a39e66d67cf0357a9d305cff964283643183cdca635884aeb08ccd669621faac6852abee87e05 Feb 13 19:48:44.474056 unknown[1220]: fetched base config from "system" Feb 13 
19:48:44.474331 unknown[1220]: fetched base config from "system" Feb 13 19:48:44.477073 ignition[1220]: fetch: fetch complete Feb 13 19:48:44.474357 unknown[1220]: fetched user config from "aws" Feb 13 19:48:44.477096 ignition[1220]: fetch: fetch passed Feb 13 19:48:44.477212 ignition[1220]: Ignition finished successfully Feb 13 19:48:44.486636 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:48:44.495869 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:48:44.545807 ignition[1226]: Ignition 2.19.0 Feb 13 19:48:44.545846 ignition[1226]: Stage: kargs Feb 13 19:48:44.546969 ignition[1226]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:44.547000 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:44.547180 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:44.549714 ignition[1226]: PUT result: OK Feb 13 19:48:44.560656 ignition[1226]: kargs: kargs passed Feb 13 19:48:44.560786 ignition[1226]: Ignition finished successfully Feb 13 19:48:44.576392 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:48:44.593670 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:48:44.630505 ignition[1232]: Ignition 2.19.0 Feb 13 19:48:44.630536 ignition[1232]: Stage: disks Feb 13 19:48:44.632413 ignition[1232]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:44.632447 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:44.632719 ignition[1232]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:44.634441 ignition[1232]: PUT result: OK Feb 13 19:48:44.644871 ignition[1232]: disks: disks passed Feb 13 19:48:44.645029 ignition[1232]: Ignition finished successfully Feb 13 19:48:44.651428 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:48:44.654025 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 19:48:44.656353 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:48:44.660323 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:44.662264 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:44.664393 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:44.687755 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:48:44.731310 systemd-fsck[1240]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:48:44.735519 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:48:44.754680 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:48:44.842383 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:48:44.843469 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:48:44.847238 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:44.866477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:44.871597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:48:44.878134 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:48:44.878224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:48:44.878275 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:44.904346 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1259) Feb 13 19:48:44.907110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Feb 13 19:48:44.914045 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:44.914143 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:44.915492 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:44.919747 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:48:44.931347 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:44.935281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:48:45.300101 initrd-setup-root[1283]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:48:45.324500 initrd-setup-root[1290]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:48:45.347110 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:48:45.359495 initrd-setup-root[1304]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:48:45.734903 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:45.747540 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:48:45.762362 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:48:45.786338 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:45.787242 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:48:45.829413 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:48:45.844469 ignition[1372]: INFO : Ignition 2.19.0 Feb 13 19:48:45.847628 ignition[1372]: INFO : Stage: mount Feb 13 19:48:45.847628 ignition[1372]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:45.847628 ignition[1372]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:45.847628 ignition[1372]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:45.855799 ignition[1372]: INFO : PUT result: OK Feb 13 19:48:45.861445 ignition[1372]: INFO : mount: mount passed Feb 13 19:48:45.864049 ignition[1372]: INFO : Ignition finished successfully Feb 13 19:48:45.868783 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:48:45.880512 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:48:45.916250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:45.941847 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1383) Feb 13 19:48:45.941934 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:45.943600 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:45.944771 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:45.950337 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:45.954517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:48:45.999398 ignition[1399]: INFO : Ignition 2.19.0 Feb 13 19:48:45.999398 ignition[1399]: INFO : Stage: files Feb 13 19:48:46.002700 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:46.002700 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:46.002700 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:46.008933 ignition[1399]: INFO : PUT result: OK Feb 13 19:48:46.013827 ignition[1399]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:48:46.016502 ignition[1399]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:48:46.016502 ignition[1399]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:48:46.042743 ignition[1399]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:48:46.045327 ignition[1399]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:48:46.045327 ignition[1399]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:48:46.044617 unknown[1399]: wrote ssh authorized keys file for user: core Feb 13 19:48:46.049737 systemd-networkd[1210]: eth0: Gained IPv6LL Feb 13 19:48:46.059430 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:48:46.066912 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:48:46.244138 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:48:46.428448 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:48:46.432433 ignition[1399]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:48:46.432433 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 19:48:46.922599 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:48:47.076625 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:47.080251 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:48:47.482570 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:48:47.869391 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:47.869391 ignition[1399]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: op(e): [finished] setting 
preset to enabled for "prepare-helm.service" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:47.877682 ignition[1399]: INFO : files: files passed Feb 13 19:48:47.877682 ignition[1399]: INFO : Ignition finished successfully Feb 13 19:48:47.879366 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:48:47.902704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:48:47.910553 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:48:47.936170 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:48:47.936969 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:48:47.956900 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:47.956900 initrd-setup-root-after-ignition[1428]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:47.966237 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:47.974429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:47.977812 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:48:47.996672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:48:48.064002 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:48:48.065035 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Feb 13 19:48:48.071257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:48:48.075489 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:48:48.079482 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:48:48.087883 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:48:48.128709 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:48.140690 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:48:48.172488 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:48.177306 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:48.180094 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:48:48.186131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:48:48.186674 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:48.191185 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:48:48.193894 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:48:48.197581 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:48:48.200966 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:48.203691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:48.212986 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:48:48.215172 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:48.221767 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:48:48.224109 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Feb 13 19:48:48.227314 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:48:48.232705 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:48:48.233151 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:48.242349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:48.244822 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:48.247955 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:48:48.254014 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:48.260934 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:48:48.261197 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:48.267404 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:48:48.268007 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:48.275895 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:48:48.276467 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:48:48.287648 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:48:48.290816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:48:48.291228 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:48.303756 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:48:48.307709 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:48:48.308074 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:48.312205 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:48:48.312505 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:48:48.328107 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:48:48.331640 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:48:48.363640 ignition[1452]: INFO : Ignition 2.19.0 Feb 13 19:48:48.370332 ignition[1452]: INFO : Stage: umount Feb 13 19:48:48.370332 ignition[1452]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:48.370332 ignition[1452]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:48.370332 ignition[1452]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:48.370332 ignition[1452]: INFO : PUT result: OK Feb 13 19:48:48.369966 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:48:48.384999 ignition[1452]: INFO : umount: umount passed Feb 13 19:48:48.387432 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:48:48.389559 ignition[1452]: INFO : Ignition finished successfully Feb 13 19:48:48.391489 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:48:48.398373 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:48:48.399503 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:48:48.405924 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:48:48.406234 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:48:48.411876 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:48:48.412016 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:48:48.414078 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:48:48.414214 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:48:48.416235 systemd[1]: Stopped target network.target - Network. Feb 13 19:48:48.417966 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Feb 13 19:48:48.418107 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:48.420830 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:48:48.424203 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:48:48.440639 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:48.443933 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:48:48.448429 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:48:48.450353 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:48:48.450445 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:48.452411 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:48:48.452494 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:48.454523 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:48:48.454628 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:48:48.456686 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:48:48.456781 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:48.458897 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:48:48.458992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:48.463120 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:48:48.465604 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:48:48.472669 systemd-networkd[1210]: eth0: DHCPv6 lease lost Feb 13 19:48:48.479035 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:48:48.481541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:48:48.502985 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Feb 13 19:48:48.503327 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:48:48.511201 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:48:48.511384 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:48.522766 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:48:48.531708 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:48:48.531975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:48.535517 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:48:48.535641 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:48.538488 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:48:48.538587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:48.541455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:48:48.541619 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:48.546840 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:48.590018 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:48:48.591116 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:48.601950 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:48:48.602065 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:48.605518 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:48:48.605598 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:48.607754 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 13 19:48:48.607951 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:48.617672 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:48:48.617779 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:48:48.619844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:48.619929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:48.644703 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:48:48.647256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:48:48.647397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:48.649820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:48.649904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:48.655608 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:48:48.655868 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:48:48.671398 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:48:48.673459 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:48:48.684747 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:48:48.715682 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:48:48.733806 systemd[1]: Switching root. Feb 13 19:48:48.787129 systemd-journald[249]: Journal stopped Feb 13 19:48:51.778589 systemd-journald[249]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:48:51.778747 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:48:51.778801 kernel: SELinux: policy capability open_perms=1 Feb 13 19:48:51.778833 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:48:51.778866 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:48:51.778897 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:48:51.778929 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:48:51.778960 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:48:51.779002 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:48:51.779039 kernel: audit: type=1403 audit(1739476129.580:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:48:51.779071 systemd[1]: Successfully loaded SELinux policy in 82.570ms. Feb 13 19:48:51.779120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.927ms. Feb 13 19:48:51.779156 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:51.779188 systemd[1]: Detected virtualization amazon. Feb 13 19:48:51.779219 systemd[1]: Detected architecture arm64. Feb 13 19:48:51.779257 systemd[1]: Detected first boot. Feb 13 19:48:51.783073 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:51.783141 zram_generator::config[1495]: No configuration found. Feb 13 19:48:51.783197 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:48:51.783232 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:48:51.783266 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 19:48:51.785518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:48:51.785654 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:48:51.785689 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:48:51.785724 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:48:51.785765 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:48:51.785821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:48:51.785858 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:48:51.785895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:48:51.785933 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:48:51.785964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:51.786045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:51.786087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:48:51.786126 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:48:51.786168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:48:51.786204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:51.786235 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:48:51.786265 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:51.786363 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 19:48:51.786400 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:48:51.786431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:51.786466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:48:51.786503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:51.786538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:51.786568 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:51.786603 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:51.786635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:48:51.786666 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:48:51.786696 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:51.786728 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:51.787573 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:51.787613 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:48:51.787657 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:48:51.787688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:48:51.787722 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:48:51.787755 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:48:51.787802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:48:51.787842 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 19:48:51.787879 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:48:51.787913 systemd[1]: Reached target machines.target - Containers. Feb 13 19:48:51.787956 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:48:51.787988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:51.788019 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:51.788054 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:48:51.788085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:51.788122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:51.788152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:51.788184 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:48:51.791450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:51.791529 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:48:51.791564 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:48:51.791596 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:48:51.791627 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:48:51.791657 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:48:51.791687 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:51.791718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 19:48:51.791748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:48:51.791811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:48:51.791847 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:51.791881 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:48:51.791913 systemd[1]: Stopped verity-setup.service. Feb 13 19:48:51.791948 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:48:51.791978 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:48:51.792009 kernel: loop: module loaded Feb 13 19:48:51.792042 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:48:51.792076 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:48:51.792115 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:48:51.792147 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:48:51.792177 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:51.792208 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:48:51.792240 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:48:51.792278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:51.792346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:51.792384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:51.792417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:51.792453 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:51.792485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 19:48:51.792516 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:51.792549 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:48:51.792697 systemd-journald[1580]: Collecting audit messages is disabled. Feb 13 19:48:51.792800 kernel: fuse: init (API version 7.39) Feb 13 19:48:51.792844 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:48:51.792883 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:48:51.792936 systemd-journald[1580]: Journal started Feb 13 19:48:51.793004 systemd-journald[1580]: Runtime Journal (/run/log/journal/ec2cdd2c7fe25a71ff47b886571ab5ae) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:48:51.061195 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:48:51.137543 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:48:51.811960 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:48:51.812061 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:48:51.812105 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:51.138693 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:48:51.826964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:48:51.827089 kernel: ACPI: bus type drm_connector registered Feb 13 19:48:51.846780 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:48:51.867699 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:48:51.873394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 19:48:51.888524 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:48:51.892330 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:51.906182 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:48:51.915336 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:51.925958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:51.955226 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:48:51.969534 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:51.974440 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:48:51.978884 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:51.981419 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:51.985126 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:48:51.986945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:48:51.991699 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:48:51.998608 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:48:52.003567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:48:52.044093 kernel: loop0: detected capacity change from 0 to 189592 Feb 13 19:48:52.069885 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:48:52.091320 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:48:52.096689 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Feb 13 19:48:52.112605 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:48:52.122679 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:48:52.141938 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:48:52.147600 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:48:52.173441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:52.192421 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 19:48:52.201724 systemd-journald[1580]: Time spent on flushing to /var/log/journal/ec2cdd2c7fe25a71ff47b886571ab5ae is 75.032ms for 917 entries. Feb 13 19:48:52.201724 systemd-journald[1580]: System Journal (/var/log/journal/ec2cdd2c7fe25a71ff47b886571ab5ae) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:48:52.300794 systemd-journald[1580]: Received client request to flush runtime journal. Feb 13 19:48:52.300920 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 19:48:52.208635 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:48:52.210665 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:48:52.238918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:52.250740 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:48:52.313913 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:48:52.324794 udevadm[1640]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:48:52.343589 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Feb 13 19:48:52.358886 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:48:52.447475 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:48:52.452124 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Feb 13 19:48:52.452175 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Feb 13 19:48:52.474478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:52.556365 kernel: loop4: detected capacity change from 0 to 189592 Feb 13 19:48:52.597897 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 19:48:52.620355 kernel: loop6: detected capacity change from 0 to 114328 Feb 13 19:48:52.642828 kernel: loop7: detected capacity change from 0 to 114432 Feb 13 19:48:52.659829 (sd-merge)[1650]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:48:52.662706 (sd-merge)[1650]: Merged extensions into '/usr'. Feb 13 19:48:52.684437 systemd[1]: Reloading requested from client PID 1605 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:48:52.684463 systemd[1]: Reloading... Feb 13 19:48:52.827979 zram_generator::config[1672]: No configuration found. Feb 13 19:48:53.213033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:53.371722 systemd[1]: Reloading finished in 686 ms. Feb 13 19:48:53.429954 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:48:53.433354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:48:53.453625 systemd[1]: Starting ensure-sysext.service... Feb 13 19:48:53.462731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Feb 13 19:48:53.472368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:53.505927 systemd[1]: Reloading requested from client PID 1728 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:48:53.505988 systemd[1]: Reloading... Feb 13 19:48:53.581739 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:48:53.583942 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:48:53.588620 systemd-udevd[1730]: Using default interface naming scheme 'v255'. Feb 13 19:48:53.590893 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:48:53.591752 systemd-tmpfiles[1729]: ACLs are not supported, ignoring. Feb 13 19:48:53.591993 systemd-tmpfiles[1729]: ACLs are not supported, ignoring. Feb 13 19:48:53.609901 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:53.611794 systemd-tmpfiles[1729]: Skipping /boot Feb 13 19:48:53.649921 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:53.650203 systemd-tmpfiles[1729]: Skipping /boot Feb 13 19:48:53.784699 zram_generator::config[1765]: No configuration found. Feb 13 19:48:53.990015 (udev-worker)[1772]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:48:54.006268 ldconfig[1601]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:48:54.185343 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1808) Feb 13 19:48:54.289094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:48:54.455468 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:48:54.455672 systemd[1]: Reloading finished in 948 ms. Feb 13 19:48:54.501035 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:54.505395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:48:54.521600 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:54.597015 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:54.606090 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:48:54.608832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:54.615003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:54.623889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:54.631935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:54.635872 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:54.641947 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:48:54.653966 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:54.668595 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:48:54.674084 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:48:54.718614 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:54.720431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:54.734875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 19:48:54.735210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:54.754610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:54.754965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:54.785807 systemd[1]: Finished ensure-sysext.service. Feb 13 19:48:54.821765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:48:54.831669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:48:54.842510 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:48:54.846842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:54.851704 augenrules[1955]: No rules Feb 13 19:48:54.854651 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:48:54.866641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:54.868842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:54.875736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:48:54.878643 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:54.878834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:54.878930 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:48:54.896642 lvm[1956]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 19:48:54.909792 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:48:54.918940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:54.924924 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:54.937190 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:48:54.949576 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:48:54.967465 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:54.967875 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:54.977501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:48:54.980905 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:55.000606 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:48:55.025536 lvm[1970]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:55.050236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:48:55.054572 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:48:55.060917 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:48:55.063833 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:48:55.075015 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:48:55.113017 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:48:55.240244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:48:55.246764 systemd-networkd[1932]: lo: Link UP Feb 13 19:48:55.246790 systemd-networkd[1932]: lo: Gained carrier Feb 13 19:48:55.248969 systemd-resolved[1935]: Positive Trust Anchors: Feb 13 19:48:55.249519 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:55.249591 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:55.249892 systemd-networkd[1932]: Enumeration completed Feb 13 19:48:55.250104 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:55.252842 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:55.252867 systemd-networkd[1932]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:55.256277 systemd-networkd[1932]: eth0: Link UP Feb 13 19:48:55.256651 systemd-networkd[1932]: eth0: Gained carrier Feb 13 19:48:55.256687 systemd-networkd[1932]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:55.264740 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:48:55.271453 systemd-networkd[1932]: eth0: DHCPv4 address 172.31.28.108/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:48:55.273387 systemd-resolved[1935]: Defaulting to hostname 'linux'. 
Feb 13 19:48:55.283162 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:55.285913 systemd[1]: Reached target network.target - Network. Feb 13 19:48:55.288522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:55.290907 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:55.293184 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:48:55.295732 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:48:55.298506 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:48:55.301015 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:48:55.303467 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:48:55.305937 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:48:55.306003 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:55.307867 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:55.311279 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:48:55.316504 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:48:55.329997 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:48:55.333815 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:48:55.336697 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:55.338996 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:55.341496 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 19:48:55.341589 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:55.359128 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:48:55.367451 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:48:55.376181 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:48:55.391552 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:48:55.402185 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:48:55.406051 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:48:55.417126 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:48:55.427762 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:48:55.428272 jq[1994]: false Feb 13 19:48:55.450140 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:48:55.458083 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:48:55.477657 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:48:55.485694 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:48:55.500668 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:48:55.504779 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:48:55.505541 dbus-daemon[1993]: [system] SELinux support is enabled Feb 13 19:48:55.505878 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:48:55.513869 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 19:48:55.524886 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:48:55.525003 dbus-daemon[1993]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1932 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:55.528859 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:48:55.547245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:48:55.548893 jq[2006]: true Feb 13 19:48:55.547675 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:48:55.575665 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:48:55.575740 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:48:55.578733 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:48:55.578788 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:48:55.581714 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:48:55.585933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:48:55.586781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:48:55.608714 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:48:55.652523 tar[2008]: linux-arm64/helm Feb 13 19:48:55.719707 systemd[1]: Finished setup-oem.service - Setup OEM. 
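Several units above are skipped on `Condition...` checks, where a leading `!` negates the test (e.g. `ConditionPathExists=!/usr/.noupdate`, `ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt`). A rough sketch of that semantic with a hypothetical helper name, not systemd's actual implementation:

```python
import os

# Mimic systemd's ConditionPathExists= semantics: a leading '!'
# inverts the result, so '!/usr/.noupdate' passes when the path
# is absent (which is why update-engine-stub.timer was skipped
# above only on hosts where /usr/.noupdate exists).
def condition_path_exists(spec: str) -> bool:
    negate = spec.startswith("!")
    path = spec.lstrip("!")
    exists = os.path.exists(path)
    return (not exists) if negate else exists
```

Under this reading, a unit with a failing condition is "skipped because of an unmet condition check" rather than failed, exactly as the journal entries above report.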
Feb 13 19:48:55.727024 jq[2011]: true Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: ---------------------------------------------------- Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: corporation. Support and training for ntp-4 are Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: available at https://www.nwtime.org/support Feb 13 19:48:55.727629 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: ---------------------------------------------------- Feb 13 19:48:55.726660 (ntainerd)[2028]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:48:55.745499 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: proto: precision = 0.096 usec (-23) Feb 13 19:48:55.745499 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: basedate set to 2025-02-01 Feb 13 19:48:55.745499 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:55.745499 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:55.745499 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listen and drop on 1 v4wildcard 0.0.0.0:123 
Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listen normally on 3 eth0 172.31.28.108:123 Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: bind(21) AF_INET6 fe80::470:b2ff:fef7:d877%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: unable to create socket on eth0 (5) for fe80::470:b2ff:fef7:d877%2#123 Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: failed to init interface for address fe80::470:b2ff:fef7:d877%2 Feb 13 19:48:55.762226 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: Listening on routing socket on fd #21 for interface updates 
Feb 13 19:48:55.774329 extend-filesystems[1995]: Found loop4 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found loop5 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found loop6 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found loop7 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p1 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p2 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p3 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found usr Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p4 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p6 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p7 Feb 13 19:48:55.774329 extend-filesystems[1995]: Found nvme0n1p9 Feb 13 19:48:55.774329 extend-filesystems[1995]: Checking size of /dev/nvme0n1p9 Feb 13 19:48:55.829633 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:48:55.839156 coreos-metadata[1992]: Feb 13 19:48:55.805 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:55.839156 coreos-metadata[1992]: Feb 13 19:48:55.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:48:55.839890 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:55.839890 ntpd[1997]: 13 Feb 19:48:55 ntpd[1997]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:55.830710 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 19:48:55.851531 coreos-metadata[1992]: Feb 13 19:48:55.843 INFO Fetch successful Feb 13 19:48:55.851531 coreos-metadata[1992]: Feb 13 19:48:55.843 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:48:55.851531 coreos-metadata[1992]: Feb 13 19:48:55.850 INFO Fetch successful Feb 13 19:48:55.851531 coreos-metadata[1992]: Feb 13 19:48:55.850 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:48:55.851804 update_engine[2005]: I20250213 19:48:55.850962 2005 main.cc:92] Flatcar Update Engine starting Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.859 INFO Fetch successful Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.859 INFO Fetch successful Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.863 INFO Fetch failed with 404: resource not found Feb 13 19:48:55.863796 coreos-metadata[1992]: Feb 13 19:48:55.863 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:48:55.866042 coreos-metadata[1992]: Feb 13 19:48:55.865 INFO Fetch successful Feb 13 19:48:55.875081 coreos-metadata[1992]: Feb 13 19:48:55.866 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:48:55.875081 coreos-metadata[1992]: Feb 13 19:48:55.872 INFO Fetch successful Feb 13 19:48:55.875081 coreos-metadata[1992]: Feb 13 19:48:55.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:48:55.875235 systemd[1]: Started update-engine.service - Update Engine. 
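The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: first PUT `/latest/api/token` to obtain a session token, then GET metadata paths presenting that token in a header. A sketch that only builds the two requests without performing any network I/O; the function names are illustrative, while the header names are the documented AWS ones:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def build_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # Step 1: PUT a session-token request with a TTL header,
    # matching the "Putting http://169.254.169.254/latest/api/token"
    # line in the log above.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def build_metadata_request(token: str, path: str) -> urllib.request.Request:
    # Step 2: GET a metadata path (e.g. instance-id, local-ipv4),
    # presenting the token obtained in step 1.
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

Note how the log's `ipv6` fetch returns 404 on an instance with no IPv6 address; coreos-metadata simply records the miss and continues, so a client of this API should treat 404 as "attribute absent", not as a hard failure.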
Feb 13 19:48:55.882352 update_engine[2005]: I20250213 19:48:55.876164 2005 update_check_scheduler.cc:74] Next update check in 6m12s Feb 13 19:48:55.895343 coreos-metadata[1992]: Feb 13 19:48:55.895 INFO Fetch successful Feb 13 19:48:55.895343 coreos-metadata[1992]: Feb 13 19:48:55.895 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:48:55.895792 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:48:55.906365 coreos-metadata[1992]: Feb 13 19:48:55.905 INFO Fetch successful Feb 13 19:48:55.906365 coreos-metadata[1992]: Feb 13 19:48:55.905 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:48:55.911759 coreos-metadata[1992]: Feb 13 19:48:55.909 INFO Fetch successful Feb 13 19:48:55.914654 extend-filesystems[1995]: Resized partition /dev/nvme0n1p9 Feb 13 19:48:55.928140 extend-filesystems[2058]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:48:55.944315 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:48:55.970456 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:48:56.021723 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:48:56.025582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:48:56.069399 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1795) Feb 13 19:48:56.089386 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:48:56.105673 extend-filesystems[2058]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:48:56.105673 extend-filesystems[2058]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:48:56.105673 extend-filesystems[2058]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 19:48:56.126894 extend-filesystems[1995]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:48:56.134032 bash[2073]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:56.131183 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:48:56.131793 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:48:56.137069 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:48:56.165621 systemd[1]: Starting sshkeys.service... Feb 13 19:48:56.228256 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:48:56.281097 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:48:56.348050 systemd-logind[2004]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:48:56.355533 systemd-logind[2004]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:48:56.357829 systemd-logind[2004]: New seat seat0. Feb 13 19:48:56.368523 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:48:56.383738 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:48:56.384538 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:48:56.391381 dbus-daemon[1993]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:56.401163 systemd[1]: Starting polkit.service - Authorization Manager... 
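For scale, the resize messages above grow the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB. Converting those block counts to bytes:

```python
# EXT4 block counts from the resize2fs/kernel messages above,
# at the filesystem's 4 KiB block size.
BLOCK = 4096

def blocks_to_bytes(blocks: int) -> int:
    return blocks * BLOCK

old, new = blocks_to_bytes(553472), blocks_to_bytes(1489915)
print(f"{old / 2**30:.2f} GiB -> {new / 2**30:.2f} GiB")  # 2.11 GiB -> 5.68 GiB
```

This is the usual first-boot pattern on cloud images: the partition is grown to fill the disk, then the filesystem is resized on-line to match.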
Feb 13 19:48:56.455002 polkitd[2119]: Started polkitd version 121 Feb 13 19:48:56.468646 polkitd[2119]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:48:56.468964 polkitd[2119]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:48:56.472174 polkitd[2119]: Finished loading, compiling and executing 2 rules Feb 13 19:48:56.473256 dbus-daemon[1993]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:48:56.473931 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:48:56.477247 polkitd[2119]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:48:56.591126 systemd-resolved[1935]: System hostname changed to 'ip-172-31-28-108'. Feb 13 19:48:56.591724 systemd-hostnamed[2019]: Hostname set to (transient) Feb 13 19:48:56.673815 locksmithd[2049]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:48:56.730192 ntpd[1997]: 13 Feb 19:48:56 ntpd[1997]: bind(24) AF_INET6 fe80::470:b2ff:fef7:d877%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:56.730192 ntpd[1997]: 13 Feb 19:48:56 ntpd[1997]: unable to create socket on eth0 (6) for fe80::470:b2ff:fef7:d877%2#123 Feb 13 19:48:56.730192 ntpd[1997]: 13 Feb 19:48:56 ntpd[1997]: failed to init interface for address fe80::470:b2ff:fef7:d877%2 Feb 13 19:48:56.806357 containerd[2028]: time="2025-02-13T19:48:56.804409933Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:48:56.829475 coreos-metadata[2096]: Feb 13 19:48:56.827 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 
Feb 13 19:48:56.837689 coreos-metadata[2096]: Feb 13 19:48:56.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:48:56.853940 coreos-metadata[2096]: Feb 13 19:48:56.853 INFO Fetch successful Feb 13 19:48:56.853940 coreos-metadata[2096]: Feb 13 19:48:56.853 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:48:56.857459 coreos-metadata[2096]: Feb 13 19:48:56.857 INFO Fetch successful Feb 13 19:48:56.861212 unknown[2096]: wrote ssh authorized keys file for user: core Feb 13 19:48:56.942749 update-ssh-keys[2194]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:56.943921 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:48:56.956749 systemd[1]: Finished sshkeys.service. Feb 13 19:48:57.002864 containerd[2028]: time="2025-02-13T19:48:57.002659246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.012549946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.012666082Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.012732346Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.013256134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.013350214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.013525246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:57.014837 containerd[2028]: time="2025-02-13T19:48:57.013567030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.016823 containerd[2028]: time="2025-02-13T19:48:57.016741019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:57.017881 containerd[2028]: time="2025-02-13T19:48:57.017823623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.018100 containerd[2028]: time="2025-02-13T19:48:57.018047879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:57.018243 containerd[2028]: time="2025-02-13T19:48:57.018207671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.019894 containerd[2028]: time="2025-02-13T19:48:57.018592355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:57.019894 containerd[2028]: time="2025-02-13T19:48:57.019147691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:48:57.021627 containerd[2028]: time="2025-02-13T19:48:57.021572099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:57.021810 containerd[2028]: time="2025-02-13T19:48:57.021771755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:48:57.022263 containerd[2028]: time="2025-02-13T19:48:57.022212215Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:48:57.024168 containerd[2028]: time="2025-02-13T19:48:57.023124611Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:48:57.031564 containerd[2028]: time="2025-02-13T19:48:57.031475291Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:48:57.033766 containerd[2028]: time="2025-02-13T19:48:57.033171755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:48:57.033766 containerd[2028]: time="2025-02-13T19:48:57.033352115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:48:57.033766 containerd[2028]: time="2025-02-13T19:48:57.033422279Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:48:57.033766 containerd[2028]: time="2025-02-13T19:48:57.033512579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:48:57.034369 containerd[2028]: time="2025-02-13T19:48:57.034281323Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.040964051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041312483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041378627Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041423915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041471315Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041521679Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041566571Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041626943Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041682455Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041738435Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041780483Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041821835Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041882483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.043319 containerd[2028]: time="2025-02-13T19:48:57.041953127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.044057 containerd[2028]: time="2025-02-13T19:48:57.041997839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.044057 containerd[2028]: time="2025-02-13T19:48:57.042052619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.044057 containerd[2028]: time="2025-02-13T19:48:57.042111239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.044057 containerd[2028]: time="2025-02-13T19:48:57.042160931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.044057 containerd[2028]: time="2025-02-13T19:48:57.042213827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.042270671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.047748467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.047881199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.047931899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.047985119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048042755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048103763Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048189119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048235043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048279395Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048462131Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048516635Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:48:57.050402 containerd[2028]: time="2025-02-13T19:48:57.048553571Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:48:57.051043 containerd[2028]: time="2025-02-13T19:48:57.048593543Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:48:57.051043 containerd[2028]: time="2025-02-13T19:48:57.048627839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:48:57.051043 containerd[2028]: time="2025-02-13T19:48:57.048665339Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:48:57.051043 containerd[2028]: time="2025-02-13T19:48:57.048700751Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:48:57.051043 containerd[2028]: time="2025-02-13T19:48:57.048735959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:57.054144 containerd[2028]: time="2025-02-13T19:48:57.053831339Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:48:57.057344 containerd[2028]: time="2025-02-13T19:48:57.056127659Z" level=info msg="Connect containerd service" Feb 13 19:48:57.057344 containerd[2028]: time="2025-02-13T19:48:57.056334011Z" level=info msg="using legacy CRI server" Feb 13 19:48:57.057344 containerd[2028]: time="2025-02-13T19:48:57.056359655Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:48:57.057344 containerd[2028]: time="2025-02-13T19:48:57.056534891Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:48:57.060185 containerd[2028]: time="2025-02-13T19:48:57.060093407Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:48:57.061326 containerd[2028]: time="2025-02-13T19:48:57.061205459Z" level=info msg="Start subscribing containerd event" Feb 13 19:48:57.061446 containerd[2028]: time="2025-02-13T19:48:57.061340471Z" level=info msg="Start recovering state" Feb 13 19:48:57.061501 containerd[2028]: time="2025-02-13T19:48:57.061478159Z" level=info msg="Start event monitor" Feb 13 19:48:57.061550 containerd[2028]: time="2025-02-13T19:48:57.061510391Z" level=info msg="Start 
snapshots syncer" Feb 13 19:48:57.061550 containerd[2028]: time="2025-02-13T19:48:57.061534295Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:48:57.061643 containerd[2028]: time="2025-02-13T19:48:57.061553903Z" level=info msg="Start streaming server" Feb 13 19:48:57.063232 containerd[2028]: time="2025-02-13T19:48:57.062123003Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:48:57.063485 containerd[2028]: time="2025-02-13T19:48:57.062307587Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:48:57.064069 containerd[2028]: time="2025-02-13T19:48:57.064026083Z" level=info msg="containerd successfully booted in 0.264847s" Feb 13 19:48:57.064163 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:48:57.121506 systemd-networkd[1932]: eth0: Gained IPv6LL Feb 13 19:48:57.133494 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:48:57.139716 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:48:57.149995 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:48:57.163860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:57.176913 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:48:57.308386 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:48:57.357493 amazon-ssm-agent[2200]: Initializing new seelog logger Feb 13 19:48:57.357493 amazon-ssm-agent[2200]: New Seelog Logger Creation Complete Feb 13 19:48:57.357493 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.357493 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.363582 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.368173 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO Proxy environment variables: Feb 13 19:48:57.372957 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.372957 amazon-ssm-agent[2200]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:48:57.373122 amazon-ssm-agent[2200]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.469176 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO https_proxy: Feb 13 19:48:57.571337 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO http_proxy: Feb 13 19:48:57.672410 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO no_proxy: Feb 13 19:48:57.769796 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:48:57.869680 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:48:57.929788 tar[2008]: linux-arm64/LICENSE Feb 13 19:48:57.930438 tar[2008]: linux-arm64/README.md Feb 13 19:48:57.968180 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO Agent will take identity from EC2 Feb 13 19:48:57.980186 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:48:58.067531 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.167320 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.268365 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.367042 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [Registrar] Starting registrar module Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:57 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:58 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:58 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:58 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:48:58.370445 amazon-ssm-agent[2200]: 2025-02-13 19:48:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:48:58.467251 amazon-ssm-agent[2200]: 2025-02-13 19:48:58 INFO [CredentialRefresher] Next credential rotation will be in 30.583323376033334 minutes Feb 13 19:48:58.515210 sshd_keygen[2040]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:48:58.568519 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:48:58.582108 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:48:58.593188 systemd[1]: Started sshd@0-172.31.28.108:22-139.178.89.65:47556.service - OpenSSH per-connection server daemon (139.178.89.65:47556). Feb 13 19:48:58.634796 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:48:58.637551 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:48:58.652989 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:48:58.694151 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:48:58.710371 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:48:58.724221 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Feb 13 19:48:58.728843 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:48:58.902528 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 47556 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:58.906933 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:58.946183 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:48:58.954858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:48:58.960821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:58.976529 systemd-logind[2004]: New session 1 of user core. Feb 13 19:48:58.980515 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:48:58.996167 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:58.999384 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:48:59.012991 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:48:59.036024 (systemd)[2247]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:48:59.272019 systemd[2247]: Queued start job for default target default.target. Feb 13 19:48:59.280528 systemd[2247]: Created slice app.slice - User Application Slice. Feb 13 19:48:59.280596 systemd[2247]: Reached target paths.target - Paths. Feb 13 19:48:59.280629 systemd[2247]: Reached target timers.target - Timers. Feb 13 19:48:59.284560 systemd[2247]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:48:59.333273 systemd[2247]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:48:59.334932 systemd[2247]: Reached target sockets.target - Sockets. Feb 13 19:48:59.334989 systemd[2247]: Reached target basic.target - Basic System. 
Feb 13 19:48:59.335103 systemd[2247]: Reached target default.target - Main User Target. Feb 13 19:48:59.335169 systemd[2247]: Startup finished in 286ms. Feb 13 19:48:59.335385 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:48:59.344709 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:48:59.349361 systemd[1]: Startup finished in 1.290s (kernel) + 9.743s (initrd) + 9.849s (userspace) = 20.884s. Feb 13 19:48:59.409561 amazon-ssm-agent[2200]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:48:59.510861 amazon-ssm-agent[2200]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2265) started Feb 13 19:48:59.533137 systemd[1]: Started sshd@1-172.31.28.108:22-139.178.89.65:39112.service - OpenSSH per-connection server daemon (139.178.89.65:39112). Feb 13 19:48:59.610724 amazon-ssm-agent[2200]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:48:59.728780 ntpd[1997]: Listen normally on 7 eth0 [fe80::470:b2ff:fef7:d877%2]:123 Feb 13 19:48:59.731700 ntpd[1997]: 13 Feb 19:48:59 ntpd[1997]: Listen normally on 7 eth0 [fe80::470:b2ff:fef7:d877%2]:123 Feb 13 19:48:59.755504 sshd[2271]: Accepted publickey for core from 139.178.89.65 port 39112 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:59.760070 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:59.774417 systemd-logind[2004]: New session 2 of user core. Feb 13 19:48:59.785635 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:48:59.919752 sshd[2271]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:59.929545 systemd[1]: sshd@1-172.31.28.108:22-139.178.89.65:39112.service: Deactivated successfully. 
Feb 13 19:48:59.935781 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:48:59.939445 systemd-logind[2004]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:48:59.965107 systemd[1]: Started sshd@2-172.31.28.108:22-139.178.89.65:39128.service - OpenSSH per-connection server daemon (139.178.89.65:39128). Feb 13 19:48:59.968482 systemd-logind[2004]: Removed session 2. Feb 13 19:49:00.063677 kubelet[2244]: E0213 19:49:00.063386 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:00.068690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:00.069029 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:00.069871 systemd[1]: kubelet.service: Consumed 1.345s CPU time. Feb 13 19:49:00.157846 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 39128 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.160570 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.169407 systemd-logind[2004]: New session 3 of user core. Feb 13 19:49:00.180694 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:49:00.305872 sshd[2284]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:00.313682 systemd[1]: sshd@2-172.31.28.108:22-139.178.89.65:39128.service: Deactivated successfully. Feb 13 19:49:00.317784 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:49:00.319631 systemd-logind[2004]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:49:00.327466 systemd-logind[2004]: Removed session 3. 
Feb 13 19:49:00.352050 systemd[1]: Started sshd@3-172.31.28.108:22-139.178.89.65:39132.service - OpenSSH per-connection server daemon (139.178.89.65:39132). Feb 13 19:49:00.544591 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 39132 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.547634 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.558419 systemd-logind[2004]: New session 4 of user core. Feb 13 19:49:00.576730 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:49:00.711660 sshd[2292]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:00.717529 systemd[1]: sshd@3-172.31.28.108:22-139.178.89.65:39132.service: Deactivated successfully. Feb 13 19:49:00.721016 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:49:00.725042 systemd-logind[2004]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:49:00.727937 systemd-logind[2004]: Removed session 4. Feb 13 19:49:00.756056 systemd[1]: Started sshd@4-172.31.28.108:22-139.178.89.65:39138.service - OpenSSH per-connection server daemon (139.178.89.65:39138). Feb 13 19:49:00.943480 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 39138 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.946095 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.955503 systemd-logind[2004]: New session 5 of user core. Feb 13 19:49:00.961668 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:49:01.089144 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:49:01.090371 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:01.112882 sudo[2302]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:01.137259 sshd[2299]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:01.144925 systemd[1]: sshd@4-172.31.28.108:22-139.178.89.65:39138.service: Deactivated successfully. Feb 13 19:49:01.148773 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:01.152045 systemd-logind[2004]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:01.155471 systemd-logind[2004]: Removed session 5. Feb 13 19:49:01.180014 systemd[1]: Started sshd@5-172.31.28.108:22-139.178.89.65:39144.service - OpenSSH per-connection server daemon (139.178.89.65:39144). Feb 13 19:49:01.367508 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 39144 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:01.370853 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:01.383400 systemd-logind[2004]: New session 6 of user core. Feb 13 19:49:01.389902 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:49:01.508013 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:49:01.508964 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:01.517930 sudo[2311]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:01.532357 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:49:01.533142 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:01.557983 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:01.572428 auditctl[2314]: No rules Feb 13 19:49:01.573278 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:49:01.573742 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:01.583858 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:01.644314 augenrules[2332]: No rules Feb 13 19:49:01.647476 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:01.649743 sudo[2310]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:01.674646 sshd[2307]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:01.680008 systemd[1]: sshd@5-172.31.28.108:22-139.178.89.65:39144.service: Deactivated successfully. Feb 13 19:49:01.683659 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:49:01.686191 systemd-logind[2004]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:49:01.688226 systemd-logind[2004]: Removed session 6. Feb 13 19:49:01.716812 systemd[1]: Started sshd@6-172.31.28.108:22-139.178.89.65:39152.service - OpenSSH per-connection server daemon (139.178.89.65:39152). 
Feb 13 19:49:01.900946 sshd[2340]: Accepted publickey for core from 139.178.89.65 port 39152 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:01.903812 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:01.911818 systemd-logind[2004]: New session 7 of user core. Feb 13 19:49:01.922576 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:49:02.034276 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:49:02.035249 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:02.533922 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:49:02.537784 (dockerd)[2358]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:49:02.863067 systemd-resolved[1935]: Clock change detected. Flushing caches. Feb 13 19:49:03.072475 dockerd[2358]: time="2025-02-13T19:49:03.072361397Z" level=info msg="Starting up" Feb 13 19:49:03.397519 dockerd[2358]: time="2025-02-13T19:49:03.396923922Z" level=info msg="Loading containers: start." Feb 13 19:49:03.575273 kernel: Initializing XFRM netlink socket Feb 13 19:49:03.613075 (udev-worker)[2381]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:49:03.729165 systemd-networkd[1932]: docker0: Link UP Feb 13 19:49:03.765200 dockerd[2358]: time="2025-02-13T19:49:03.765028724Z" level=info msg="Loading containers: done." Feb 13 19:49:03.792727 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1135905271-merged.mount: Deactivated successfully. 
Feb 13 19:49:03.795663 dockerd[2358]: time="2025-02-13T19:49:03.795247004Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:49:03.795663 dockerd[2358]: time="2025-02-13T19:49:03.795391388Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:49:03.795663 dockerd[2358]: time="2025-02-13T19:49:03.795627920Z" level=info msg="Daemon has completed initialization" Feb 13 19:49:03.855145 dockerd[2358]: time="2025-02-13T19:49:03.855070352Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:49:03.855781 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:49:04.959312 containerd[2028]: time="2025-02-13T19:49:04.959203918Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:49:05.631524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655582136.mount: Deactivated successfully. 
Feb 13 19:49:07.102125 containerd[2028]: time="2025-02-13T19:49:07.102054525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.105141 containerd[2028]: time="2025-02-13T19:49:07.105072057Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 19:49:07.106211 containerd[2028]: time="2025-02-13T19:49:07.106109973Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.113998 containerd[2028]: time="2025-02-13T19:49:07.113909481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.122010 containerd[2028]: time="2025-02-13T19:49:07.121875669Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.162556191s" Feb 13 19:49:07.123753 containerd[2028]: time="2025-02-13T19:49:07.123514701Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:49:07.124718 containerd[2028]: time="2025-02-13T19:49:07.124585413Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:49:08.567283 containerd[2028]: time="2025-02-13T19:49:08.567209328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.570946 containerd[2028]: time="2025-02-13T19:49:08.570822972Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 19:49:08.574798 containerd[2028]: time="2025-02-13T19:49:08.574683708Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.589106 containerd[2028]: time="2025-02-13T19:49:08.588910884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.591543 containerd[2028]: time="2025-02-13T19:49:08.591469716Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.466823031s" Feb 13 19:49:08.592006 containerd[2028]: time="2025-02-13T19:49:08.591791964Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:49:08.593284 containerd[2028]: time="2025-02-13T19:49:08.592914228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:49:09.849030 containerd[2028]: time="2025-02-13T19:49:09.847817234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.851125 containerd[2028]: time="2025-02-13T19:49:09.851042918Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 19:49:09.852508 containerd[2028]: time="2025-02-13T19:49:09.852459998Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.859471 containerd[2028]: time="2025-02-13T19:49:09.859362254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.862740 containerd[2028]: time="2025-02-13T19:49:09.862502522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.269010506s" Feb 13 19:49:09.862740 containerd[2028]: time="2025-02-13T19:49:09.862567706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:49:09.864328 containerd[2028]: time="2025-02-13T19:49:09.863711246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:49:10.205592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:10.214668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:10.854500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:49:10.864778 (kubelet)[2573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:11.000006 kubelet[2573]: E0213 19:49:10.998354 2573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:11.007600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:11.008365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:11.456869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855619425.mount: Deactivated successfully. Feb 13 19:49:12.072261 containerd[2028]: time="2025-02-13T19:49:12.071921293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.073933 containerd[2028]: time="2025-02-13T19:49:12.073847905Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 19:49:12.075441 containerd[2028]: time="2025-02-13T19:49:12.075341509Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.079910 containerd[2028]: time="2025-02-13T19:49:12.079772173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.082013 containerd[2028]: time="2025-02-13T19:49:12.081447517Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id 
\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.217667307s" Feb 13 19:49:12.082013 containerd[2028]: time="2025-02-13T19:49:12.081523561Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:49:12.083373 containerd[2028]: time="2025-02-13T19:49:12.083295721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:49:12.705874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404085227.mount: Deactivated successfully. Feb 13 19:49:13.974550 containerd[2028]: time="2025-02-13T19:49:13.974477635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.976935 containerd[2028]: time="2025-02-13T19:49:13.976854079Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:49:13.978070 containerd[2028]: time="2025-02-13T19:49:13.977877667Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.984720 containerd[2028]: time="2025-02-13T19:49:13.984613495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.987549 containerd[2028]: time="2025-02-13T19:49:13.987459487Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.904082682s" Feb 13 19:49:13.988055 containerd[2028]: time="2025-02-13T19:49:13.987802579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:49:13.988800 containerd[2028]: time="2025-02-13T19:49:13.988531723Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:49:14.514552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924269568.mount: Deactivated successfully. Feb 13 19:49:14.524812 containerd[2028]: time="2025-02-13T19:49:14.524713505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.526627 containerd[2028]: time="2025-02-13T19:49:14.526434653Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:49:14.528332 containerd[2028]: time="2025-02-13T19:49:14.528251813Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.541947 containerd[2028]: time="2025-02-13T19:49:14.541846290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.545378 containerd[2028]: time="2025-02-13T19:49:14.544934082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.338831ms" Feb 13 19:49:14.545378 containerd[2028]: time="2025-02-13T19:49:14.545068458Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:49:14.547316 containerd[2028]: time="2025-02-13T19:49:14.547213578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:49:15.141377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685866663.mount: Deactivated successfully. Feb 13 19:49:17.472189 containerd[2028]: time="2025-02-13T19:49:17.471889892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.473707 containerd[2028]: time="2025-02-13T19:49:17.473591276Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 19:49:17.475022 containerd[2028]: time="2025-02-13T19:49:17.474820532Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.484110 containerd[2028]: time="2025-02-13T19:49:17.484030292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.486955 containerd[2028]: time="2025-02-13T19:49:17.486563084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.93925839s" Feb 13 
19:49:17.486955 containerd[2028]: time="2025-02-13T19:49:17.486630800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:49:21.205513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:49:21.217553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:21.566420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:21.577595 (kubelet)[2714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:21.667828 kubelet[2714]: E0213 19:49:21.667762 2714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:21.672827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:21.674437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:26.230901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:26.243711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:26.324074 systemd[1]: Reloading requested from client PID 2728 ('systemctl') (unit session-7.scope)... Feb 13 19:49:26.324127 systemd[1]: Reloading... Feb 13 19:49:26.670007 zram_generator::config[2775]: No configuration found. Feb 13 19:49:26.924681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:49:27.099402 systemd[1]: Reloading finished in 774 ms. Feb 13 19:49:27.166669 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:49:27.226476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:27.231482 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:27.239599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:27.240929 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:27.241434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:27.253534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:27.571571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:27.594541 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:27.675553 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:27.675553 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:27.675553 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:49:27.676235 kubelet[2838]: I0213 19:49:27.675686 2838 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:29.048336 kubelet[2838]: I0213 19:49:29.048232 2838 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:49:29.050005 kubelet[2838]: I0213 19:49:29.049297 2838 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:29.050005 kubelet[2838]: I0213 19:49:29.049796 2838 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:49:29.096885 kubelet[2838]: I0213 19:49:29.096789 2838 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:29.097469 kubelet[2838]: E0213 19:49:29.096796 2838 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.110501 kubelet[2838]: E0213 19:49:29.110393 2838 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:29.111270 kubelet[2838]: I0213 19:49:29.110922 2838 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:29.122039 kubelet[2838]: I0213 19:49:29.120894 2838 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:49:29.122039 kubelet[2838]: I0213 19:49:29.121425 2838 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:49:29.122039 kubelet[2838]: I0213 19:49:29.121842 2838 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:29.123125 kubelet[2838]: I0213 19:49:29.121907 2838 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:29.123517 kubelet[2838]: I0213 19:49:29.123477 2838 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:29.123715 kubelet[2838]: I0213 19:49:29.123685 2838 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:49:29.124189 kubelet[2838]: I0213 19:49:29.124141 2838 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:29.130499 kubelet[2838]: I0213 19:49:29.130410 2838 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:49:29.130764 kubelet[2838]: I0213 19:49:29.130527 2838 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:29.130764 kubelet[2838]: I0213 19:49:29.130618 2838 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:49:29.130764 kubelet[2838]: I0213 19:49:29.130649 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:29.133013 kubelet[2838]: W0213 19:49:29.132426 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-108&limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:29.133013 kubelet[2838]: E0213 19:49:29.132603 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-108&limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.134936 kubelet[2838]: W0213 19:49:29.134231 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused 
Feb 13 19:49:29.134936 kubelet[2838]: E0213 19:49:29.134384 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.134936 kubelet[2838]: I0213 19:49:29.134595 2838 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:29.137811 kubelet[2838]: I0213 19:49:29.137760 2838 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:29.139943 kubelet[2838]: W0213 19:49:29.139391 2838 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:49:29.143818 kubelet[2838]: I0213 19:49:29.143432 2838 server.go:1269] "Started kubelet" Feb 13 19:49:29.147016 kubelet[2838]: I0213 19:49:29.146016 2838 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:29.148493 kubelet[2838]: I0213 19:49:29.148416 2838 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:49:29.151078 kubelet[2838]: I0213 19:49:29.150921 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:29.151733 kubelet[2838]: I0213 19:49:29.151699 2838 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:29.154164 kubelet[2838]: E0213 19:49:29.152166 2838 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.108:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-108.1823dc5c150c7272 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-108,UID:ip-172-31-28-108,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-108,},FirstTimestamp:2025-02-13 19:49:29.143382642 +0000 UTC m=+1.541902821,LastTimestamp:2025-02-13 19:49:29.143382642 +0000 UTC m=+1.541902821,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-108,}" Feb 13 19:49:29.155703 kubelet[2838]: I0213 19:49:29.155630 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:29.164833 kubelet[2838]: I0213 19:49:29.162802 2838 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:49:29.164833 kubelet[2838]: I0213 19:49:29.156594 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:29.164833 kubelet[2838]: I0213 19:49:29.163365 2838 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:49:29.164833 kubelet[2838]: I0213 19:49:29.163536 2838 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:29.167480 kubelet[2838]: E0213 19:49:29.167385 2838 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-108\" not found" Feb 13 19:49:29.169602 kubelet[2838]: W0213 19:49:29.169148 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:29.169602 kubelet[2838]: E0213 19:49:29.169302 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.28.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.169602 kubelet[2838]: E0213 19:49:29.169467 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-108?timeout=10s\": dial tcp 172.31.28.108:6443: connect: connection refused" interval="200ms" Feb 13 19:49:29.173628 kubelet[2838]: E0213 19:49:29.171855 2838 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:29.173628 kubelet[2838]: I0213 19:49:29.172208 2838 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:29.173628 kubelet[2838]: I0213 19:49:29.172392 2838 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:29.177056 kubelet[2838]: I0213 19:49:29.176408 2838 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:29.210320 kubelet[2838]: I0213 19:49:29.210249 2838 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:29.217259 kubelet[2838]: I0213 19:49:29.217168 2838 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:49:29.217259 kubelet[2838]: I0213 19:49:29.217238 2838 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:29.217552 kubelet[2838]: I0213 19:49:29.217283 2838 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:49:29.217552 kubelet[2838]: E0213 19:49:29.217401 2838 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:29.219476 kubelet[2838]: W0213 19:49:29.219363 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:29.220595 kubelet[2838]: E0213 19:49:29.219499 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.232018 kubelet[2838]: I0213 19:49:29.231735 2838 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:29.232018 kubelet[2838]: I0213 19:49:29.231771 2838 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:29.232018 kubelet[2838]: I0213 19:49:29.231809 2838 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:29.239902 kubelet[2838]: I0213 19:49:29.239538 2838 policy_none.go:49] "None policy: Start" Feb 13 19:49:29.242849 kubelet[2838]: I0213 19:49:29.242796 2838 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:29.243233 kubelet[2838]: I0213 19:49:29.242952 2838 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:29.254811 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Feb 13 19:49:29.268566 kubelet[2838]: E0213 19:49:29.268465 2838 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-108\" not found" Feb 13 19:49:29.278377 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:49:29.285900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:49:29.295873 kubelet[2838]: I0213 19:49:29.295815 2838 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:29.296978 kubelet[2838]: I0213 19:49:29.296926 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:29.297706 kubelet[2838]: I0213 19:49:29.297340 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:29.298327 kubelet[2838]: I0213 19:49:29.298255 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:29.303844 kubelet[2838]: E0213 19:49:29.301914 2838 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-108\" not found" Feb 13 19:49:29.343718 systemd[1]: Created slice kubepods-burstable-pod9c40aedb75d6c59ab456c5b53cece31c.slice - libcontainer container kubepods-burstable-pod9c40aedb75d6c59ab456c5b53cece31c.slice. Feb 13 19:49:29.371367 systemd[1]: Created slice kubepods-burstable-pod11d0346b55ab7f839d0c564a5d9b26cd.slice - libcontainer container kubepods-burstable-pod11d0346b55ab7f839d0c564a5d9b26cd.slice. 
Feb 13 19:49:29.372088 kubelet[2838]: E0213 19:49:29.371718 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-108?timeout=10s\": dial tcp 172.31.28.108:6443: connect: connection refused" interval="400ms" Feb 13 19:49:29.381164 systemd[1]: Created slice kubepods-burstable-podb3fa687317e1ecce03636daa1e343ee4.slice - libcontainer container kubepods-burstable-podb3fa687317e1ecce03636daa1e343ee4.slice. Feb 13 19:49:29.401031 kubelet[2838]: I0213 19:49:29.400863 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:29.401754 kubelet[2838]: E0213 19:49:29.401665 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.108:6443/api/v1/nodes\": dial tcp 172.31.28.108:6443: connect: connection refused" node="ip-172-31-28-108" Feb 13 19:49:29.465358 kubelet[2838]: I0213 19:49:29.465274 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:29.465358 kubelet[2838]: I0213 19:49:29.465372 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:29.465687 kubelet[2838]: I0213 19:49:29.465454 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:29.465687 kubelet[2838]: I0213 19:49:29.465496 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:29.465687 kubelet[2838]: I0213 19:49:29.465536 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:29.465687 kubelet[2838]: I0213 19:49:29.465575 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3fa687317e1ecce03636daa1e343ee4-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-108\" (UID: \"b3fa687317e1ecce03636daa1e343ee4\") " pod="kube-system/kube-scheduler-ip-172-31-28-108" Feb 13 19:49:29.465687 kubelet[2838]: I0213 19:49:29.465613 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:29.466021 kubelet[2838]: I0213 19:49:29.465652 2838 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:29.466021 kubelet[2838]: I0213 19:49:29.465693 2838 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:29.606482 kubelet[2838]: I0213 19:49:29.604616 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:29.606482 kubelet[2838]: E0213 19:49:29.605543 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.108:6443/api/v1/nodes\": dial tcp 172.31.28.108:6443: connect: connection refused" node="ip-172-31-28-108" Feb 13 19:49:29.661909 containerd[2028]: time="2025-02-13T19:49:29.661830369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-108,Uid:9c40aedb75d6c59ab456c5b53cece31c,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.680223 containerd[2028]: time="2025-02-13T19:49:29.680165817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-108,Uid:11d0346b55ab7f839d0c564a5d9b26cd,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.687154 containerd[2028]: time="2025-02-13T19:49:29.687085293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-108,Uid:b3fa687317e1ecce03636daa1e343ee4,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.772715 kubelet[2838]: E0213 19:49:29.772629 2838 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.31.28.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-108?timeout=10s\": dial tcp 172.31.28.108:6443: connect: connection refused" interval="800ms" Feb 13 19:49:30.009200 kubelet[2838]: I0213 19:49:30.008944 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:30.010699 kubelet[2838]: E0213 19:49:30.010505 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.108:6443/api/v1/nodes\": dial tcp 172.31.28.108:6443: connect: connection refused" node="ip-172-31-28-108" Feb 13 19:49:30.115338 kubelet[2838]: W0213 19:49:30.115235 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-108&limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:30.116008 kubelet[2838]: E0213 19:49:30.115926 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-108&limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.171207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount537989679.mount: Deactivated successfully. 
Feb 13 19:49:30.180038 containerd[2028]: time="2025-02-13T19:49:30.179739535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.183747 containerd[2028]: time="2025-02-13T19:49:30.183638995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:49:30.185441 containerd[2028]: time="2025-02-13T19:49:30.185305327Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.187066 containerd[2028]: time="2025-02-13T19:49:30.186929515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:30.187422 kubelet[2838]: W0213 19:49:30.187205 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:30.187422 kubelet[2838]: E0213 19:49:30.187307 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.188537 containerd[2028]: time="2025-02-13T19:49:30.188341219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.190244 containerd[2028]: time="2025-02-13T19:49:30.190144783Z" level=info msg="ImageCreate event 
name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.191029 containerd[2028]: time="2025-02-13T19:49:30.190642243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:30.199668 containerd[2028]: time="2025-02-13T19:49:30.199583683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.205813 containerd[2028]: time="2025-02-13T19:49:30.205026403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.71733ms" Feb 13 19:49:30.210331 containerd[2028]: time="2025-02-13T19:49:30.210255103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.30003ms" Feb 13 19:49:30.210683 containerd[2028]: time="2025-02-13T19:49:30.210581323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 523.385462ms" Feb 13 19:49:30.385754 containerd[2028]: 
time="2025-02-13T19:49:30.385378676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.385754 containerd[2028]: time="2025-02-13T19:49:30.385486532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.385754 containerd[2028]: time="2025-02-13T19:49:30.385551464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.386362 containerd[2028]: time="2025-02-13T19:49:30.385814276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.387559 containerd[2028]: time="2025-02-13T19:49:30.381516896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.387559 containerd[2028]: time="2025-02-13T19:49:30.386163812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.387559 containerd[2028]: time="2025-02-13T19:49:30.386198240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.387559 containerd[2028]: time="2025-02-13T19:49:30.386392604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.394065 containerd[2028]: time="2025-02-13T19:49:30.393427568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.394451 containerd[2028]: time="2025-02-13T19:49:30.393634688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.394451 containerd[2028]: time="2025-02-13T19:49:30.393735704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.394451 containerd[2028]: time="2025-02-13T19:49:30.393909536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.397341 kubelet[2838]: W0213 19:49:30.397168 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:30.397581 kubelet[2838]: E0213 19:49:30.397379 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.456486 systemd[1]: Started cri-containerd-2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc.scope - libcontainer container 2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc. Feb 13 19:49:30.483257 systemd[1]: Started cri-containerd-c59462bb0dd292d1d44494901bcf45dbad22bd7c1ae088156a9ec02641234f97.scope - libcontainer container c59462bb0dd292d1d44494901bcf45dbad22bd7c1ae088156a9ec02641234f97. Feb 13 19:49:30.491892 systemd[1]: Started cri-containerd-d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c.scope - libcontainer container d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c. 
Feb 13 19:49:30.573808 kubelet[2838]: E0213 19:49:30.573712 2838 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-108?timeout=10s\": dial tcp 172.31.28.108:6443: connect: connection refused" interval="1.6s" Feb 13 19:49:30.594914 containerd[2028]: time="2025-02-13T19:49:30.594433077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-108,Uid:11d0346b55ab7f839d0c564a5d9b26cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc\"" Feb 13 19:49:30.607590 containerd[2028]: time="2025-02-13T19:49:30.607537449Z" level=info msg="CreateContainer within sandbox \"2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:49:30.618813 containerd[2028]: time="2025-02-13T19:49:30.618744249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-108,Uid:9c40aedb75d6c59ab456c5b53cece31c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c59462bb0dd292d1d44494901bcf45dbad22bd7c1ae088156a9ec02641234f97\"" Feb 13 19:49:30.627375 containerd[2028]: time="2025-02-13T19:49:30.627140577Z" level=info msg="CreateContainer within sandbox \"c59462bb0dd292d1d44494901bcf45dbad22bd7c1ae088156a9ec02641234f97\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:49:30.652907 containerd[2028]: time="2025-02-13T19:49:30.652330078Z" level=info msg="CreateContainer within sandbox \"2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4\"" Feb 13 19:49:30.654823 containerd[2028]: time="2025-02-13T19:49:30.654675634Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-108,Uid:b3fa687317e1ecce03636daa1e343ee4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c\"" Feb 13 19:49:30.656061 containerd[2028]: time="2025-02-13T19:49:30.655954150Z" level=info msg="StartContainer for \"a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4\"" Feb 13 19:49:30.659827 kubelet[2838]: W0213 19:49:30.659440 2838 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.108:6443: connect: connection refused Feb 13 19:49:30.659827 kubelet[2838]: E0213 19:49:30.659567 2838 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.28.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.666099 containerd[2028]: time="2025-02-13T19:49:30.665791534Z" level=info msg="CreateContainer within sandbox \"d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:49:30.671407 containerd[2028]: time="2025-02-13T19:49:30.670643194Z" level=info msg="CreateContainer within sandbox \"c59462bb0dd292d1d44494901bcf45dbad22bd7c1ae088156a9ec02641234f97\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de12ce45b3175eca253cad5883e3661d3f38a16a4d4544bc6f90a997d44053dc\"" Feb 13 19:49:30.672485 containerd[2028]: time="2025-02-13T19:49:30.672410830Z" level=info msg="StartContainer for \"de12ce45b3175eca253cad5883e3661d3f38a16a4d4544bc6f90a997d44053dc\"" Feb 13 19:49:30.699224 containerd[2028]: 
time="2025-02-13T19:49:30.699003322Z" level=info msg="CreateContainer within sandbox \"d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6\"" Feb 13 19:49:30.701712 containerd[2028]: time="2025-02-13T19:49:30.699799570Z" level=info msg="StartContainer for \"92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6\"" Feb 13 19:49:30.748348 systemd[1]: Started cri-containerd-a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4.scope - libcontainer container a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4. Feb 13 19:49:30.810323 systemd[1]: Started cri-containerd-de12ce45b3175eca253cad5883e3661d3f38a16a4d4544bc6f90a997d44053dc.scope - libcontainer container de12ce45b3175eca253cad5883e3661d3f38a16a4d4544bc6f90a997d44053dc. Feb 13 19:49:30.818501 kubelet[2838]: I0213 19:49:30.818163 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:30.819291 kubelet[2838]: E0213 19:49:30.819208 2838 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.28.108:6443/api/v1/nodes\": dial tcp 172.31.28.108:6443: connect: connection refused" node="ip-172-31-28-108" Feb 13 19:49:30.827796 systemd[1]: Started cri-containerd-92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6.scope - libcontainer container 92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6. 
Feb 13 19:49:30.912214 containerd[2028]: time="2025-02-13T19:49:30.910920347Z" level=info msg="StartContainer for \"a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4\" returns successfully" Feb 13 19:49:30.962140 containerd[2028]: time="2025-02-13T19:49:30.961593743Z" level=info msg="StartContainer for \"de12ce45b3175eca253cad5883e3661d3f38a16a4d4544bc6f90a997d44053dc\" returns successfully" Feb 13 19:49:31.006479 containerd[2028]: time="2025-02-13T19:49:31.006356359Z" level=info msg="StartContainer for \"92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6\" returns successfully" Feb 13 19:49:31.108937 kubelet[2838]: E0213 19:49:31.108842 2838 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.108:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:32.423689 kubelet[2838]: I0213 19:49:32.423639 2838 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:35.040761 kubelet[2838]: E0213 19:49:35.040680 2838 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-108\" not found" node="ip-172-31-28-108" Feb 13 19:49:35.136545 kubelet[2838]: I0213 19:49:35.136475 2838 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-108" Feb 13 19:49:35.140022 kubelet[2838]: I0213 19:49:35.137648 2838 apiserver.go:52] "Watching apiserver" Feb 13 19:49:35.164667 kubelet[2838]: I0213 19:49:35.164519 2838 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:49:37.568255 systemd[1]: Reloading requested from client PID 3110 ('systemctl') (unit session-7.scope)... Feb 13 19:49:37.568288 systemd[1]: Reloading... 
Feb 13 19:49:37.770068 zram_generator::config[3156]: No configuration found. Feb 13 19:49:38.058380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:38.269850 systemd[1]: Reloading finished in 700 ms. Feb 13 19:49:38.381439 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:38.397828 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:38.398290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:38.398383 systemd[1]: kubelet.service: Consumed 2.360s CPU time, 115.6M memory peak, 0B memory swap peak. Feb 13 19:49:38.407801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:38.800783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:38.815619 (kubelet)[3209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:38.960861 kubelet[3209]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:38.961473 kubelet[3209]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:38.961473 kubelet[3209]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:49:38.961473 kubelet[3209]: I0213 19:49:38.961081 3209 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:38.978060 kubelet[3209]: I0213 19:49:38.977483 3209 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:49:38.978060 kubelet[3209]: I0213 19:49:38.977610 3209 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:38.978334 kubelet[3209]: I0213 19:49:38.978278 3209 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:49:38.987665 kubelet[3209]: I0213 19:49:38.987582 3209 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:49:38.998376 kubelet[3209]: I0213 19:49:38.998218 3209 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:39.017584 kubelet[3209]: E0213 19:49:39.015792 3209 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:39.017584 kubelet[3209]: I0213 19:49:39.016055 3209 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:39.025606 kubelet[3209]: I0213 19:49:39.025531 3209 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:49:39.025847 kubelet[3209]: I0213 19:49:39.025783 3209 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:49:39.026181 kubelet[3209]: I0213 19:49:39.026131 3209 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:39.027058 kubelet[3209]: I0213 19:49:39.026187 3209 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-108","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:39.027058 kubelet[3209]: I0213 19:49:39.026693 3209 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:39.027058 kubelet[3209]: I0213 19:49:39.026740 3209 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:49:39.027058 kubelet[3209]: I0213 19:49:39.026880 3209 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:39.030588 kubelet[3209]: I0213 19:49:39.027302 3209 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:49:39.030588 kubelet[3209]: I0213 19:49:39.027356 3209 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:39.030588 kubelet[3209]: I0213 19:49:39.027415 3209 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:49:39.030588 kubelet[3209]: I0213 19:49:39.027438 3209 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:39.032082 kubelet[3209]: I0213 19:49:39.032037 3209 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:39.033653 kubelet[3209]: I0213 19:49:39.033597 3209 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:39.034653 kubelet[3209]: I0213 19:49:39.034612 3209 server.go:1269] "Started kubelet" Feb 13 19:49:39.052914 kubelet[3209]: I0213 19:49:39.052755 3209 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:39.058297 kubelet[3209]: I0213 19:49:39.056505 3209 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:39.065010 kubelet[3209]: I0213 19:49:39.061639 3209 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:49:39.067294 kubelet[3209]: I0213 19:49:39.067177 3209 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:39.067879 kubelet[3209]: I0213 19:49:39.067799 3209 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:39.071945 kubelet[3209]: I0213 19:49:39.071872 3209 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:39.073658 kubelet[3209]: I0213 19:49:39.073591 3209 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:49:39.074183 kubelet[3209]: E0213 19:49:39.074069 3209 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-28-108\" not found" Feb 13 19:49:39.077590 kubelet[3209]: I0213 19:49:39.077526 3209 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:49:39.079148 sudo[3225]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:49:39.080848 sudo[3225]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:49:39.089855 kubelet[3209]: I0213 19:49:39.088680 3209 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:39.102795 kubelet[3209]: I0213 19:49:39.102713 3209 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:39.103050 kubelet[3209]: I0213 19:49:39.103000 3209 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:39.166426 kubelet[3209]: I0213 19:49:39.164926 3209 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:39.216206 kubelet[3209]: I0213 19:49:39.215667 3209 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:39.218328 kubelet[3209]: I0213 19:49:39.218272 3209 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:49:39.218887 kubelet[3209]: I0213 19:49:39.218726 3209 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:39.218887 kubelet[3209]: I0213 19:49:39.218836 3209 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:49:39.219847 kubelet[3209]: E0213 19:49:39.219614 3209 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:39.320148 kubelet[3209]: E0213 19:49:39.319991 3209 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:39.357186 kubelet[3209]: I0213 19:49:39.356666 3209 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:39.357186 kubelet[3209]: I0213 19:49:39.356700 3209 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:39.357186 kubelet[3209]: I0213 19:49:39.356736 3209 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:39.358609 kubelet[3209]: I0213 19:49:39.357664 3209 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:49:39.358609 kubelet[3209]: I0213 19:49:39.357721 3209 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:49:39.358609 kubelet[3209]: I0213 19:49:39.357786 3209 policy_none.go:49] "None policy: Start" Feb 13 19:49:39.361135 kubelet[3209]: I0213 19:49:39.361067 3209 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:39.362015 kubelet[3209]: I0213 19:49:39.361683 3209 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:39.362422 kubelet[3209]: I0213 19:49:39.362378 3209 state_mem.go:75] "Updated machine memory state" Feb 13 19:49:39.373098 kubelet[3209]: I0213 19:49:39.373048 3209 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:39.376362 kubelet[3209]: I0213 19:49:39.376274 
3209 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:39.379088 kubelet[3209]: I0213 19:49:39.376340 3209 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:39.379088 kubelet[3209]: I0213 19:49:39.376871 3209 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:39.508724 kubelet[3209]: I0213 19:49:39.508250 3209 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-28-108" Feb 13 19:49:39.550118 kubelet[3209]: I0213 19:49:39.548049 3209 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-28-108" Feb 13 19:49:39.550118 kubelet[3209]: I0213 19:49:39.548178 3209 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-28-108" Feb 13 19:49:39.554888 kubelet[3209]: E0213 19:49:39.554770 3209 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-108\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:39.559862 kubelet[3209]: E0213 19:49:39.558498 3209 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-28-108\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.597546 kubelet[3209]: I0213 19:49:39.597025 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.599478 kubelet[3209]: I0213 19:49:39.599176 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b3fa687317e1ecce03636daa1e343ee4-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-28-108\" (UID: \"b3fa687317e1ecce03636daa1e343ee4\") " pod="kube-system/kube-scheduler-ip-172-31-28-108" Feb 13 19:49:39.599478 kubelet[3209]: I0213 19:49:39.599290 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:39.599478 kubelet[3209]: I0213 19:49:39.599360 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.600333 kubelet[3209]: I0213 19:49:39.599754 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.601312 kubelet[3209]: I0213 19:49:39.600123 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.603198 kubelet[3209]: I0213 19:49:39.602462 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/11d0346b55ab7f839d0c564a5d9b26cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-108\" (UID: \"11d0346b55ab7f839d0c564a5d9b26cd\") " pod="kube-system/kube-controller-manager-ip-172-31-28-108" Feb 13 19:49:39.603198 kubelet[3209]: I0213 19:49:39.602584 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:39.603198 kubelet[3209]: I0213 19:49:39.603027 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c40aedb75d6c59ab456c5b53cece31c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-108\" (UID: \"9c40aedb75d6c59ab456c5b53cece31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:40.029098 kubelet[3209]: I0213 19:49:40.029005 3209 apiserver.go:52] "Watching apiserver" Feb 13 19:49:40.078047 kubelet[3209]: I0213 19:49:40.077928 3209 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:49:40.144375 sudo[3225]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:40.306714 kubelet[3209]: E0213 19:49:40.306553 3209 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-108\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-108" Feb 13 19:49:40.360124 kubelet[3209]: I0213 19:49:40.359662 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-108" podStartSLOduration=3.359640186 podStartE2EDuration="3.359640186s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:40.340239954 +0000 UTC m=+1.508613045" watchObservedRunningTime="2025-02-13 19:49:40.359640186 +0000 UTC m=+1.528013241" Feb 13 19:49:40.377881 kubelet[3209]: I0213 19:49:40.377785 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-108" podStartSLOduration=3.37776297 podStartE2EDuration="3.37776297s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:40.375956202 +0000 UTC m=+1.544329269" watchObservedRunningTime="2025-02-13 19:49:40.37776297 +0000 UTC m=+1.546136037" Feb 13 19:49:40.378154 kubelet[3209]: I0213 19:49:40.377925 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-108" podStartSLOduration=1.37791423 podStartE2EDuration="1.37791423s" podCreationTimestamp="2025-02-13 19:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:40.360043038 +0000 UTC m=+1.528416129" watchObservedRunningTime="2025-02-13 19:49:40.37791423 +0000 UTC m=+1.546287297" Feb 13 19:49:41.602101 update_engine[2005]: I20250213 19:49:41.602003 2005 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:49:41.772664 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3272) Feb 13 19:49:42.419031 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3263) Feb 13 19:49:42.946049 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3263) Feb 13 19:49:43.428714 kubelet[3209]: I0213 19:49:43.428653 3209 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:49:43.434929 containerd[2028]: time="2025-02-13T19:49:43.433095753Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:49:43.439697 kubelet[3209]: I0213 19:49:43.438522 3209 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:49:43.712549 sudo[2343]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:43.739402 sshd[2340]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:43.748080 systemd[1]: sshd@6-172.31.28.108:22-139.178.89.65:39152.service: Deactivated successfully. Feb 13 19:49:43.755314 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:49:43.755790 systemd[1]: session-7.scope: Consumed 12.771s CPU time, 152.2M memory peak, 0B memory swap peak. Feb 13 19:49:43.760292 systemd-logind[2004]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:43.765812 systemd-logind[2004]: Removed session 7. Feb 13 19:49:44.278033 systemd[1]: Created slice kubepods-besteffort-podcfbebd25_3e83_4926_b20a_90b6e165a7c7.slice - libcontainer container kubepods-besteffort-podcfbebd25_3e83_4926_b20a_90b6e165a7c7.slice. 
Feb 13 19:49:44.278751 kubelet[3209]: W0213 19:49:44.278593 3209 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-28-108" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-108' and this object Feb 13 19:49:44.278751 kubelet[3209]: W0213 19:49:44.278657 3209 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-108" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-108' and this object Feb 13 19:49:44.278751 kubelet[3209]: E0213 19:49:44.278668 3209 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-108' and this object" logger="UnhandledError" Feb 13 19:49:44.278751 kubelet[3209]: E0213 19:49:44.278691 3209 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-28-108\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-108' and this object" logger="UnhandledError" Feb 13 19:49:44.305903 systemd[1]: Created slice kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice - libcontainer container kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice. 
Feb 13 19:49:44.322133 kubelet[3209]: W0213 19:49:44.322070    3209 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-108" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-108' and this object
Feb 13 19:49:44.322423 kubelet[3209]: E0213 19:49:44.322142    3209 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-28-108\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-108' and this object" logger="UnhandledError"
Feb 13 19:49:44.374321 kubelet[3209]: I0213 19:49:44.374144    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cfbebd25-3e83-4926-b20a-90b6e165a7c7-kube-proxy\") pod \"kube-proxy-bxfmf\" (UID: \"cfbebd25-3e83-4926-b20a-90b6e165a7c7\") " pod="kube-system/kube-proxy-bxfmf"
Feb 13 19:49:44.374321 kubelet[3209]: I0213 19:49:44.374330    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh82m\" (UniqueName: \"kubernetes.io/projected/cfbebd25-3e83-4926-b20a-90b6e165a7c7-kube-api-access-bh82m\") pod \"kube-proxy-bxfmf\" (UID: \"cfbebd25-3e83-4926-b20a-90b6e165a7c7\") " pod="kube-system/kube-proxy-bxfmf"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374407    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-bpf-maps\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374464    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfbebd25-3e83-4926-b20a-90b6e165a7c7-xtables-lock\") pod \"kube-proxy-bxfmf\" (UID: \"cfbebd25-3e83-4926-b20a-90b6e165a7c7\") " pod="kube-system/kube-proxy-bxfmf"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374549    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-hostproc\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374593    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-cgroup\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374632    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cni-path\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.374730 kubelet[3209]: I0213 19:49:44.374672    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-kernel\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374716    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-run\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374756    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-lib-modules\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374803    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-net\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374857    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374899    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-xtables-lock\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375235 kubelet[3209]: I0213 19:49:44.374941    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-config-path\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375569 kubelet[3209]: I0213 19:49:44.375079    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfbebd25-3e83-4926-b20a-90b6e165a7c7-lib-modules\") pod \"kube-proxy-bxfmf\" (UID: \"cfbebd25-3e83-4926-b20a-90b6e165a7c7\") " pod="kube-system/kube-proxy-bxfmf"
Feb 13 19:49:44.375569 kubelet[3209]: I0213 19:49:44.375135    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff1006aa-f8f4-4883-9211-12c65eb2121c-clustermesh-secrets\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375569 kubelet[3209]: I0213 19:49:44.375187    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4vvw\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-kube-api-access-d4vvw\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.375569 kubelet[3209]: I0213 19:49:44.375231    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-etc-cni-netd\") pod \"cilium-r29nh\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") " pod="kube-system/cilium-r29nh"
Feb 13 19:49:44.424411 systemd[1]: Created slice kubepods-besteffort-pod7f2b075f_deae_4167_9b8f_ab09703863cf.slice - libcontainer container kubepods-besteffort-pod7f2b075f_deae_4167_9b8f_ab09703863cf.slice.
Feb 13 19:49:44.479301 kubelet[3209]: I0213 19:49:44.475539    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2b075f-deae-4167-9b8f-ab09703863cf-cilium-config-path\") pod \"cilium-operator-5d85765b45-hjzlw\" (UID: \"7f2b075f-deae-4167-9b8f-ab09703863cf\") " pod="kube-system/cilium-operator-5d85765b45-hjzlw"
Feb 13 19:49:44.479301 kubelet[3209]: I0213 19:49:44.475688    3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkg7\" (UniqueName: \"kubernetes.io/projected/7f2b075f-deae-4167-9b8f-ab09703863cf-kube-api-access-2mkg7\") pod \"cilium-operator-5d85765b45-hjzlw\" (UID: \"7f2b075f-deae-4167-9b8f-ab09703863cf\") " pod="kube-system/cilium-operator-5d85765b45-hjzlw"
Feb 13 19:49:45.477159 kubelet[3209]: E0213 19:49:45.477072    3209 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:49:45.477515 kubelet[3209]: E0213 19:49:45.477090    3209 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 19:49:45.477515 kubelet[3209]: E0213 19:49:45.477228    3209 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfbebd25-3e83-4926-b20a-90b6e165a7c7-kube-proxy podName:cfbebd25-3e83-4926-b20a-90b6e165a7c7 nodeName:}" failed. No retries permitted until 2025-02-13 19:49:45.977196727 +0000 UTC m=+7.145569782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/cfbebd25-3e83-4926-b20a-90b6e165a7c7-kube-proxy") pod "kube-proxy-bxfmf" (UID: "cfbebd25-3e83-4926-b20a-90b6e165a7c7") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:49:45.477515 kubelet[3209]: E0213 19:49:45.477241    3209 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-r29nh: failed to sync secret cache: timed out waiting for the condition
Feb 13 19:49:45.477515 kubelet[3209]: E0213 19:49:45.477303    3209 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls podName:ff1006aa-f8f4-4883-9211-12c65eb2121c nodeName:}" failed. No retries permitted until 2025-02-13 19:49:45.977286019 +0000 UTC m=+7.145659062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls") pod "cilium-r29nh" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c") : failed to sync secret cache: timed out waiting for the condition
Feb 13 19:49:45.637936 containerd[2028]: time="2025-02-13T19:49:45.637819320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hjzlw,Uid:7f2b075f-deae-4167-9b8f-ab09703863cf,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:45.693849 containerd[2028]: time="2025-02-13T19:49:45.693052104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:45.693849 containerd[2028]: time="2025-02-13T19:49:45.693231420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:45.693849 containerd[2028]: time="2025-02-13T19:49:45.693281364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:45.693849 containerd[2028]: time="2025-02-13T19:49:45.693559164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:45.732301 systemd[1]: Started cri-containerd-69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd.scope - libcontainer container 69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd.
Feb 13 19:49:45.813136 containerd[2028]: time="2025-02-13T19:49:45.813052525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hjzlw,Uid:7f2b075f-deae-4167-9b8f-ab09703863cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\""
Feb 13 19:49:45.820167 containerd[2028]: time="2025-02-13T19:49:45.819548365Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 19:49:46.092047 containerd[2028]: time="2025-02-13T19:49:46.091649674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxfmf,Uid:cfbebd25-3e83-4926-b20a-90b6e165a7c7,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:46.115139 containerd[2028]: time="2025-02-13T19:49:46.115078258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r29nh,Uid:ff1006aa-f8f4-4883-9211-12c65eb2121c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:49:46.164882 containerd[2028]: time="2025-02-13T19:49:46.164189495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:46.164882 containerd[2028]: time="2025-02-13T19:49:46.164302331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:46.164882 containerd[2028]: time="2025-02-13T19:49:46.164340551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:46.164882 containerd[2028]: time="2025-02-13T19:49:46.164597387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:46.185015 containerd[2028]: time="2025-02-13T19:49:46.182611067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:49:46.187149 containerd[2028]: time="2025-02-13T19:49:46.186687203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:49:46.187149 containerd[2028]: time="2025-02-13T19:49:46.186748019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:46.187149 containerd[2028]: time="2025-02-13T19:49:46.186950423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:49:46.211377 systemd[1]: Started cri-containerd-f4be85ccdd7643104108959741d7c6682b0ef0841ec1aa01dad9d51bf84c903e.scope - libcontainer container f4be85ccdd7643104108959741d7c6682b0ef0841ec1aa01dad9d51bf84c903e.
Feb 13 19:49:46.236338 systemd[1]: Started cri-containerd-8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a.scope - libcontainer container 8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a.
Feb 13 19:49:46.293873 containerd[2028]: time="2025-02-13T19:49:46.293573171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bxfmf,Uid:cfbebd25-3e83-4926-b20a-90b6e165a7c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4be85ccdd7643104108959741d7c6682b0ef0841ec1aa01dad9d51bf84c903e\""
Feb 13 19:49:46.312860 containerd[2028]: time="2025-02-13T19:49:46.312609239Z" level=info msg="CreateContainer within sandbox \"f4be85ccdd7643104108959741d7c6682b0ef0841ec1aa01dad9d51bf84c903e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:49:46.351362 containerd[2028]: time="2025-02-13T19:49:46.351186936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r29nh,Uid:ff1006aa-f8f4-4883-9211-12c65eb2121c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\""
Feb 13 19:49:46.372903 containerd[2028]: time="2025-02-13T19:49:46.372489840Z" level=info msg="CreateContainer within sandbox \"f4be85ccdd7643104108959741d7c6682b0ef0841ec1aa01dad9d51bf84c903e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9cca8fa2f21f52b1c93c326cd45bf83bf99af4f8d1338b60078c86531e7269fe\""
Feb 13 19:49:46.377068 containerd[2028]: time="2025-02-13T19:49:46.373779420Z" level=info msg="StartContainer for \"9cca8fa2f21f52b1c93c326cd45bf83bf99af4f8d1338b60078c86531e7269fe\""
Feb 13 19:49:46.438488 systemd[1]: Started cri-containerd-9cca8fa2f21f52b1c93c326cd45bf83bf99af4f8d1338b60078c86531e7269fe.scope - libcontainer container 9cca8fa2f21f52b1c93c326cd45bf83bf99af4f8d1338b60078c86531e7269fe.
Feb 13 19:49:46.501119 containerd[2028]: time="2025-02-13T19:49:46.500950368Z" level=info msg="StartContainer for \"9cca8fa2f21f52b1c93c326cd45bf83bf99af4f8d1338b60078c86531e7269fe\" returns successfully"
Feb 13 19:49:47.250709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666089917.mount: Deactivated successfully.
Feb 13 19:49:47.398723 kubelet[3209]: I0213 19:49:47.398570    3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bxfmf" podStartSLOduration=3.398540077 podStartE2EDuration="3.398540077s" podCreationTimestamp="2025-02-13 19:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:47.395923189 +0000 UTC m=+8.564296268" watchObservedRunningTime="2025-02-13 19:49:47.398540077 +0000 UTC m=+8.566913132"
Feb 13 19:49:47.934992 containerd[2028]: time="2025-02-13T19:49:47.934899171Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:47.938121 containerd[2028]: time="2025-02-13T19:49:47.938046927Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 19:49:47.938527 containerd[2028]: time="2025-02-13T19:49:47.938486259Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:47.941766 containerd[2028]: time="2025-02-13T19:49:47.941452239Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.121827878s"
Feb 13 19:49:47.941766 containerd[2028]: time="2025-02-13T19:49:47.941525799Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 19:49:47.944392 containerd[2028]: time="2025-02-13T19:49:47.944317335Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:49:47.948573 containerd[2028]: time="2025-02-13T19:49:47.947799639Z" level=info msg="CreateContainer within sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:49:47.975570 containerd[2028]: time="2025-02-13T19:49:47.975452836Z" level=info msg="CreateContainer within sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\""
Feb 13 19:49:47.978056 containerd[2028]: time="2025-02-13T19:49:47.977631892Z" level=info msg="StartContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\""
Feb 13 19:49:48.041838 systemd[1]: run-containerd-runc-k8s.io-1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455-runc.Nik2mU.mount: Deactivated successfully.
Feb 13 19:49:48.057318 systemd[1]: Started cri-containerd-1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455.scope - libcontainer container 1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455.
Feb 13 19:49:48.109596 containerd[2028]: time="2025-02-13T19:49:48.109394976Z" level=info msg="StartContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" returns successfully"
Feb 13 19:49:49.408351 kubelet[3209]: I0213 19:49:49.407794    3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hjzlw" podStartSLOduration=3.280628521 podStartE2EDuration="5.407764635s" podCreationTimestamp="2025-02-13 19:49:44 +0000 UTC" firstStartedPulling="2025-02-13 19:49:45.816589225 +0000 UTC m=+6.984962280" lastFinishedPulling="2025-02-13 19:49:47.943725327 +0000 UTC m=+9.112098394" observedRunningTime="2025-02-13 19:49:48.487574318 +0000 UTC m=+9.655947361" watchObservedRunningTime="2025-02-13 19:49:49.407764635 +0000 UTC m=+10.576137690"
Feb 13 19:49:54.115088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340267303.mount: Deactivated successfully.
Feb 13 19:49:57.039997 containerd[2028]: time="2025-02-13T19:49:57.039887325Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:57.042131 containerd[2028]: time="2025-02-13T19:49:57.042029085Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 19:49:57.043871 containerd[2028]: time="2025-02-13T19:49:57.043757937Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:49:57.049580 containerd[2028]: time="2025-02-13T19:49:57.049397709Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.105004318s"
Feb 13 19:49:57.049580 containerd[2028]: time="2025-02-13T19:49:57.049470681Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 19:49:57.054616 containerd[2028]: time="2025-02-13T19:49:57.054538761Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:49:57.075818 containerd[2028]: time="2025-02-13T19:49:57.075708285Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\""
Feb 13 19:49:57.079014 containerd[2028]: time="2025-02-13T19:49:57.078741897Z" level=info msg="StartContainer for \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\""
Feb 13 19:49:57.140332 systemd[1]: Started cri-containerd-cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777.scope - libcontainer container cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777.
Feb 13 19:49:57.209219 containerd[2028]: time="2025-02-13T19:49:57.209112945Z" level=info msg="StartContainer for \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\" returns successfully"
Feb 13 19:49:57.254881 systemd[1]: cri-containerd-cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777.scope: Deactivated successfully.
Feb 13 19:49:58.067192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777-rootfs.mount: Deactivated successfully.
Feb 13 19:49:58.393328 containerd[2028]: time="2025-02-13T19:49:58.392897747Z" level=info msg="shim disconnected" id=cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777 namespace=k8s.io
Feb 13 19:49:58.393328 containerd[2028]: time="2025-02-13T19:49:58.393043175Z" level=warning msg="cleaning up after shim disconnected" id=cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777 namespace=k8s.io
Feb 13 19:49:58.393328 containerd[2028]: time="2025-02-13T19:49:58.393066023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:49:58.460281 containerd[2028]: time="2025-02-13T19:49:58.460141752Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:49:58.494486 containerd[2028]: time="2025-02-13T19:49:58.493192092Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\""
Feb 13 19:49:58.496337 containerd[2028]: time="2025-02-13T19:49:58.496236732Z" level=info msg="StartContainer for \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\""
Feb 13 19:49:58.583015 systemd[1]: Started cri-containerd-54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619.scope - libcontainer container 54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619.
Feb 13 19:49:58.654400 containerd[2028]: time="2025-02-13T19:49:58.654133981Z" level=info msg="StartContainer for \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\" returns successfully"
Feb 13 19:49:58.691675 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:49:58.693648 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:58.694189 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:49:58.705913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:49:58.706872 systemd[1]: cri-containerd-54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619.scope: Deactivated successfully.
Feb 13 19:49:58.782325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619-rootfs.mount: Deactivated successfully.
Feb 13 19:49:58.784672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:49:58.802721 containerd[2028]: time="2025-02-13T19:49:58.802634029Z" level=info msg="shim disconnected" id=54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619 namespace=k8s.io
Feb 13 19:49:58.802721 containerd[2028]: time="2025-02-13T19:49:58.802714453Z" level=warning msg="cleaning up after shim disconnected" id=54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619 namespace=k8s.io
Feb 13 19:49:58.803497 containerd[2028]: time="2025-02-13T19:49:58.802737361Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:49:59.469620 containerd[2028]: time="2025-02-13T19:49:59.469521553Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:49:59.537814 containerd[2028]: time="2025-02-13T19:49:59.537733129Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\""
Feb 13 19:49:59.540281 containerd[2028]: time="2025-02-13T19:49:59.538954285Z" level=info msg="StartContainer for \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\""
Feb 13 19:49:59.645319 systemd[1]: Started cri-containerd-ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c.scope - libcontainer container ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c.
Feb 13 19:49:59.739432 systemd[1]: cri-containerd-ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c.scope: Deactivated successfully.
Feb 13 19:49:59.746470 containerd[2028]: time="2025-02-13T19:49:59.746112326Z" level=info msg="StartContainer for \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\" returns successfully"
Feb 13 19:49:59.810200 containerd[2028]: time="2025-02-13T19:49:59.809795570Z" level=info msg="shim disconnected" id=ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c namespace=k8s.io
Feb 13 19:49:59.810200 containerd[2028]: time="2025-02-13T19:49:59.810072254Z" level=warning msg="cleaning up after shim disconnected" id=ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c namespace=k8s.io
Feb 13 19:49:59.810200 containerd[2028]: time="2025-02-13T19:49:59.810101570Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:00.474428 containerd[2028]: time="2025-02-13T19:50:00.474338186Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:50:00.502110 containerd[2028]: time="2025-02-13T19:50:00.497838914Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\""
Feb 13 19:50:00.502110 containerd[2028]: time="2025-02-13T19:50:00.498655034Z" level=info msg="StartContainer for \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\""
Feb 13 19:50:00.521789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c-rootfs.mount: Deactivated successfully.
Feb 13 19:50:00.569260 systemd[1]: Started cri-containerd-46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc.scope - libcontainer container 46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc.
Feb 13 19:50:00.630932 systemd[1]: cri-containerd-46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc.scope: Deactivated successfully.
Feb 13 19:50:00.637440 containerd[2028]: time="2025-02-13T19:50:00.637130146Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice/cri-containerd-46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc.scope/memory.events\": no such file or directory"
Feb 13 19:50:00.643141 containerd[2028]: time="2025-02-13T19:50:00.640888958Z" level=info msg="StartContainer for \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\" returns successfully"
Feb 13 19:50:00.691770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc-rootfs.mount: Deactivated successfully.
Feb 13 19:50:00.699221 containerd[2028]: time="2025-02-13T19:50:00.699133935Z" level=info msg="shim disconnected" id=46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc namespace=k8s.io
Feb 13 19:50:00.699221 containerd[2028]: time="2025-02-13T19:50:00.699218187Z" level=warning msg="cleaning up after shim disconnected" id=46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc namespace=k8s.io
Feb 13 19:50:00.699821 containerd[2028]: time="2025-02-13T19:50:00.699240411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:50:01.482540 containerd[2028]: time="2025-02-13T19:50:01.481595295Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:50:01.517615 containerd[2028]: time="2025-02-13T19:50:01.517397547Z" level=info msg="CreateContainer within sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\""
Feb 13 19:50:01.518999 containerd[2028]: time="2025-02-13T19:50:01.518862111Z" level=info msg="StartContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\""
Feb 13 19:50:01.598314 systemd[1]: Started cri-containerd-5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861.scope - libcontainer container 5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861.
Feb 13 19:50:01.661907 containerd[2028]: time="2025-02-13T19:50:01.661671988Z" level=info msg="StartContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" returns successfully"
Feb 13 19:50:01.933715 kubelet[3209]: I0213 19:50:01.933227    3209 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:50:02.018932 systemd[1]: Created slice kubepods-burstable-pod87882388_a59b_4d56_926d_8f35683e47b9.slice - libcontainer container kubepods-burstable-pod87882388_a59b_4d56_926d_8f35683e47b9.slice.
Feb 13 19:50:02.056009 systemd[1]: Created slice kubepods-burstable-podf738d4d8_d51e_4fe4_a60a_cb27b5e761aa.slice - libcontainer container kubepods-burstable-podf738d4d8_d51e_4fe4_a60a_cb27b5e761aa.slice.
Feb 13 19:50:02.129383 kubelet[3209]: I0213 19:50:02.129093 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87882388-a59b-4d56-926d-8f35683e47b9-config-volume\") pod \"coredns-6f6b679f8f-hpkfw\" (UID: \"87882388-a59b-4d56-926d-8f35683e47b9\") " pod="kube-system/coredns-6f6b679f8f-hpkfw" Feb 13 19:50:02.129383 kubelet[3209]: I0213 19:50:02.129168 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntsx\" (UniqueName: \"kubernetes.io/projected/87882388-a59b-4d56-926d-8f35683e47b9-kube-api-access-fntsx\") pod \"coredns-6f6b679f8f-hpkfw\" (UID: \"87882388-a59b-4d56-926d-8f35683e47b9\") " pod="kube-system/coredns-6f6b679f8f-hpkfw" Feb 13 19:50:02.129383 kubelet[3209]: I0213 19:50:02.129215 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f738d4d8-d51e-4fe4-a60a-cb27b5e761aa-config-volume\") pod \"coredns-6f6b679f8f-q56rk\" (UID: \"f738d4d8-d51e-4fe4-a60a-cb27b5e761aa\") " pod="kube-system/coredns-6f6b679f8f-q56rk" Feb 13 19:50:02.129383 kubelet[3209]: I0213 19:50:02.129252 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57q5f\" (UniqueName: \"kubernetes.io/projected/f738d4d8-d51e-4fe4-a60a-cb27b5e761aa-kube-api-access-57q5f\") pod \"coredns-6f6b679f8f-q56rk\" (UID: \"f738d4d8-d51e-4fe4-a60a-cb27b5e761aa\") " pod="kube-system/coredns-6f6b679f8f-q56rk" Feb 13 19:50:02.341208 containerd[2028]: time="2025-02-13T19:50:02.341152719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkfw,Uid:87882388-a59b-4d56-926d-8f35683e47b9,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:02.396053 containerd[2028]: time="2025-02-13T19:50:02.395569815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-q56rk,Uid:f738d4d8-d51e-4fe4-a60a-cb27b5e761aa,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:02.550004 kubelet[3209]: I0213 19:50:02.548376 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r29nh" podStartSLOduration=7.853492783 podStartE2EDuration="18.548327092s" podCreationTimestamp="2025-02-13 19:49:44 +0000 UTC" firstStartedPulling="2025-02-13 19:49:46.355757484 +0000 UTC m=+7.524130551" lastFinishedPulling="2025-02-13 19:49:57.050591817 +0000 UTC m=+18.218964860" observedRunningTime="2025-02-13 19:50:02.544164304 +0000 UTC m=+23.712537371" watchObservedRunningTime="2025-02-13 19:50:02.548327092 +0000 UTC m=+23.716700135" Feb 13 19:50:04.809568 systemd-networkd[1932]: cilium_host: Link UP Feb 13 19:50:04.811649 systemd-networkd[1932]: cilium_net: Link UP Feb 13 19:50:04.813626 systemd-networkd[1932]: cilium_net: Gained carrier Feb 13 19:50:04.814192 systemd-networkd[1932]: cilium_host: Gained carrier Feb 13 19:50:04.815423 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:04.817815 (udev-worker)[4267]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:05.018635 systemd-networkd[1932]: cilium_vxlan: Link UP Feb 13 19:50:05.018654 systemd-networkd[1932]: cilium_vxlan: Gained carrier Feb 13 19:50:05.305241 systemd-networkd[1932]: cilium_host: Gained IPv6LL Feb 13 19:50:05.480447 systemd-networkd[1932]: cilium_net: Gained IPv6LL Feb 13 19:50:05.633181 kernel: NET: Registered PF_ALG protocol family Feb 13 19:50:06.376337 systemd-networkd[1932]: cilium_vxlan: Gained IPv6LL Feb 13 19:50:07.208487 systemd-networkd[1932]: lxc_health: Link UP Feb 13 19:50:07.224817 systemd-networkd[1932]: lxc_health: Gained carrier Feb 13 19:50:07.948504 systemd-networkd[1932]: lxcc890ca5ef36f: Link UP Feb 13 19:50:07.956023 kernel: eth0: renamed from tmp599a9 Feb 13 19:50:07.967479 systemd-networkd[1932]: lxcc890ca5ef36f: Gained carrier Feb 13 19:50:08.018126 systemd-networkd[1932]: lxcdba546352315: Link UP Feb 13 19:50:08.021355 kernel: eth0: renamed from tmp4620a Feb 13 19:50:08.028248 (udev-worker)[4308]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:08.031008 systemd-networkd[1932]: lxcdba546352315: Gained carrier Feb 13 19:50:09.192287 systemd-networkd[1932]: lxcc890ca5ef36f: Gained IPv6LL Feb 13 19:50:09.256248 systemd-networkd[1932]: lxc_health: Gained IPv6LL Feb 13 19:50:09.832280 systemd-networkd[1932]: lxcdba546352315: Gained IPv6LL Feb 13 19:50:11.862426 ntpd[1997]: Listen normally on 8 cilium_host 192.168.0.158:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 8 cilium_host 192.168.0.158:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 9 cilium_net [fe80::58b4:a9ff:fe86:fd75%4]:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 10 cilium_host [fe80::74a7:beff:fef9:d038%5]:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 11 cilium_vxlan [fe80::b8dc:50ff:fe11:53ca%6]:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 12 lxc_health [fe80::f8db:a5ff:feae:2d56%8]:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 13 lxcc890ca5ef36f [fe80::4c08:b5ff:fe77:da21%10]:123 Feb 13 19:50:11.865127 ntpd[1997]: 13 Feb 19:50:11 ntpd[1997]: Listen normally on 14 lxcdba546352315 [fe80::93:66ff:fe6b:176b%12]:123 Feb 13 19:50:11.862611 ntpd[1997]: Listen normally on 9 cilium_net [fe80::58b4:a9ff:fe86:fd75%4]:123 Feb 13 19:50:11.862722 ntpd[1997]: Listen normally on 10 cilium_host [fe80::74a7:beff:fef9:d038%5]:123 Feb 13 19:50:11.862799 ntpd[1997]: Listen normally on 11 cilium_vxlan [fe80::b8dc:50ff:fe11:53ca%6]:123 Feb 13 19:50:11.862883 ntpd[1997]: Listen normally on 12 lxc_health [fe80::f8db:a5ff:feae:2d56%8]:123 Feb 13 19:50:11.863003 ntpd[1997]: Listen normally on 13 lxcc890ca5ef36f [fe80::4c08:b5ff:fe77:da21%10]:123 Feb 13 19:50:11.863096 ntpd[1997]: Listen normally on 14 lxcdba546352315 [fe80::93:66ff:fe6b:176b%12]:123 Feb 13 19:50:17.477435 systemd[1]: Started 
sshd@7-172.31.28.108:22-139.178.89.65:53674.service - OpenSSH per-connection server daemon (139.178.89.65:53674). Feb 13 19:50:17.690208 sshd[4672]: Accepted publickey for core from 139.178.89.65 port 53674 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:17.693885 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:17.710447 systemd-logind[2004]: New session 8 of user core. Feb 13 19:50:17.719292 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:50:18.132857 sshd[4672]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:18.146117 systemd[1]: sshd@7-172.31.28.108:22-139.178.89.65:53674.service: Deactivated successfully. Feb 13 19:50:18.146468 systemd-logind[2004]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:50:18.157243 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:50:18.163055 systemd-logind[2004]: Removed session 8. Feb 13 19:50:18.216528 containerd[2028]: time="2025-02-13T19:50:18.214554606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:18.216528 containerd[2028]: time="2025-02-13T19:50:18.216064074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:18.216528 containerd[2028]: time="2025-02-13T19:50:18.216186546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:18.218648 containerd[2028]: time="2025-02-13T19:50:18.218371950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:18.308552 systemd[1]: Started cri-containerd-599a992984076dc2f5ed65ef7e3cf05569465d2082d8e811a5dc64b333ffa77c.scope - libcontainer container 599a992984076dc2f5ed65ef7e3cf05569465d2082d8e811a5dc64b333ffa77c. Feb 13 19:50:18.414104 containerd[2028]: time="2025-02-13T19:50:18.413609455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:18.415029 containerd[2028]: time="2025-02-13T19:50:18.413782363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:18.415029 containerd[2028]: time="2025-02-13T19:50:18.413844955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:18.415029 containerd[2028]: time="2025-02-13T19:50:18.414794131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:18.499446 containerd[2028]: time="2025-02-13T19:50:18.499000303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hpkfw,Uid:87882388-a59b-4d56-926d-8f35683e47b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"599a992984076dc2f5ed65ef7e3cf05569465d2082d8e811a5dc64b333ffa77c\"" Feb 13 19:50:18.511770 containerd[2028]: time="2025-02-13T19:50:18.511659643Z" level=info msg="CreateContainer within sandbox \"599a992984076dc2f5ed65ef7e3cf05569465d2082d8e811a5dc64b333ffa77c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:18.517305 systemd[1]: Started cri-containerd-4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef.scope - libcontainer container 4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef. 
Feb 13 19:50:18.543242 containerd[2028]: time="2025-02-13T19:50:18.543179959Z" level=info msg="CreateContainer within sandbox \"599a992984076dc2f5ed65ef7e3cf05569465d2082d8e811a5dc64b333ffa77c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f7ad4ca3c963ab9ec555f55d42585777b19800fa8b02b52b77b5dce99d571e3\"" Feb 13 19:50:18.544626 containerd[2028]: time="2025-02-13T19:50:18.544556431Z" level=info msg="StartContainer for \"8f7ad4ca3c963ab9ec555f55d42585777b19800fa8b02b52b77b5dce99d571e3\"" Feb 13 19:50:18.628586 systemd[1]: Started cri-containerd-8f7ad4ca3c963ab9ec555f55d42585777b19800fa8b02b52b77b5dce99d571e3.scope - libcontainer container 8f7ad4ca3c963ab9ec555f55d42585777b19800fa8b02b52b77b5dce99d571e3. Feb 13 19:50:18.744758 containerd[2028]: time="2025-02-13T19:50:18.744668948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q56rk,Uid:f738d4d8-d51e-4fe4-a60a-cb27b5e761aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef\"" Feb 13 19:50:18.767608 containerd[2028]: time="2025-02-13T19:50:18.765803217Z" level=info msg="CreateContainer within sandbox \"4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:18.806623 containerd[2028]: time="2025-02-13T19:50:18.806542389Z" level=info msg="StartContainer for \"8f7ad4ca3c963ab9ec555f55d42585777b19800fa8b02b52b77b5dce99d571e3\" returns successfully" Feb 13 19:50:18.821223 containerd[2028]: time="2025-02-13T19:50:18.820954065Z" level=info msg="CreateContainer within sandbox \"4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d795f4530e614f6327bcc43944cb24fd4806c3cc47dcfa5bf8b31dd3f27836a6\"" Feb 13 19:50:18.827499 containerd[2028]: time="2025-02-13T19:50:18.827405709Z" level=info msg="StartContainer for 
\"d795f4530e614f6327bcc43944cb24fd4806c3cc47dcfa5bf8b31dd3f27836a6\"" Feb 13 19:50:18.912890 systemd[1]: Started cri-containerd-d795f4530e614f6327bcc43944cb24fd4806c3cc47dcfa5bf8b31dd3f27836a6.scope - libcontainer container d795f4530e614f6327bcc43944cb24fd4806c3cc47dcfa5bf8b31dd3f27836a6. Feb 13 19:50:19.005322 containerd[2028]: time="2025-02-13T19:50:19.005162466Z" level=info msg="StartContainer for \"d795f4530e614f6327bcc43944cb24fd4806c3cc47dcfa5bf8b31dd3f27836a6\" returns successfully" Feb 13 19:50:19.232743 systemd[1]: run-containerd-runc-k8s.io-4620a0192c9b49e65891aa4dbfa31ed549335de78e1d374e1dbabeea0e1fa6ef-runc.5IlcBu.mount: Deactivated successfully. Feb 13 19:50:19.590338 kubelet[3209]: I0213 19:50:19.590167 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hpkfw" podStartSLOduration=35.590125101 podStartE2EDuration="35.590125101s" podCreationTimestamp="2025-02-13 19:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:19.586893921 +0000 UTC m=+40.755267000" watchObservedRunningTime="2025-02-13 19:50:19.590125101 +0000 UTC m=+40.758498156" Feb 13 19:50:23.175534 systemd[1]: Started sshd@8-172.31.28.108:22-139.178.89.65:53684.service - OpenSSH per-connection server daemon (139.178.89.65:53684). Feb 13 19:50:23.367311 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 53684 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:23.371126 sshd[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:23.379800 systemd-logind[2004]: New session 9 of user core. Feb 13 19:50:23.391353 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 19:50:23.659666 sshd[4858]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:23.668638 systemd[1]: sshd@8-172.31.28.108:22-139.178.89.65:53684.service: Deactivated successfully. Feb 13 19:50:23.673676 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:50:23.675538 systemd-logind[2004]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:50:23.679431 systemd-logind[2004]: Removed session 9. Feb 13 19:50:28.702569 systemd[1]: Started sshd@9-172.31.28.108:22-139.178.89.65:52534.service - OpenSSH per-connection server daemon (139.178.89.65:52534). Feb 13 19:50:28.876379 sshd[4872]: Accepted publickey for core from 139.178.89.65 port 52534 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:28.882394 sshd[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:28.893882 systemd-logind[2004]: New session 10 of user core. Feb 13 19:50:28.900276 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:50:29.147030 sshd[4872]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:29.156716 systemd[1]: sshd@9-172.31.28.108:22-139.178.89.65:52534.service: Deactivated successfully. Feb 13 19:50:29.161600 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:29.166400 systemd-logind[2004]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:50:29.168634 systemd-logind[2004]: Removed session 10. Feb 13 19:50:34.191516 systemd[1]: Started sshd@10-172.31.28.108:22-139.178.89.65:52548.service - OpenSSH per-connection server daemon (139.178.89.65:52548). Feb 13 19:50:34.365244 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 52548 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:34.369166 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:34.383938 systemd-logind[2004]: New session 11 of user core. 
Feb 13 19:50:34.387347 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:34.650687 sshd[4886]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:34.660102 systemd[1]: sshd@10-172.31.28.108:22-139.178.89.65:52548.service: Deactivated successfully. Feb 13 19:50:34.666293 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:34.671420 systemd-logind[2004]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:34.674952 systemd-logind[2004]: Removed session 11. Feb 13 19:50:39.692519 systemd[1]: Started sshd@11-172.31.28.108:22-139.178.89.65:59426.service - OpenSSH per-connection server daemon (139.178.89.65:59426). Feb 13 19:50:39.881365 sshd[4901]: Accepted publickey for core from 139.178.89.65 port 59426 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:39.886279 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:39.905079 systemd-logind[2004]: New session 12 of user core. Feb 13 19:50:39.912405 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:40.167177 sshd[4901]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:40.173572 systemd[1]: sshd@11-172.31.28.108:22-139.178.89.65:59426.service: Deactivated successfully. Feb 13 19:50:40.178220 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:40.179682 systemd-logind[2004]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:50:40.183006 systemd-logind[2004]: Removed session 12. Feb 13 19:50:40.207504 systemd[1]: Started sshd@12-172.31.28.108:22-139.178.89.65:59440.service - OpenSSH per-connection server daemon (139.178.89.65:59440). 
Feb 13 19:50:40.394484 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 59440 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:40.398379 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:40.410182 systemd-logind[2004]: New session 13 of user core. Feb 13 19:50:40.416308 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:50:40.795373 sshd[4915]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:40.806949 systemd[1]: sshd@12-172.31.28.108:22-139.178.89.65:59440.service: Deactivated successfully. Feb 13 19:50:40.814889 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:50:40.820329 systemd-logind[2004]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:50:40.853913 systemd[1]: Started sshd@13-172.31.28.108:22-139.178.89.65:59446.service - OpenSSH per-connection server daemon (139.178.89.65:59446). Feb 13 19:50:40.863159 systemd-logind[2004]: Removed session 13. Feb 13 19:50:41.067067 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 59446 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:41.071460 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:41.083099 systemd-logind[2004]: New session 14 of user core. Feb 13 19:50:41.093365 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:50:41.376392 sshd[4926]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:41.386386 systemd[1]: sshd@13-172.31.28.108:22-139.178.89.65:59446.service: Deactivated successfully. Feb 13 19:50:41.390938 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:50:41.394316 systemd-logind[2004]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:50:41.397100 systemd-logind[2004]: Removed session 14. 
Feb 13 19:50:46.425653 systemd[1]: Started sshd@14-172.31.28.108:22-139.178.89.65:48312.service - OpenSSH per-connection server daemon (139.178.89.65:48312). Feb 13 19:50:46.623020 sshd[4939]: Accepted publickey for core from 139.178.89.65 port 48312 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:46.625999 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:46.636667 systemd-logind[2004]: New session 15 of user core. Feb 13 19:50:46.642398 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:50:46.908703 sshd[4939]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:46.916941 systemd[1]: sshd@14-172.31.28.108:22-139.178.89.65:48312.service: Deactivated successfully. Feb 13 19:50:46.923115 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:50:46.925410 systemd-logind[2004]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:50:46.927451 systemd-logind[2004]: Removed session 15. Feb 13 19:50:51.952632 systemd[1]: Started sshd@15-172.31.28.108:22-139.178.89.65:48328.service - OpenSSH per-connection server daemon (139.178.89.65:48328). Feb 13 19:50:52.134116 sshd[4954]: Accepted publickey for core from 139.178.89.65 port 48328 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:52.137232 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:52.145876 systemd-logind[2004]: New session 16 of user core. Feb 13 19:50:52.154335 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:50:52.421776 sshd[4954]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:52.428832 systemd[1]: sshd@15-172.31.28.108:22-139.178.89.65:48328.service: Deactivated successfully. Feb 13 19:50:52.434222 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:50:52.436230 systemd-logind[2004]: Session 16 logged out. Waiting for processes to exit. 
Feb 13 19:50:52.438237 systemd-logind[2004]: Removed session 16. Feb 13 19:50:57.471771 systemd[1]: Started sshd@16-172.31.28.108:22-139.178.89.65:35686.service - OpenSSH per-connection server daemon (139.178.89.65:35686). Feb 13 19:50:57.662649 sshd[4967]: Accepted publickey for core from 139.178.89.65 port 35686 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:57.666312 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:57.677618 systemd-logind[2004]: New session 17 of user core. Feb 13 19:50:57.682590 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:50:57.952431 sshd[4967]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:57.963943 systemd[1]: sshd@16-172.31.28.108:22-139.178.89.65:35686.service: Deactivated successfully. Feb 13 19:50:57.970817 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:50:57.972871 systemd-logind[2004]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:50:57.992672 systemd-logind[2004]: Removed session 17. Feb 13 19:50:58.003596 systemd[1]: Started sshd@17-172.31.28.108:22-139.178.89.65:35694.service - OpenSSH per-connection server daemon (139.178.89.65:35694). Feb 13 19:50:58.184943 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 35694 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:58.188463 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:58.198659 systemd-logind[2004]: New session 18 of user core. Feb 13 19:50:58.206335 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:50:58.539802 sshd[4980]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:58.546421 systemd[1]: sshd@17-172.31.28.108:22-139.178.89.65:35694.service: Deactivated successfully. Feb 13 19:50:58.550665 systemd[1]: session-18.scope: Deactivated successfully. 
Feb 13 19:50:58.552465 systemd-logind[2004]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:50:58.555623 systemd-logind[2004]: Removed session 18. Feb 13 19:50:58.577521 systemd[1]: Started sshd@18-172.31.28.108:22-139.178.89.65:35710.service - OpenSSH per-connection server daemon (139.178.89.65:35710). Feb 13 19:50:58.761037 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 35710 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:58.764726 sshd[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:58.773916 systemd-logind[2004]: New session 19 of user core. Feb 13 19:50:58.781349 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:51:01.741572 sshd[4991]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:01.756272 systemd[1]: sshd@18-172.31.28.108:22-139.178.89.65:35710.service: Deactivated successfully. Feb 13 19:51:01.766157 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:51:01.766924 systemd[1]: session-19.scope: Consumed 1.023s CPU time. Feb 13 19:51:01.769892 systemd-logind[2004]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:51:01.801745 systemd[1]: Started sshd@19-172.31.28.108:22-139.178.89.65:35722.service - OpenSSH per-connection server daemon (139.178.89.65:35722). Feb 13 19:51:01.805828 systemd-logind[2004]: Removed session 19. Feb 13 19:51:01.989892 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 35722 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:01.993125 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.004454 systemd-logind[2004]: New session 20 of user core. Feb 13 19:51:02.016425 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 19:51:02.573242 sshd[5012]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:02.585030 systemd[1]: sshd@19-172.31.28.108:22-139.178.89.65:35722.service: Deactivated successfully. Feb 13 19:51:02.594393 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:51:02.620690 systemd-logind[2004]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:51:02.633605 systemd[1]: Started sshd@20-172.31.28.108:22-139.178.89.65:35736.service - OpenSSH per-connection server daemon (139.178.89.65:35736). Feb 13 19:51:02.640127 systemd-logind[2004]: Removed session 20. Feb 13 19:51:02.828602 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 35736 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:02.830779 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.838343 systemd-logind[2004]: New session 21 of user core. Feb 13 19:51:02.848263 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:51:03.102232 sshd[5023]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:03.111930 systemd[1]: sshd@20-172.31.28.108:22-139.178.89.65:35736.service: Deactivated successfully. Feb 13 19:51:03.118679 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:51:03.120779 systemd-logind[2004]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:51:03.125037 systemd-logind[2004]: Removed session 21. Feb 13 19:51:08.147553 systemd[1]: Started sshd@21-172.31.28.108:22-139.178.89.65:40034.service - OpenSSH per-connection server daemon (139.178.89.65:40034). Feb 13 19:51:08.341361 sshd[5036]: Accepted publickey for core from 139.178.89.65 port 40034 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:08.345751 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:08.360428 systemd-logind[2004]: New session 22 of user core. 
Feb 13 19:51:08.369387 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:51:08.659187 sshd[5036]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:08.665402 systemd-logind[2004]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:51:08.666946 systemd[1]: sshd@21-172.31.28.108:22-139.178.89.65:40034.service: Deactivated successfully. Feb 13 19:51:08.672573 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:51:08.676856 systemd-logind[2004]: Removed session 22. Feb 13 19:51:13.697541 systemd[1]: Started sshd@22-172.31.28.108:22-139.178.89.65:40040.service - OpenSSH per-connection server daemon (139.178.89.65:40040). Feb 13 19:51:13.883597 sshd[5052]: Accepted publickey for core from 139.178.89.65 port 40040 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:13.887174 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:13.899386 systemd-logind[2004]: New session 23 of user core. Feb 13 19:51:13.910508 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:51:14.188017 sshd[5052]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:14.198073 systemd[1]: sshd@22-172.31.28.108:22-139.178.89.65:40040.service: Deactivated successfully. Feb 13 19:51:14.202888 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:51:14.205749 systemd-logind[2004]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:51:14.210154 systemd-logind[2004]: Removed session 23. Feb 13 19:51:19.232575 systemd[1]: Started sshd@23-172.31.28.108:22-139.178.89.65:43476.service - OpenSSH per-connection server daemon (139.178.89.65:43476). 
Feb 13 19:51:19.418126 sshd[5067]: Accepted publickey for core from 139.178.89.65 port 43476 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:19.422056 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:19.430391 systemd-logind[2004]: New session 24 of user core. Feb 13 19:51:19.437235 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:51:19.691804 sshd[5067]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:19.699197 systemd[1]: sshd@23-172.31.28.108:22-139.178.89.65:43476.service: Deactivated successfully. Feb 13 19:51:19.703546 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:51:19.707735 systemd-logind[2004]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:51:19.710027 systemd-logind[2004]: Removed session 24. Feb 13 19:51:24.741543 systemd[1]: Started sshd@24-172.31.28.108:22-139.178.89.65:44830.service - OpenSSH per-connection server daemon (139.178.89.65:44830). Feb 13 19:51:24.918369 sshd[5080]: Accepted publickey for core from 139.178.89.65 port 44830 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:24.921575 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:24.932453 systemd-logind[2004]: New session 25 of user core. Feb 13 19:51:24.941681 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:51:25.217401 sshd[5080]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:25.228505 systemd[1]: sshd@24-172.31.28.108:22-139.178.89.65:44830.service: Deactivated successfully. Feb 13 19:51:25.231283 systemd-logind[2004]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:51:25.235559 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:51:25.258572 systemd-logind[2004]: Removed session 25. 
Feb 13 19:51:25.272476 systemd[1]: Started sshd@25-172.31.28.108:22-139.178.89.65:44836.service - OpenSSH per-connection server daemon (139.178.89.65:44836). Feb 13 19:51:25.450758 sshd[5092]: Accepted publickey for core from 139.178.89.65 port 44836 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:25.453609 sshd[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:25.461885 systemd-logind[2004]: New session 26 of user core. Feb 13 19:51:25.473324 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:51:28.365701 kubelet[3209]: I0213 19:51:28.365540 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q56rk" podStartSLOduration=104.365505134 podStartE2EDuration="1m44.365505134s" podCreationTimestamp="2025-02-13 19:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:19.642545205 +0000 UTC m=+40.810918296" watchObservedRunningTime="2025-02-13 19:51:28.365505134 +0000 UTC m=+109.533878189" Feb 13 19:51:28.403943 containerd[2028]: time="2025-02-13T19:51:28.403825454Z" level=info msg="StopContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" with timeout 30 (s)" Feb 13 19:51:28.407089 containerd[2028]: time="2025-02-13T19:51:28.405343226Z" level=info msg="Stop container \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" with signal terminated" Feb 13 19:51:28.442229 systemd[1]: cri-containerd-1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455.scope: Deactivated successfully. 
Feb 13 19:51:28.462856 containerd[2028]: time="2025-02-13T19:51:28.462781455Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:51:28.482113 containerd[2028]: time="2025-02-13T19:51:28.481857747Z" level=info msg="StopContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" with timeout 2 (s)"
Feb 13 19:51:28.485623 containerd[2028]: time="2025-02-13T19:51:28.485406579Z" level=info msg="Stop container \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" with signal terminated"
Feb 13 19:51:28.511718 systemd-networkd[1932]: lxc_health: Link DOWN
Feb 13 19:51:28.511737 systemd-networkd[1932]: lxc_health: Lost carrier
Feb 13 19:51:28.534334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455-rootfs.mount: Deactivated successfully.
Feb 13 19:51:28.544917 systemd[1]: cri-containerd-5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861.scope: Deactivated successfully.
Feb 13 19:51:28.547888 systemd[1]: cri-containerd-5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861.scope: Consumed 16.908s CPU time.
Feb 13 19:51:28.560396 containerd[2028]: time="2025-02-13T19:51:28.559703163Z" level=info msg="shim disconnected" id=1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455 namespace=k8s.io
Feb 13 19:51:28.560396 containerd[2028]: time="2025-02-13T19:51:28.560031099Z" level=warning msg="cleaning up after shim disconnected" id=1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455 namespace=k8s.io
Feb 13 19:51:28.560396 containerd[2028]: time="2025-02-13T19:51:28.560118795Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:28.622715 containerd[2028]: time="2025-02-13T19:51:28.620638839Z" level=info msg="StopContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" returns successfully"
Feb 13 19:51:28.621861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861-rootfs.mount: Deactivated successfully.
Feb 13 19:51:28.627713 containerd[2028]: time="2025-02-13T19:51:28.627619264Z" level=info msg="StopPodSandbox for \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\""
Feb 13 19:51:28.627920 containerd[2028]: time="2025-02-13T19:51:28.627717052Z" level=info msg="Container to stop \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.632149 containerd[2028]: time="2025-02-13T19:51:28.628553896Z" level=info msg="shim disconnected" id=5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861 namespace=k8s.io
Feb 13 19:51:28.632149 containerd[2028]: time="2025-02-13T19:51:28.629230540Z" level=warning msg="cleaning up after shim disconnected" id=5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861 namespace=k8s.io
Feb 13 19:51:28.632149 containerd[2028]: time="2025-02-13T19:51:28.629328196Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:28.637463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd-shm.mount: Deactivated successfully.
Feb 13 19:51:28.655082 systemd[1]: cri-containerd-69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd.scope: Deactivated successfully.
Feb 13 19:51:28.681878 containerd[2028]: time="2025-02-13T19:51:28.681712360Z" level=info msg="StopContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" returns successfully"
Feb 13 19:51:28.683807 containerd[2028]: time="2025-02-13T19:51:28.683686564Z" level=info msg="StopPodSandbox for \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\""
Feb 13 19:51:28.684435 containerd[2028]: time="2025-02-13T19:51:28.684265060Z" level=info msg="Container to stop \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.684770 containerd[2028]: time="2025-02-13T19:51:28.684373852Z" level=info msg="Container to stop \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.684909 containerd[2028]: time="2025-02-13T19:51:28.684708568Z" level=info msg="Container to stop \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.685165 containerd[2028]: time="2025-02-13T19:51:28.685064116Z" level=info msg="Container to stop \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.685389 containerd[2028]: time="2025-02-13T19:51:28.685313452Z" level=info msg="Container to stop \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:51:28.693150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a-shm.mount: Deactivated successfully.
Feb 13 19:51:28.712179 systemd[1]: cri-containerd-8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a.scope: Deactivated successfully.
Feb 13 19:51:28.740548 containerd[2028]: time="2025-02-13T19:51:28.740214700Z" level=info msg="shim disconnected" id=69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd namespace=k8s.io
Feb 13 19:51:28.740548 containerd[2028]: time="2025-02-13T19:51:28.740298604Z" level=warning msg="cleaning up after shim disconnected" id=69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd namespace=k8s.io
Feb 13 19:51:28.740548 containerd[2028]: time="2025-02-13T19:51:28.740320096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:28.773766 containerd[2028]: time="2025-02-13T19:51:28.773544292Z" level=info msg="TearDown network for sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" successfully"
Feb 13 19:51:28.773766 containerd[2028]: time="2025-02-13T19:51:28.773608576Z" level=info msg="StopPodSandbox for \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" returns successfully"
Feb 13 19:51:28.781950 containerd[2028]: time="2025-02-13T19:51:28.781634944Z" level=info msg="shim disconnected" id=8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a namespace=k8s.io
Feb 13 19:51:28.781950 containerd[2028]: time="2025-02-13T19:51:28.781706776Z" level=warning msg="cleaning up after shim disconnected" id=8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a namespace=k8s.io
Feb 13 19:51:28.781950 containerd[2028]: time="2025-02-13T19:51:28.781729060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:28.791847 kubelet[3209]: I0213 19:51:28.790944 3209 scope.go:117] "RemoveContainer" containerID="1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455"
Feb 13 19:51:28.800004 containerd[2028]: time="2025-02-13T19:51:28.798302080Z" level=info msg="RemoveContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\""
Feb 13 19:51:28.817871 containerd[2028]: time="2025-02-13T19:51:28.817549036Z" level=info msg="RemoveContainer for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" returns successfully"
Feb 13 19:51:28.818479 kubelet[3209]: I0213 19:51:28.818442 3209 scope.go:117] "RemoveContainer" containerID="1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455"
Feb 13 19:51:28.819666 containerd[2028]: time="2025-02-13T19:51:28.819470428Z" level=error msg="ContainerStatus for \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\": not found"
Feb 13 19:51:28.820140 kubelet[3209]: E0213 19:51:28.820040 3209 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\": not found" containerID="1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455"
Feb 13 19:51:28.820305 kubelet[3209]: I0213 19:51:28.820135 3209 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455"} err="failed to get container status \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\": rpc error: code = NotFound desc = an error occurred when try to find container \"1e808958fd247f84e861c383942a22065b8f92470c0087066ba6c49ae4ef2455\": not found"
Feb 13 19:51:28.826224 containerd[2028]: time="2025-02-13T19:51:28.826136789Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:51:28.828124 containerd[2028]: time="2025-02-13T19:51:28.827917097Z" level=info msg="TearDown network for sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" successfully"
Feb 13 19:51:28.828124 containerd[2028]: time="2025-02-13T19:51:28.828018617Z" level=info msg="StopPodSandbox for \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" returns successfully"
Feb 13 19:51:28.861933 kubelet[3209]: I0213 19:51:28.860596 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mkg7\" (UniqueName: \"kubernetes.io/projected/7f2b075f-deae-4167-9b8f-ab09703863cf-kube-api-access-2mkg7\") pod \"7f2b075f-deae-4167-9b8f-ab09703863cf\" (UID: \"7f2b075f-deae-4167-9b8f-ab09703863cf\") "
Feb 13 19:51:28.861933 kubelet[3209]: I0213 19:51:28.860687 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2b075f-deae-4167-9b8f-ab09703863cf-cilium-config-path\") pod \"7f2b075f-deae-4167-9b8f-ab09703863cf\" (UID: \"7f2b075f-deae-4167-9b8f-ab09703863cf\") "
Feb 13 19:51:28.869836 kubelet[3209]: I0213 19:51:28.869762 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b075f-deae-4167-9b8f-ab09703863cf-kube-api-access-2mkg7" (OuterVolumeSpecName: "kube-api-access-2mkg7") pod "7f2b075f-deae-4167-9b8f-ab09703863cf" (UID: "7f2b075f-deae-4167-9b8f-ab09703863cf"). InnerVolumeSpecName "kube-api-access-2mkg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:51:28.876751 kubelet[3209]: I0213 19:51:28.872735 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2b075f-deae-4167-9b8f-ab09703863cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f2b075f-deae-4167-9b8f-ab09703863cf" (UID: "7f2b075f-deae-4167-9b8f-ab09703863cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:51:28.961930 kubelet[3209]: I0213 19:51:28.961869 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-hostproc\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962128 kubelet[3209]: I0213 19:51:28.961939 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-config-path\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962128 kubelet[3209]: I0213 19:51:28.962019 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-run\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962128 kubelet[3209]: I0213 19:51:28.962058 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-bpf-maps\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962128 kubelet[3209]: I0213 19:51:28.962093 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-kernel\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962128 kubelet[3209]: I0213 19:51:28.962126 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-net\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962166 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff1006aa-f8f4-4883-9211-12c65eb2121c-clustermesh-secrets\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962198 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-lib-modules\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962232 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-etc-cni-netd\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962266 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-cgroup\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962302 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962412 kubelet[3209]: I0213 19:51:28.962334 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-xtables-lock\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962814 kubelet[3209]: I0213 19:51:28.962367 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cni-path\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962814 kubelet[3209]: I0213 19:51:28.962404 3209 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4vvw\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-kube-api-access-d4vvw\") pod \"ff1006aa-f8f4-4883-9211-12c65eb2121c\" (UID: \"ff1006aa-f8f4-4883-9211-12c65eb2121c\") "
Feb 13 19:51:28.962814 kubelet[3209]: I0213 19:51:28.962466 3209 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f2b075f-deae-4167-9b8f-ab09703863cf-cilium-config-path\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:28.962814 kubelet[3209]: I0213 19:51:28.962492 3209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2mkg7\" (UniqueName: \"kubernetes.io/projected/7f2b075f-deae-4167-9b8f-ab09703863cf-kube-api-access-2mkg7\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:28.965000 kubelet[3209]: I0213 19:51:28.963167 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965000 kubelet[3209]: I0213 19:51:28.963667 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965000 kubelet[3209]: I0213 19:51:28.963737 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965000 kubelet[3209]: I0213 19:51:28.963780 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965405 kubelet[3209]: I0213 19:51:28.965367 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965533 kubelet[3209]: I0213 19:51:28.965164 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965640 kubelet[3209]: I0213 19:51:28.965408 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.965779 kubelet[3209]: I0213 19:51:28.965752 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.968075 kubelet[3209]: I0213 19:51:28.965439 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.968234 kubelet[3209]: I0213 19:51:28.968115 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:51:28.971114 kubelet[3209]: I0213 19:51:28.971044 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff1006aa-f8f4-4883-9211-12c65eb2121c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 19:51:28.977210 kubelet[3209]: I0213 19:51:28.977147 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:51:28.977692 kubelet[3209]: I0213 19:51:28.977639 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-kube-api-access-d4vvw" (OuterVolumeSpecName: "kube-api-access-d4vvw") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "kube-api-access-d4vvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:51:28.979094 kubelet[3209]: I0213 19:51:28.979021 3209 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff1006aa-f8f4-4883-9211-12c65eb2121c" (UID: "ff1006aa-f8f4-4883-9211-12c65eb2121c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:51:29.063104 kubelet[3209]: I0213 19:51:29.063003 3209 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-lib-modules\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063104 kubelet[3209]: I0213 19:51:29.063091 3209 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-etc-cni-netd\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063104 kubelet[3209]: I0213 19:51:29.063122 3209 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-hubble-tls\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063152 3209 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-xtables-lock\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063176 3209 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-cgroup\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063208 3209 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cni-path\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063237 3209 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d4vvw\" (UniqueName: \"kubernetes.io/projected/ff1006aa-f8f4-4883-9211-12c65eb2121c-kube-api-access-d4vvw\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063261 3209 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-hostproc\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063286 3209 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-config-path\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063308 3209 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-cilium-run\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.063504 kubelet[3209]: I0213 19:51:29.063330 3209 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-net\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.064222 kubelet[3209]: I0213 19:51:29.063352 3209 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-bpf-maps\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.064222 kubelet[3209]: I0213 19:51:29.063377 3209 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff1006aa-f8f4-4883-9211-12c65eb2121c-host-proc-sys-kernel\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.064222 kubelet[3209]: I0213 19:51:29.063399 3209 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff1006aa-f8f4-4883-9211-12c65eb2121c-clustermesh-secrets\") on node \"ip-172-31-28-108\" DevicePath \"\""
Feb 13 19:51:29.240102 systemd[1]: Removed slice kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice - libcontainer container kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice.
Feb 13 19:51:29.240376 systemd[1]: kubepods-burstable-podff1006aa_f8f4_4883_9211_12c65eb2121c.slice: Consumed 17.092s CPU time.
Feb 13 19:51:29.244699 systemd[1]: Removed slice kubepods-besteffort-pod7f2b075f_deae_4167_9b8f_ab09703863cf.slice - libcontainer container kubepods-besteffort-pod7f2b075f_deae_4167_9b8f_ab09703863cf.slice.
Feb 13 19:51:29.423702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a-rootfs.mount: Deactivated successfully.
Feb 13 19:51:29.423896 systemd[1]: var-lib-kubelet-pods-ff1006aa\x2df8f4\x2d4883\x2d9211\x2d12c65eb2121c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:51:29.424074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd-rootfs.mount: Deactivated successfully.
Feb 13 19:51:29.424205 systemd[1]: var-lib-kubelet-pods-ff1006aa\x2df8f4\x2d4883\x2d9211\x2d12c65eb2121c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd4vvw.mount: Deactivated successfully.
Feb 13 19:51:29.424337 systemd[1]: var-lib-kubelet-pods-7f2b075f\x2ddeae\x2d4167\x2d9b8f\x2dab09703863cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mkg7.mount: Deactivated successfully.
Feb 13 19:51:29.424492 systemd[1]: var-lib-kubelet-pods-ff1006aa\x2df8f4\x2d4883\x2d9211\x2d12c65eb2121c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:51:29.430801 kubelet[3209]: E0213 19:51:29.430700 3209 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:51:29.822695 kubelet[3209]: I0213 19:51:29.821576 3209 scope.go:117] "RemoveContainer" containerID="5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861"
Feb 13 19:51:29.827466 containerd[2028]: time="2025-02-13T19:51:29.827391473Z" level=info msg="RemoveContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\""
Feb 13 19:51:29.839609 containerd[2028]: time="2025-02-13T19:51:29.838564314Z" level=info msg="RemoveContainer for \"5e9534ddb494abd1bdbb5fff3393a1ade331c7fec801941519aa5a97119d9861\" returns successfully"
Feb 13 19:51:29.840413 kubelet[3209]: I0213 19:51:29.840130 3209 scope.go:117] "RemoveContainer" containerID="46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc"
Feb 13 19:51:29.854586 containerd[2028]: time="2025-02-13T19:51:29.854221854Z" level=info msg="RemoveContainer for \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\""
Feb 13 19:51:29.866088 containerd[2028]: time="2025-02-13T19:51:29.865440150Z" level=info msg="RemoveContainer for \"46a82179bd37fb7c16f96ac009215838694b70e572d14a37a49de25db028b0cc\" returns successfully"
Feb 13 19:51:29.867430 kubelet[3209]: I0213 19:51:29.867366 3209 scope.go:117] "RemoveContainer" containerID="ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c"
Feb 13 19:51:29.878192 containerd[2028]: time="2025-02-13T19:51:29.877793322Z" level=info msg="RemoveContainer for \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\""
Feb 13 19:51:29.887800 containerd[2028]: time="2025-02-13T19:51:29.887715330Z" level=info msg="RemoveContainer for \"ae035ec16002c0f14786a20b73adbfb34706cb2a4eff5d06609686d57fa19d3c\" returns successfully"
Feb 13 19:51:29.888691 kubelet[3209]: I0213 19:51:29.888468 3209 scope.go:117] "RemoveContainer" containerID="54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619"
Feb 13 19:51:29.892594 containerd[2028]: time="2025-02-13T19:51:29.892496562Z" level=info msg="RemoveContainer for \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\""
Feb 13 19:51:29.900545 containerd[2028]: time="2025-02-13T19:51:29.900481134Z" level=info msg="RemoveContainer for \"54f9eab4b3a25819aef86b4ea478a254d22a64d4eeeab609df070ed50c87f619\" returns successfully"
Feb 13 19:51:29.902078 kubelet[3209]: I0213 19:51:29.902013 3209 scope.go:117] "RemoveContainer" containerID="cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777"
Feb 13 19:51:29.906854 containerd[2028]: time="2025-02-13T19:51:29.906799578Z" level=info msg="RemoveContainer for \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\""
Feb 13 19:51:29.912540 containerd[2028]: time="2025-02-13T19:51:29.912446142Z" level=info msg="RemoveContainer for \"cb0b6134fa7c71926573f58fa8b01a3dce6c313b5189754253f5b79a7a5c2777\" returns successfully"
Feb 13 19:51:30.304185 sshd[5092]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:30.311185 systemd[1]: sshd@25-172.31.28.108:22-139.178.89.65:44836.service: Deactivated successfully.
Feb 13 19:51:30.316357 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:51:30.316930 systemd[1]: session-26.scope: Consumed 2.120s CPU time.
Feb 13 19:51:30.321135 systemd-logind[2004]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:51:30.323267 systemd-logind[2004]: Removed session 26.
Feb 13 19:51:30.346534 systemd[1]: Started sshd@26-172.31.28.108:22-139.178.89.65:44844.service - OpenSSH per-connection server daemon (139.178.89.65:44844).
Feb 13 19:51:30.537692 sshd[5252]: Accepted publickey for core from 139.178.89.65 port 44844 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:30.541195 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:30.552824 systemd-logind[2004]: New session 27 of user core.
Feb 13 19:51:30.557359 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:51:30.862326 ntpd[1997]: Deleting interface #12 lxc_health, fe80::f8db:a5ff:feae:2d56%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:51:30.863086 ntpd[1997]: 13 Feb 19:51:30 ntpd[1997]: Deleting interface #12 lxc_health, fe80::f8db:a5ff:feae:2d56%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Feb 13 19:51:31.229016 kubelet[3209]: I0213 19:51:31.228269 3209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2b075f-deae-4167-9b8f-ab09703863cf" path="/var/lib/kubelet/pods/7f2b075f-deae-4167-9b8f-ab09703863cf/volumes"
Feb 13 19:51:31.232095 kubelet[3209]: I0213 19:51:31.231063 3209 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" path="/var/lib/kubelet/pods/ff1006aa-f8f4-4883-9211-12c65eb2121c/volumes"
Feb 13 19:51:32.470494 sshd[5252]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:32.487779 systemd[1]: sshd@26-172.31.28.108:22-139.178.89.65:44844.service: Deactivated successfully.
Feb 13 19:51:32.500372 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:51:32.501391 systemd[1]: session-27.scope: Consumed 1.694s CPU time.
Feb 13 19:51:32.508317 systemd-logind[2004]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:51:32.513777 kubelet[3209]: E0213 19:51:32.513711 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7f2b075f-deae-4167-9b8f-ab09703863cf" containerName="cilium-operator"
Feb 13 19:51:32.513777 kubelet[3209]: E0213 19:51:32.513766 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="mount-cgroup"
Feb 13 19:51:32.513777 kubelet[3209]: E0213 19:51:32.513785 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="clean-cilium-state"
Feb 13 19:51:32.522513 kubelet[3209]: E0213 19:51:32.513801 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="cilium-agent"
Feb 13 19:51:32.522513 kubelet[3209]: E0213 19:51:32.513818 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="apply-sysctl-overwrites"
Feb 13 19:51:32.522513 kubelet[3209]: E0213 19:51:32.513834 3209 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="mount-bpf-fs"
Feb 13 19:51:32.522513 kubelet[3209]: I0213 19:51:32.513884 3209 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2b075f-deae-4167-9b8f-ab09703863cf" containerName="cilium-operator"
Feb 13 19:51:32.522513 kubelet[3209]: I0213 19:51:32.513903 3209 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1006aa-f8f4-4883-9211-12c65eb2121c" containerName="cilium-agent"
Feb 13 19:51:32.527424 kubelet[3209]: W0213 19:51:32.526847 3209 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-108" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-108' and this object
Feb 13 19:51:32.528722 kubelet[3209]: E0213 19:51:32.527448 3209 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-28-108\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-28-108' and this object" logger="UnhandledError"
Feb 13 19:51:32.534298 systemd-logind[2004]: Removed session 27.
Feb 13 19:51:32.548201 systemd[1]: Started sshd@27-172.31.28.108:22-139.178.89.65:44846.service - OpenSSH per-connection server daemon (139.178.89.65:44846).
Feb 13 19:51:32.579707 systemd[1]: Created slice kubepods-burstable-pode6423e8a_64ff_420b_86e1_d3224b6f3dcc.slice - libcontainer container kubepods-burstable-pode6423e8a_64ff_420b_86e1_d3224b6f3dcc.slice.
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586251 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-xtables-lock\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586345 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-host-proc-sys-kernel\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586400 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-bpf-maps\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586451 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-cilium-cgroup\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586500 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-etc-cni-netd\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.586631 kubelet[3209]: I0213 19:51:32.586546 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-host-proc-sys-net\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586583 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-hubble-tls\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586619 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-cilium-run\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586661 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-clustermesh-secrets\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586700 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-lib-modules\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586740 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jlvn\" (UniqueName: \"kubernetes.io/projected/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-kube-api-access-9jlvn\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587253 kubelet[3209]: I0213 19:51:32.586781 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-cilium-config-path\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587558 kubelet[3209]: I0213 19:51:32.586819 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-cilium-ipsec-secrets\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587558 kubelet[3209]: I0213 19:51:32.586861 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-hostproc\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.587558 kubelet[3209]: I0213 19:51:32.586900 3209 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e6423e8a-64ff-420b-86e1-d3224b6f3dcc-cni-path\") pod \"cilium-z672s\" (UID: \"e6423e8a-64ff-420b-86e1-d3224b6f3dcc\") " pod="kube-system/cilium-z672s"
Feb 13 19:51:32.799380 sshd[5264]: Accepted publickey for core from 139.178.89.65 port 44846 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:32.803626 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:32.819132 systemd-logind[2004]: New session 28 of user core.
Feb 13 19:51:32.829559 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:51:32.843777 kubelet[3209]: I0213 19:51:32.843150 3209 setters.go:600] "Node became not ready" node="ip-172-31-28-108" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:51:32Z","lastTransitionTime":"2025-02-13T19:51:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:51:32.965190 sshd[5264]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:32.973508 systemd-logind[2004]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:51:32.973653 systemd[1]: sshd@27-172.31.28.108:22-139.178.89.65:44846.service: Deactivated successfully.
Feb 13 19:51:32.979694 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:51:32.985237 systemd-logind[2004]: Removed session 28.
Feb 13 19:51:33.005580 systemd[1]: Started sshd@28-172.31.28.108:22-139.178.89.65:44850.service - OpenSSH per-connection server daemon (139.178.89.65:44850).
Feb 13 19:51:33.198591 sshd[5275]: Accepted publickey for core from 139.178.89.65 port 44850 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:33.202183 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:33.213034 systemd-logind[2004]: New session 29 of user core.
Feb 13 19:51:33.222702 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:51:33.801813 containerd[2028]: time="2025-02-13T19:51:33.801622197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z672s,Uid:e6423e8a-64ff-420b-86e1-d3224b6f3dcc,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:33.846175 containerd[2028]: time="2025-02-13T19:51:33.845861205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:33.846547 containerd[2028]: time="2025-02-13T19:51:33.846023493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:33.846547 containerd[2028]: time="2025-02-13T19:51:33.846223653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:33.847068 containerd[2028]: time="2025-02-13T19:51:33.846854277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:33.889436 systemd[1]: Started cri-containerd-83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69.scope - libcontainer container 83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69.
Feb 13 19:51:33.934807 containerd[2028]: time="2025-02-13T19:51:33.934571770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z672s,Uid:e6423e8a-64ff-420b-86e1-d3224b6f3dcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\""
Feb 13 19:51:33.940907 containerd[2028]: time="2025-02-13T19:51:33.940716454Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:51:33.961175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409129696.mount: Deactivated successfully.
Feb 13 19:51:33.968816 containerd[2028]: time="2025-02-13T19:51:33.968642002Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd\""
Feb 13 19:51:33.970687 containerd[2028]: time="2025-02-13T19:51:33.970549126Z" level=info msg="StartContainer for \"9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd\""
Feb 13 19:51:34.028518 systemd[1]: Started cri-containerd-9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd.scope - libcontainer container 9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd.
Feb 13 19:51:34.087780 containerd[2028]: time="2025-02-13T19:51:34.087432691Z" level=info msg="StartContainer for \"9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd\" returns successfully"
Feb 13 19:51:34.106355 systemd[1]: cri-containerd-9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd.scope: Deactivated successfully.
Feb 13 19:51:34.176448 containerd[2028]: time="2025-02-13T19:51:34.176323423Z" level=info msg="shim disconnected" id=9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd namespace=k8s.io
Feb 13 19:51:34.176766 containerd[2028]: time="2025-02-13T19:51:34.176428891Z" level=warning msg="cleaning up after shim disconnected" id=9d2d7e485cea3dacbba790155b264a35b84d3610eef25a0da4d136256f6d74cd namespace=k8s.io
Feb 13 19:51:34.176766 containerd[2028]: time="2025-02-13T19:51:34.176475667Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:34.432280 kubelet[3209]: E0213 19:51:34.432061 3209 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:51:34.862018 containerd[2028]: time="2025-02-13T19:51:34.861597970Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:51:34.888585 containerd[2028]: time="2025-02-13T19:51:34.888380495Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2\""
Feb 13 19:51:34.889724 containerd[2028]: time="2025-02-13T19:51:34.889659515Z" level=info msg="StartContainer for \"e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2\""
Feb 13 19:51:34.963473 systemd[1]: Started cri-containerd-e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2.scope - libcontainer container e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2.
Feb 13 19:51:35.024112 containerd[2028]: time="2025-02-13T19:51:35.023649895Z" level=info msg="StartContainer for \"e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2\" returns successfully"
Feb 13 19:51:35.038391 systemd[1]: cri-containerd-e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2.scope: Deactivated successfully.
Feb 13 19:51:35.089420 containerd[2028]: time="2025-02-13T19:51:35.089123924Z" level=info msg="shim disconnected" id=e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2 namespace=k8s.io
Feb 13 19:51:35.089420 containerd[2028]: time="2025-02-13T19:51:35.089289632Z" level=warning msg="cleaning up after shim disconnected" id=e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2 namespace=k8s.io
Feb 13 19:51:35.089420 containerd[2028]: time="2025-02-13T19:51:35.089332088Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:35.818933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7ec690dc6e539c6fc5f0a45f23f33a1370676e4aeed3346388642f53a2a41c2-rootfs.mount: Deactivated successfully.
Feb 13 19:51:35.868910 containerd[2028]: time="2025-02-13T19:51:35.868815395Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:51:35.898045 containerd[2028]: time="2025-02-13T19:51:35.897732492Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f\""
Feb 13 19:51:35.899715 containerd[2028]: time="2025-02-13T19:51:35.898906500Z" level=info msg="StartContainer for \"ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f\""
Feb 13 19:51:35.966309 systemd[1]: Started cri-containerd-ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f.scope - libcontainer container ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f.
Feb 13 19:51:36.025214 containerd[2028]: time="2025-02-13T19:51:36.024955004Z" level=info msg="StartContainer for \"ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f\" returns successfully"
Feb 13 19:51:36.027384 systemd[1]: cri-containerd-ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f.scope: Deactivated successfully.
Feb 13 19:51:36.076471 containerd[2028]: time="2025-02-13T19:51:36.076140093Z" level=info msg="shim disconnected" id=ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f namespace=k8s.io
Feb 13 19:51:36.076471 containerd[2028]: time="2025-02-13T19:51:36.076319709Z" level=warning msg="cleaning up after shim disconnected" id=ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f namespace=k8s.io
Feb 13 19:51:36.076471 containerd[2028]: time="2025-02-13T19:51:36.076364817Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:36.818778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef08a8855ffec69213a37f3e112cadf0b2ba5a9099223074c067462e7221541f-rootfs.mount: Deactivated successfully.
Feb 13 19:51:36.881102 containerd[2028]: time="2025-02-13T19:51:36.880334785Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:51:36.914508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913688971.mount: Deactivated successfully.
Feb 13 19:51:36.915827 containerd[2028]: time="2025-02-13T19:51:36.915424225Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc\""
Feb 13 19:51:36.921634 containerd[2028]: time="2025-02-13T19:51:36.920350213Z" level=info msg="StartContainer for \"39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc\""
Feb 13 19:51:36.992296 systemd[1]: Started cri-containerd-39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc.scope - libcontainer container 39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc.
Feb 13 19:51:37.036658 systemd[1]: cri-containerd-39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc.scope: Deactivated successfully.
Feb 13 19:51:37.040341 containerd[2028]: time="2025-02-13T19:51:37.040272513Z" level=info msg="StartContainer for \"39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc\" returns successfully"
Feb 13 19:51:37.083230 containerd[2028]: time="2025-02-13T19:51:37.082875874Z" level=info msg="shim disconnected" id=39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc namespace=k8s.io
Feb 13 19:51:37.083230 containerd[2028]: time="2025-02-13T19:51:37.082992142Z" level=warning msg="cleaning up after shim disconnected" id=39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc namespace=k8s.io
Feb 13 19:51:37.083230 containerd[2028]: time="2025-02-13T19:51:37.083018098Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:37.122672 containerd[2028]: time="2025-02-13T19:51:37.122584462Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:51:37.818835 systemd[1]: run-containerd-runc-k8s.io-39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc-runc.qrsLOT.mount: Deactivated successfully.
Feb 13 19:51:37.819081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39aedecdd913f999672e6ed9d33171aa164810feae7d21e49fbfd5b8012090bc-rootfs.mount: Deactivated successfully.
Feb 13 19:51:37.891330 containerd[2028]: time="2025-02-13T19:51:37.890647286Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:51:37.928828 containerd[2028]: time="2025-02-13T19:51:37.928740938Z" level=info msg="CreateContainer within sandbox \"83ddc1fc882ec0f5bab12a767c0822d53ccc832396a0cf8355b25236a5222c69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de\""
Feb 13 19:51:37.930175 containerd[2028]: time="2025-02-13T19:51:37.930087626Z" level=info msg="StartContainer for \"8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de\""
Feb 13 19:51:37.986308 systemd[1]: Started cri-containerd-8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de.scope - libcontainer container 8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de.
Feb 13 19:51:38.056540 containerd[2028]: time="2025-02-13T19:51:38.056426074Z" level=info msg="StartContainer for \"8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de\" returns successfully"
Feb 13 19:51:38.935293 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:51:38.954518 kubelet[3209]: I0213 19:51:38.953799 3209 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z672s" podStartSLOduration=6.953773107 podStartE2EDuration="6.953773107s" podCreationTimestamp="2025-02-13 19:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:38.952099095 +0000 UTC m=+120.120472162" watchObservedRunningTime="2025-02-13 19:51:38.953773107 +0000 UTC m=+120.122146162"
Feb 13 19:51:39.173060 containerd[2028]: time="2025-02-13T19:51:39.172690128Z" level=info msg="StopPodSandbox for \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\""
Feb 13 19:51:39.173060 containerd[2028]: time="2025-02-13T19:51:39.172835304Z" level=info msg="TearDown network for sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" successfully"
Feb 13 19:51:39.173060 containerd[2028]: time="2025-02-13T19:51:39.172859376Z" level=info msg="StopPodSandbox for \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" returns successfully"
Feb 13 19:51:39.176074 containerd[2028]: time="2025-02-13T19:51:39.175135920Z" level=info msg="RemovePodSandbox for \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\""
Feb 13 19:51:39.176074 containerd[2028]: time="2025-02-13T19:51:39.175212576Z" level=info msg="Forcibly stopping sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\""
Feb 13 19:51:39.176074 containerd[2028]: time="2025-02-13T19:51:39.175320888Z" level=info msg="TearDown network for sandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" successfully"
Feb 13 19:51:39.184331 containerd[2028]: time="2025-02-13T19:51:39.184246968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:39.184708 containerd[2028]: time="2025-02-13T19:51:39.184666164Z" level=info msg="RemovePodSandbox \"69e1a671e09e58c95a1f8526025c7bd88c66bba622b9a1cb0a952a476d1ccbbd\" returns successfully"
Feb 13 19:51:39.187265 containerd[2028]: time="2025-02-13T19:51:39.186129888Z" level=info msg="StopPodSandbox for \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\""
Feb 13 19:51:39.187265 containerd[2028]: time="2025-02-13T19:51:39.187151196Z" level=info msg="TearDown network for sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" successfully"
Feb 13 19:51:39.187265 containerd[2028]: time="2025-02-13T19:51:39.187186428Z" level=info msg="StopPodSandbox for \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" returns successfully"
Feb 13 19:51:39.189850 containerd[2028]: time="2025-02-13T19:51:39.189254652Z" level=info msg="RemovePodSandbox for \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\""
Feb 13 19:51:39.189850 containerd[2028]: time="2025-02-13T19:51:39.189342000Z" level=info msg="Forcibly stopping sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\""
Feb 13 19:51:39.189850 containerd[2028]: time="2025-02-13T19:51:39.189647256Z" level=info msg="TearDown network for sandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" successfully"
Feb 13 19:51:39.199031 containerd[2028]: time="2025-02-13T19:51:39.197335440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:39.199031 containerd[2028]: time="2025-02-13T19:51:39.197462928Z" level=info msg="RemovePodSandbox \"8b15439b84014c3b65a75a0c04b2b46ec07f709948176a0ccbf2701318ee9a9a\" returns successfully"
Feb 13 19:51:42.022521 systemd[1]: run-containerd-runc-k8s.io-8d7b66dfd00e5f40d67efd0d95d65a4c0aa7a723b6b66485d207cc138dd371de-runc.u3BHxI.mount: Deactivated successfully.
Feb 13 19:51:43.630039 (udev-worker)[6117]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:43.638339 systemd-networkd[1932]: lxc_health: Link UP
Feb 13 19:51:43.651107 (udev-worker)[6119]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:43.663626 systemd-networkd[1932]: lxc_health: Gained carrier
Feb 13 19:51:45.640429 systemd-networkd[1932]: lxc_health: Gained IPv6LL
Feb 13 19:51:47.862328 ntpd[1997]: Listen normally on 15 lxc_health [fe80::2c64:b1ff:fed5:321d%14]:123
Feb 13 19:51:47.862897 ntpd[1997]: 13 Feb 19:51:47 ntpd[1997]: Listen normally on 15 lxc_health [fe80::2c64:b1ff:fed5:321d%14]:123
Feb 13 19:51:49.163662 kubelet[3209]: E0213 19:51:49.163186 3209 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40110->127.0.0.1:45693: write tcp 127.0.0.1:40110->127.0.0.1:45693: write: broken pipe
Feb 13 19:51:49.204073 sshd[5275]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:49.216902 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:51:49.224156 systemd-logind[2004]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:51:49.225630 systemd[1]: sshd@28-172.31.28.108:22-139.178.89.65:44850.service: Deactivated successfully.
Feb 13 19:51:49.237951 systemd-logind[2004]: Removed session 29.
Feb 13 19:52:03.248168 systemd[1]: cri-containerd-a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4.scope: Deactivated successfully.
Feb 13 19:52:03.248754 systemd[1]: cri-containerd-a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4.scope: Consumed 5.579s CPU time, 17.7M memory peak, 0B memory swap peak.
Feb 13 19:52:03.299254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4-rootfs.mount: Deactivated successfully.
Feb 13 19:52:03.305021 containerd[2028]: time="2025-02-13T19:52:03.304837896Z" level=info msg="shim disconnected" id=a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4 namespace=k8s.io
Feb 13 19:52:03.306682 containerd[2028]: time="2025-02-13T19:52:03.304932444Z" level=warning msg="cleaning up after shim disconnected" id=a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4 namespace=k8s.io
Feb 13 19:52:03.306682 containerd[2028]: time="2025-02-13T19:52:03.305124744Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:03.334456 containerd[2028]: time="2025-02-13T19:52:03.334318428Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:52:04.004513 kubelet[3209]: I0213 19:52:04.004432 3209 scope.go:117] "RemoveContainer" containerID="a8859362e2e819112282f8726175446edf1200a1ce359650e3d881e2136f08a4"
Feb 13 19:52:04.008830 containerd[2028]: time="2025-02-13T19:52:04.008505767Z" level=info msg="CreateContainer within sandbox \"2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:52:04.038567 containerd[2028]: time="2025-02-13T19:52:04.038365667Z" level=info msg="CreateContainer within sandbox \"2d09d5efeced69ee9df7c52f8a518566e2c8c980270d7018962adc9bced4d8cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a39b0c75c39c5bac421b5dcdcdb39eaff67c9bd84c83faf1f8b090595eecd664\""
Feb 13 19:52:04.040097 containerd[2028]: time="2025-02-13T19:52:04.039058331Z" level=info msg="StartContainer for \"a39b0c75c39c5bac421b5dcdcdb39eaff67c9bd84c83faf1f8b090595eecd664\""
Feb 13 19:52:04.097385 systemd[1]: Started cri-containerd-a39b0c75c39c5bac421b5dcdcdb39eaff67c9bd84c83faf1f8b090595eecd664.scope - libcontainer container a39b0c75c39c5bac421b5dcdcdb39eaff67c9bd84c83faf1f8b090595eecd664.
Feb 13 19:52:04.189997 containerd[2028]: time="2025-02-13T19:52:04.189823860Z" level=info msg="StartContainer for \"a39b0c75c39c5bac421b5dcdcdb39eaff67c9bd84c83faf1f8b090595eecd664\" returns successfully"
Feb 13 19:52:08.197600 systemd[1]: cri-containerd-92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6.scope: Deactivated successfully.
Feb 13 19:52:08.198958 systemd[1]: cri-containerd-92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6.scope: Consumed 3.539s CPU time, 15.4M memory peak, 0B memory swap peak.
Feb 13 19:52:08.245350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6-rootfs.mount: Deactivated successfully.
Feb 13 19:52:08.262182 containerd[2028]: time="2025-02-13T19:52:08.262053844Z" level=info msg="shim disconnected" id=92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6 namespace=k8s.io
Feb 13 19:52:08.262182 containerd[2028]: time="2025-02-13T19:52:08.262157068Z" level=warning msg="cleaning up after shim disconnected" id=92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6 namespace=k8s.io
Feb 13 19:52:08.262182 containerd[2028]: time="2025-02-13T19:52:08.262180048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:09.031043 kubelet[3209]: I0213 19:52:09.030849 3209 scope.go:117] "RemoveContainer" containerID="92b966d0537ad5aceb83fc40ae2c7b315477893b04d3b9017d63ae0085b5dca6"
Feb 13 19:52:09.034359 containerd[2028]: time="2025-02-13T19:52:09.034289224Z" level=info msg="CreateContainer within sandbox \"d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:52:09.066314 containerd[2028]: time="2025-02-13T19:52:09.066150544Z" level=info msg="CreateContainer within sandbox \"d8eff5944854a9d2d974a69279dbb2866610548a73a8227d56b1f2d11236b71c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d2ac258369d9320f9fd1158a2dbb6f2d421109d928e839088b1c66a6be3a01aa\""
Feb 13 19:52:09.067711 containerd[2028]: time="2025-02-13T19:52:09.067251964Z" level=info msg="StartContainer for \"d2ac258369d9320f9fd1158a2dbb6f2d421109d928e839088b1c66a6be3a01aa\""
Feb 13 19:52:09.132411 systemd[1]: Started cri-containerd-d2ac258369d9320f9fd1158a2dbb6f2d421109d928e839088b1c66a6be3a01aa.scope - libcontainer container d2ac258369d9320f9fd1158a2dbb6f2d421109d928e839088b1c66a6be3a01aa.
Feb 13 19:52:09.218225 containerd[2028]: time="2025-02-13T19:52:09.218112713Z" level=info msg="StartContainer for \"d2ac258369d9320f9fd1158a2dbb6f2d421109d928e839088b1c66a6be3a01aa\" returns successfully"
Feb 13 19:52:11.692454 kubelet[3209]: E0213 19:52:11.691084 3209 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-108?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"