Mar 17 17:35:18.170362 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 17:35:18.170407 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Mar 17 16:11:40 -00 2025
Mar 17 17:35:18.170432 kernel: KASLR disabled due to lack of seed
Mar 17 17:35:18.170448 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:35:18.170463 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Mar 17 17:35:18.170478 kernel: secureboot: Secure boot disabled
Mar 17 17:35:18.170495 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:35:18.170510 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 17:35:18.170525 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 17:35:18.170540 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:35:18.170560 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 17:35:18.170576 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:35:18.170590 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 17:35:18.170606 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 17:35:18.170623 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 17:35:18.170644 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:35:18.170661 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 17:35:18.170676 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 17:35:18.170692 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 17:35:18.170708 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 17:35:18.170724 kernel: printk: bootconsole [uart0] enabled
Mar 17 17:35:18.170739 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:35:18.170756 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:35:18.170771 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 17 17:35:18.170787 kernel: Zone ranges:
Mar 17 17:35:18.170803 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:35:18.170823 kernel: DMA32 empty
Mar 17 17:35:18.170839 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 17:35:18.170855 kernel: Movable zone start for each node
Mar 17 17:35:18.170871 kernel: Early memory node ranges
Mar 17 17:35:18.170887 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 17:35:18.170903 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 17:35:18.170919 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 17:35:18.170935 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 17:35:18.170950 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 17:35:18.170966 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 17:35:18.170982 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 17:35:18.170997 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 17:35:18.171017 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:35:18.171034 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 17:35:18.171057 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:35:18.171074 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 17:35:18.171090 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:35:18.171111 kernel: psci: Trusted OS migration not required
Mar 17 17:35:18.171158 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:35:18.171177 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:35:18.171194 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:35:18.171211 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:35:18.171228 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:35:18.171245 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:35:18.171261 kernel: CPU features: detected: Spectre-v2
Mar 17 17:35:18.171291 kernel: CPU features: detected: Spectre-v3a
Mar 17 17:35:18.171310 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:35:18.171327 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 17:35:18.171344 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 17:35:18.171367 kernel: alternatives: applying boot alternatives
Mar 17 17:35:18.171387 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:35:18.171405 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:35:18.171422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:35:18.171438 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:35:18.171455 kernel: Fallback order for Node 0: 0
Mar 17 17:35:18.171472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 17:35:18.171488 kernel: Policy zone: Normal
Mar 17 17:35:18.171505 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:35:18.171521 kernel: software IO TLB: area num 2.
Mar 17 17:35:18.171543 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 17:35:18.171560 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Mar 17 17:35:18.171577 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:35:18.171593 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:35:18.171611 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:35:18.171628 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:35:18.171645 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:35:18.171662 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:35:18.171695 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:35:18.171717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:35:18.171735 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:35:18.171757 kernel: GICv3: 96 SPIs implemented
Mar 17 17:35:18.171774 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:35:18.171790 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:35:18.171807 kernel: GICv3: GICv3 features: 16 PPIs
Mar 17 17:35:18.171824 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 17:35:18.171840 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 17:35:18.171857 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:35:18.171874 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:35:18.171890 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 17 17:35:18.171907 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 17:35:18.171924 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 17 17:35:18.171941 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:35:18.171962 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 17:35:18.171979 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 17:35:18.171996 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 17:35:18.172013 kernel: Console: colour dummy device 80x25
Mar 17 17:35:18.172030 kernel: printk: console [tty1] enabled
Mar 17 17:35:18.172047 kernel: ACPI: Core revision 20230628
Mar 17 17:35:18.172065 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 17:35:18.172082 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:35:18.172099 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:35:18.172116 kernel: landlock: Up and running.
Mar 17 17:35:18.182072 kernel: SELinux: Initializing.
Mar 17 17:35:18.182094 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:35:18.182112 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:35:18.182164 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:35:18.182184 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:35:18.182202 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:35:18.182221 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:35:18.182238 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 17:35:18.182264 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 17:35:18.182282 kernel: Remapping and enabling EFI services.
Mar 17 17:35:18.182299 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:35:18.182316 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:35:18.182333 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 17:35:18.182350 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 17 17:35:18.182368 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 17:35:18.182384 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:35:18.182401 kernel: SMP: Total of 2 processors activated.
Mar 17 17:35:18.182418 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:35:18.182440 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 17:35:18.182457 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:35:18.182486 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:35:18.182508 kernel: alternatives: applying system-wide alternatives
Mar 17 17:35:18.182525 kernel: devtmpfs: initialized
Mar 17 17:35:18.182544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:35:18.182561 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:35:18.182579 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:35:18.182597 kernel: SMBIOS 3.0.0 present.
Mar 17 17:35:18.182619 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 17:35:18.182637 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:35:18.182655 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:35:18.182673 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:35:18.182691 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:35:18.182709 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:35:18.182727 kernel: audit: type=2000 audit(0.219:1): state=initialized audit_enabled=0 res=1
Mar 17 17:35:18.182749 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:35:18.182767 kernel: cpuidle: using governor menu
Mar 17 17:35:18.182785 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:35:18.182803 kernel: ASID allocator initialised with 65536 entries
Mar 17 17:35:18.182821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:35:18.182839 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:35:18.182857 kernel: Modules: 17760 pages in range for non-PLT usage
Mar 17 17:35:18.182875 kernel: Modules: 509280 pages in range for PLT usage
Mar 17 17:35:18.182893 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:35:18.182915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:35:18.182933 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:35:18.182951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:35:18.182969 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:35:18.182987 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:35:18.183005 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:35:18.183023 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:35:18.183041 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:35:18.183058 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:35:18.183081 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:35:18.183099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:35:18.183117 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:35:18.183154 kernel: ACPI: Interpreter enabled
Mar 17 17:35:18.183173 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:35:18.183192 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:35:18.183210 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 17:35:18.185719 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:35:18.186017 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:35:18.186367 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:35:18.186572 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 17:35:18.186769 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 17:35:18.186794 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 17:35:18.186812 kernel: acpiphp: Slot [1] registered
Mar 17 17:35:18.186831 kernel: acpiphp: Slot [2] registered
Mar 17 17:35:18.186848 kernel: acpiphp: Slot [3] registered
Mar 17 17:35:18.186875 kernel: acpiphp: Slot [4] registered
Mar 17 17:35:18.186893 kernel: acpiphp: Slot [5] registered
Mar 17 17:35:18.186911 kernel: acpiphp: Slot [6] registered
Mar 17 17:35:18.186929 kernel: acpiphp: Slot [7] registered
Mar 17 17:35:18.186946 kernel: acpiphp: Slot [8] registered
Mar 17 17:35:18.186964 kernel: acpiphp: Slot [9] registered
Mar 17 17:35:18.186982 kernel: acpiphp: Slot [10] registered
Mar 17 17:35:18.186999 kernel: acpiphp: Slot [11] registered
Mar 17 17:35:18.187017 kernel: acpiphp: Slot [12] registered
Mar 17 17:35:18.187035 kernel: acpiphp: Slot [13] registered
Mar 17 17:35:18.187057 kernel: acpiphp: Slot [14] registered
Mar 17 17:35:18.187075 kernel: acpiphp: Slot [15] registered
Mar 17 17:35:18.187092 kernel: acpiphp: Slot [16] registered
Mar 17 17:35:18.187110 kernel: acpiphp: Slot [17] registered
Mar 17 17:35:18.187146 kernel: acpiphp: Slot [18] registered
Mar 17 17:35:18.187166 kernel: acpiphp: Slot [19] registered
Mar 17 17:35:18.187184 kernel: acpiphp: Slot [20] registered
Mar 17 17:35:18.187202 kernel: acpiphp: Slot [21] registered
Mar 17 17:35:18.187220 kernel: acpiphp: Slot [22] registered
Mar 17 17:35:18.187243 kernel: acpiphp: Slot [23] registered
Mar 17 17:35:18.187261 kernel: acpiphp: Slot [24] registered
Mar 17 17:35:18.187279 kernel: acpiphp: Slot [25] registered
Mar 17 17:35:18.187296 kernel: acpiphp: Slot [26] registered
Mar 17 17:35:18.187314 kernel: acpiphp: Slot [27] registered
Mar 17 17:35:18.187331 kernel: acpiphp: Slot [28] registered
Mar 17 17:35:18.187349 kernel: acpiphp: Slot [29] registered
Mar 17 17:35:18.187367 kernel: acpiphp: Slot [30] registered
Mar 17 17:35:18.187384 kernel: acpiphp: Slot [31] registered
Mar 17 17:35:18.187401 kernel: PCI host bridge to bus 0000:00
Mar 17 17:35:18.187624 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 17:35:18.187875 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:35:18.188069 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:35:18.188363 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 17:35:18.188607 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 17:35:18.188869 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 17:35:18.189097 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 17:35:18.189395 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:35:18.189621 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 17:35:18.189830 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:35:18.190065 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:35:18.192219 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 17:35:18.192458 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 17:35:18.192680 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 17:35:18.192889 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:35:18.193097 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 17:35:18.194198 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 17:35:18.194483 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 17:35:18.194695 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 17:35:18.194908 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 17:35:18.195107 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 17:35:18.196446 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:35:18.196632 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:35:18.196658 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:35:18.196691 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:35:18.196711 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:35:18.196730 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:35:18.196748 kernel: iommu: Default domain type: Translated
Mar 17 17:35:18.196775 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:35:18.196794 kernel: efivars: Registered efivars operations
Mar 17 17:35:18.196811 kernel: vgaarb: loaded
Mar 17 17:35:18.196829 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:35:18.196847 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:35:18.196865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:35:18.196883 kernel: pnp: PnP ACPI init
Mar 17 17:35:18.197106 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 17:35:18.197161 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:35:18.197183 kernel: NET: Registered PF_INET protocol family
Mar 17 17:35:18.197201 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:35:18.197232 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:35:18.197253 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:35:18.197271 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:35:18.197289 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:35:18.197308 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:35:18.197326 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:35:18.197351 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:35:18.197369 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:35:18.197387 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:35:18.197405 kernel: kvm [1]: HYP mode not available
Mar 17 17:35:18.197438 kernel: Initialise system trusted keyrings
Mar 17 17:35:18.197457 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:35:18.197475 kernel: Key type asymmetric registered
Mar 17 17:35:18.197493 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:35:18.197510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:35:18.197534 kernel: io scheduler mq-deadline registered
Mar 17 17:35:18.197561 kernel: io scheduler kyber registered
Mar 17 17:35:18.197593 kernel: io scheduler bfq registered
Mar 17 17:35:18.197817 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 17:35:18.197845 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:35:18.197863 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:35:18.197881 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 17:35:18.197899 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 17:35:18.197923 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:35:18.197942 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:35:18.201449 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 17:35:18.201497 kernel: printk: console [ttyS0] disabled
Mar 17 17:35:18.201517 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 17:35:18.201535 kernel: printk: console [ttyS0] enabled
Mar 17 17:35:18.201554 kernel: printk: bootconsole [uart0] disabled
Mar 17 17:35:18.201571 kernel: thunder_xcv, ver 1.0
Mar 17 17:35:18.201589 kernel: thunder_bgx, ver 1.0
Mar 17 17:35:18.201607 kernel: nicpf, ver 1.0
Mar 17 17:35:18.201636 kernel: nicvf, ver 1.0
Mar 17 17:35:18.201877 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:35:18.202075 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:35:17 UTC (1742232917)
Mar 17 17:35:18.202101 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:35:18.202154 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 17:35:18.202180 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:35:18.202199 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:35:18.202226 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:35:18.202260 kernel: Segment Routing with IPv6
Mar 17 17:35:18.202278 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:35:18.202297 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:35:18.202316 kernel: Key type dns_resolver registered
Mar 17 17:35:18.202334 kernel: registered taskstats version 1
Mar 17 17:35:18.202352 kernel: Loading compiled-in X.509 certificates
Mar 17 17:35:18.202370 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: f4ff2820cf7379ce82b759137d15b536f0a99b51'
Mar 17 17:35:18.202388 kernel: Key type .fscrypt registered
Mar 17 17:35:18.202406 kernel: Key type fscrypt-provisioning registered
Mar 17 17:35:18.202430 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:35:18.202449 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:35:18.202467 kernel: ima: No architecture policies found
Mar 17 17:35:18.202485 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:35:18.202503 kernel: clk: Disabling unused clocks
Mar 17 17:35:18.202521 kernel: Freeing unused kernel memory: 38336K
Mar 17 17:35:18.202539 kernel: Run /init as init process
Mar 17 17:35:18.202557 kernel: with arguments:
Mar 17 17:35:18.202575 kernel: /init
Mar 17 17:35:18.202597 kernel: with environment:
Mar 17 17:35:18.202615 kernel: HOME=/
Mar 17 17:35:18.202633 kernel: TERM=linux
Mar 17 17:35:18.202650 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:35:18.202670 systemd[1]: Successfully made /usr/ read-only.
Mar 17 17:35:18.202695 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:35:18.202716 systemd[1]: Detected virtualization amazon.
Mar 17 17:35:18.202740 systemd[1]: Detected architecture arm64.
Mar 17 17:35:18.202759 systemd[1]: Running in initrd.
Mar 17 17:35:18.202779 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:35:18.202799 systemd[1]: Hostname set to .
Mar 17 17:35:18.202818 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:35:18.202838 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:35:18.202857 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:35:18.202877 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:35:18.202898 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:35:18.202922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:35:18.202942 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:35:18.202964 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:35:18.202985 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:35:18.203005 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:35:18.203025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:35:18.203050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:35:18.203069 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:35:18.203089 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:35:18.203164 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:35:18.203190 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:35:18.203210 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:35:18.203231 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:35:18.203251 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:35:18.203270 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 17 17:35:18.203297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:35:18.203318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:35:18.203337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:35:18.203357 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:35:18.203377 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:35:18.203397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:35:18.203416 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:35:18.203436 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:35:18.203461 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:35:18.203481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:35:18.203500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:18.203520 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:35:18.203540 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:35:18.203560 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:35:18.203632 systemd-journald[251]: Collecting audit messages is disabled.
Mar 17 17:35:18.203676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:35:18.203718 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:35:18.203745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:18.203764 kernel: Bridge firewalling registered
Mar 17 17:35:18.203784 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:35:18.203803 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:18.203824 systemd-journald[251]: Journal started
Mar 17 17:35:18.203862 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2299d08a38231e62e82d1aee1f538b) is 8M, max 75.3M, 67.3M free.
Mar 17 17:35:18.139892 systemd-modules-load[252]: Inserted module 'overlay'
Mar 17 17:35:18.181118 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 17 17:35:18.236457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:35:18.236536 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:35:18.219369 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:35:18.228433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:35:18.233431 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:35:18.270771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:35:18.276339 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:35:18.288806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:35:18.299482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:35:18.303654 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:18.314436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:35:18.358886 dracut-cmdline[289]: dracut-dracut-053
Mar 17 17:35:18.368205 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f8298a09e890fc732131b7281e24befaf65b596eb5216e969c8eca4cab4a2b3a
Mar 17 17:35:18.395620 systemd-resolved[287]: Positive Trust Anchors:
Mar 17 17:35:18.395656 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:35:18.395736 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:35:18.541167 kernel: SCSI subsystem initialized
Mar 17 17:35:18.548169 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:35:18.561168 kernel: iscsi: registered transport (tcp)
Mar 17 17:35:18.583166 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:35:18.583249 kernel: QLogic iSCSI HBA Driver
Mar 17 17:35:18.638165 kernel: random: crng init done
Mar 17 17:35:18.638504 systemd-resolved[287]: Defaulting to hostname 'linux'.
Mar 17 17:35:18.642308 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:35:18.645628 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:35:18.667415 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:35:18.683523 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:35:18.717000 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:35:18.717077 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:35:18.717102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:35:18.782182 kernel: raid6: neonx8 gen() 6580 MB/s
Mar 17 17:35:18.799155 kernel: raid6: neonx4 gen() 6518 MB/s
Mar 17 17:35:18.816153 kernel: raid6: neonx2 gen() 5422 MB/s
Mar 17 17:35:18.833157 kernel: raid6: neonx1 gen() 3933 MB/s
Mar 17 17:35:18.850154 kernel: raid6: int64x8 gen() 3591 MB/s
Mar 17 17:35:18.867155 kernel: raid6: int64x4 gen() 3681 MB/s
Mar 17 17:35:18.884153 kernel: raid6: int64x2 gen() 3583 MB/s
Mar 17 17:35:18.901921 kernel: raid6: int64x1 gen() 2749 MB/s
Mar 17 17:35:18.901953 kernel: raid6: using algorithm neonx8 gen() 6580 MB/s
Mar 17 17:35:18.919931 kernel: raid6: .... xor() 4729 MB/s, rmw enabled
Mar 17 17:35:18.919972 kernel: raid6: using neon recovery algorithm
Mar 17 17:35:18.927162 kernel: xor: measuring software checksum speed
Mar 17 17:35:18.927221 kernel: 8regs : 11889 MB/sec
Mar 17 17:35:18.929156 kernel: 32regs : 11996 MB/sec
Mar 17 17:35:18.931230 kernel: arm64_neon : 8821 MB/sec
Mar 17 17:35:18.931264 kernel: xor: using function: 32regs (11996 MB/sec)
Mar 17 17:35:19.013177 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:35:19.032724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:35:19.039429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:35:19.079872 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Mar 17 17:35:19.091552 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:35:19.112502 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:35:19.140400 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Mar 17 17:35:19.195836 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:35:19.212510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:35:19.327542 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:35:19.343450 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:35:19.389264 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:35:19.396529 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:35:19.401741 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:35:19.406788 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:35:19.427487 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:35:19.475443 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:35:19.536499 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:35:19.536564 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:35:19.566822 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:35:19.567086 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:35:19.567712 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:88:07:1a:7d:b1
Mar 17 17:35:19.548573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:35:19.548826 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:19.554353 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:19.558433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:35:19.558829 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:19.563030 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:19.588353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:19.592874 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:35:19.596045 (udev-worker)[533]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:35:19.636439 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:35:19.636520 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:35:19.644310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:19.651244 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:35:19.656461 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:35:19.669163 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:35:19.669233 kernel: GPT:9289727 != 16777215
Mar 17 17:35:19.669258 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:35:19.671351 kernel: GPT:9289727 != 16777215
Mar 17 17:35:19.671415 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:35:19.672274 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:35:19.693828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:19.780177 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (522)
Mar 17 17:35:19.816590 kernel: BTRFS: device fsid 5ecee764-de70-4de1-8711-3798360e0d13 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (532)
Mar 17 17:35:19.898436 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:35:19.925209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:35:19.949983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:35:19.987034 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:35:19.987401 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:35:20.008449 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:35:20.022037 disk-uuid[665]: Primary Header is updated.
Mar 17 17:35:20.022037 disk-uuid[665]: Secondary Entries is updated.
Mar 17 17:35:20.022037 disk-uuid[665]: Secondary Header is updated.
Mar 17 17:35:20.030163 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:35:21.050203 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:35:21.050740 disk-uuid[666]: The operation has completed successfully.
Mar 17 17:35:21.237639 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:35:21.239504 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:35:21.330501 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:35:21.339891 sh[926]: Success
Mar 17 17:35:21.367171 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:35:21.491293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:35:21.495746 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:35:21.505473 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:35:21.535606 kernel: BTRFS info (device dm-0): first mount of filesystem 5ecee764-de70-4de1-8711-3798360e0d13
Mar 17 17:35:21.535673 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:21.535727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:35:21.538467 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:35:21.538504 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:35:21.640169 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:35:21.675506 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:35:21.676919 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:35:21.687413 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:35:21.693424 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:35:21.728885 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:35:21.728956 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:21.728993 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:35:21.738359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:35:21.755820 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:35:21.760325 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:35:21.769005 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:35:21.780507 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:35:21.867333 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:35:21.881488 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:35:21.953277 systemd-networkd[1121]: lo: Link UP
Mar 17 17:35:21.953291 systemd-networkd[1121]: lo: Gained carrier
Mar 17 17:35:21.958999 systemd-networkd[1121]: Enumeration completed
Mar 17 17:35:21.960993 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:21.961005 systemd-networkd[1121]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:35:21.961593 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:35:21.971407 systemd[1]: Reached target network.target - Network.
Mar 17 17:35:21.976108 systemd-networkd[1121]: eth0: Link UP
Mar 17 17:35:21.976140 systemd-networkd[1121]: eth0: Gained carrier
Mar 17 17:35:21.976160 systemd-networkd[1121]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:21.994205 systemd-networkd[1121]: eth0: DHCPv4 address 172.31.28.49/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:35:22.166401 ignition[1044]: Ignition 2.20.0
Mar 17 17:35:22.166423 ignition[1044]: Stage: fetch-offline
Mar 17 17:35:22.166850 ignition[1044]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:22.166874 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:22.168104 ignition[1044]: Ignition finished successfully
Mar 17 17:35:22.177167 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:35:22.189536 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:35:22.211177 ignition[1132]: Ignition 2.20.0
Mar 17 17:35:22.211207 ignition[1132]: Stage: fetch
Mar 17 17:35:22.212636 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:22.212661 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:22.212825 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:22.222970 ignition[1132]: PUT result: OK
Mar 17 17:35:22.237203 ignition[1132]: parsed url from cmdline: ""
Mar 17 17:35:22.237225 ignition[1132]: no config URL provided
Mar 17 17:35:22.237241 ignition[1132]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:35:22.237296 ignition[1132]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:35:22.237329 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:22.240932 ignition[1132]: PUT result: OK
Mar 17 17:35:22.245775 ignition[1132]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:35:22.248275 ignition[1132]: GET result: OK
Mar 17 17:35:22.248451 ignition[1132]: parsing config with SHA512: f559817bf703af64340915bbaf7d5b399c7458942577057e1333cba1290e451a709ec5cced516a35f722bfcc7573242be3ad2a518edfa86a5c945432d610482a
Mar 17 17:35:22.258667 unknown[1132]: fetched base config from "system"
Mar 17 17:35:22.258914 unknown[1132]: fetched base config from "system"
Mar 17 17:35:22.259588 ignition[1132]: fetch: fetch complete
Mar 17 17:35:22.258929 unknown[1132]: fetched user config from "aws"
Mar 17 17:35:22.259600 ignition[1132]: fetch: fetch passed
Mar 17 17:35:22.259717 ignition[1132]: Ignition finished successfully
Mar 17 17:35:22.272172 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:35:22.281505 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:35:22.323298 ignition[1138]: Ignition 2.20.0
Mar 17 17:35:22.323329 ignition[1138]: Stage: kargs
Mar 17 17:35:22.324858 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:22.324884 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:22.325030 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:22.328461 ignition[1138]: PUT result: OK
Mar 17 17:35:22.336942 ignition[1138]: kargs: kargs passed
Mar 17 17:35:22.337503 ignition[1138]: Ignition finished successfully
Mar 17 17:35:22.342203 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:35:22.357952 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:35:22.380448 ignition[1145]: Ignition 2.20.0
Mar 17 17:35:22.380469 ignition[1145]: Stage: disks
Mar 17 17:35:22.381035 ignition[1145]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:22.381059 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:22.381733 ignition[1145]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:22.384695 ignition[1145]: PUT result: OK
Mar 17 17:35:22.394084 ignition[1145]: disks: disks passed
Mar 17 17:35:22.394270 ignition[1145]: Ignition finished successfully
Mar 17 17:35:22.398910 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:35:22.402843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:35:22.407042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:35:22.413277 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:35:22.415407 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:35:22.417850 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:35:22.431410 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:35:22.489657 systemd-fsck[1154]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:35:22.496901 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:35:22.537377 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:35:22.632176 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3914ef65-c5cd-468c-8ee7-964383d8e9e2 r/w with ordered data mode. Quota mode: none.
Mar 17 17:35:22.633268 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:35:22.637165 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:35:22.652295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:35:22.664322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:35:22.668332 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:35:22.668428 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:35:22.675400 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:35:22.684048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:35:22.693781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:35:22.707161 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1173)
Mar 17 17:35:22.711571 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:35:22.711622 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:22.712850 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:35:22.719171 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:35:22.722538 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:35:23.124170 initrd-setup-root[1197]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:35:23.133183 initrd-setup-root[1204]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:35:23.142161 initrd-setup-root[1211]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:35:23.152177 initrd-setup-root[1218]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:35:23.512363 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:35:23.529338 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:35:23.535443 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:35:23.551777 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:35:23.554322 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:35:23.596592 ignition[1286]: INFO : Ignition 2.20.0
Mar 17 17:35:23.596592 ignition[1286]: INFO : Stage: mount
Mar 17 17:35:23.601616 ignition[1286]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:23.601616 ignition[1286]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:23.601616 ignition[1286]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:23.601616 ignition[1286]: INFO : PUT result: OK
Mar 17 17:35:23.601527 systemd-networkd[1121]: eth0: Gained IPv6LL
Mar 17 17:35:23.616489 ignition[1286]: INFO : mount: mount passed
Mar 17 17:35:23.616489 ignition[1286]: INFO : Ignition finished successfully
Mar 17 17:35:23.606355 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:35:23.615904 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:35:23.633897 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:35:23.657522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:35:23.680156 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1298)
Mar 17 17:35:23.685588 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 8369c249-c0a6-415d-8511-1f18dbf3bf45
Mar 17 17:35:23.685646 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:35:23.685671 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:35:23.691163 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:35:23.694663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:35:23.733988 ignition[1315]: INFO : Ignition 2.20.0
Mar 17 17:35:23.733988 ignition[1315]: INFO : Stage: files
Mar 17 17:35:23.737264 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:23.737264 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:23.737264 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:23.743486 ignition[1315]: INFO : PUT result: OK
Mar 17 17:35:23.748413 ignition[1315]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:35:23.776008 ignition[1315]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:35:23.776008 ignition[1315]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:35:23.819199 ignition[1315]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:35:23.821830 ignition[1315]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:35:23.824599 unknown[1315]: wrote ssh authorized keys file for user: core
Mar 17 17:35:23.828216 ignition[1315]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:35:23.841640 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:35:23.841640 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Mar 17 17:35:23.976305 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:35:24.125567 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Mar 17 17:35:24.125567 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:35:24.132453 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 17 17:35:24.608676 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 17:35:24.748794 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:35:24.752199 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Mar 17 17:35:25.195419 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 17:35:25.514100 ignition[1315]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Mar 17 17:35:25.514100 ignition[1315]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 17:35:25.520358 ignition[1315]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:35:25.520358 ignition[1315]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:35:25.520358 ignition[1315]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 17:35:25.520358 ignition[1315]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:35:25.532260 ignition[1315]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:35:25.532260 ignition[1315]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:35:25.532260 ignition[1315]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:35:25.532260 ignition[1315]: INFO : files: files passed
Mar 17 17:35:25.532260 ignition[1315]: INFO : Ignition finished successfully
Mar 17 17:35:25.545831 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:35:25.559465 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:35:25.566425 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:35:25.575020 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:35:25.579320 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:35:25.611739 initrd-setup-root-after-ignition[1344]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:25.611739 initrd-setup-root-after-ignition[1344]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:25.618995 initrd-setup-root-after-ignition[1348]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:35:25.624538 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:35:25.629500 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:35:25.635415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:35:25.699111 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:35:25.699563 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:35:25.706305 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:35:25.708272 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:35:25.710469 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:35:25.720740 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:35:25.745196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:35:25.757454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:35:25.780379 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:35:25.784794 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:35:25.787336 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:35:25.805516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:35:25.805783 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:35:25.812030 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:35:25.815796 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:35:25.817644 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:35:25.820001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:35:25.827593 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:35:25.829844 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:35:25.831994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:35:25.840118 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:35:25.842216 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:35:25.844552 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:35:25.851839 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:35:25.852079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:35:25.854654 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:35:25.862925 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:35:25.865875 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:35:25.871209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:35:25.876271 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:35:25.876507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:35:25.879108 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:35:25.879358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:35:25.881855 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:35:25.882053 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:35:25.903293 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:35:25.911493 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:35:25.911833 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:35:25.928692 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:35:25.936586 ignition[1368]: INFO : Ignition 2.20.0
Mar 17 17:35:25.936586 ignition[1368]: INFO : Stage: umount
Mar 17 17:35:25.936586 ignition[1368]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:35:25.936586 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:35:25.936586 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:35:25.932649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:35:25.972540 ignition[1368]: INFO : PUT result: OK
Mar 17 17:35:25.972540 ignition[1368]: INFO : umount: umount passed
Mar 17 17:35:25.972540 ignition[1368]: INFO : Ignition finished successfully
Mar 17 17:35:25.936827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:35:25.948995 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:35:25.949526 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:35:25.972866 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:35:25.975567 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:35:25.985704 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:35:25.990079 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:35:25.990541 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:35:26.003826 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:35:26.003940 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:35:26.005849 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 17:35:26.005947 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 17 17:35:26.008097 systemd[1]: Stopped target network.target - Network.
Mar 17 17:35:26.014900 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:35:26.015018 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:35:26.017292 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:35:26.018966 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:35:26.022484 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:35:26.027353 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:35:26.042562 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:35:26.044428 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:35:26.044510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:35:26.046691 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:35:26.046765 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:35:26.048655 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:35:26.048744 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:35:26.050612 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:35:26.050692 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:35:26.052876 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:35:26.055268 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:35:26.061647 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:35:26.064198 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:35:26.085039 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:35:26.088114 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:35:26.112830 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:35:26.113478 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:35:26.113669 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:35:26.121924 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:35:26.122463 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:35:26.122639 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:35:26.139729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:35:26.139835 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:35:26.143647 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:35:26.143776 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:35:26.154281 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:35:26.156493 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:35:26.156609 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:35:26.162453 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:35:26.162561 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:35:26.174651 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:35:26.174757 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:35:26.178010 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:35:26.178103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:35:26.188642 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:35:26.193857 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:35:26.193985 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:35:26.226194 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:35:26.226548 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:35:26.231061 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:35:26.231265 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:35:26.235953 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:35:26.236032 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:35:26.236226 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:35:26.237330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:35:26.237909 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:35:26.237986 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:35:26.238469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:35:26.238549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:35:26.250662 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:35:26.265567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:35:26.267454 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:35:26.270192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:35:26.270293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:26.296019 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:35:26.296171 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:35:26.296836 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:35:26.297225 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:35:26.302320 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:35:26.302741 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:35:26.308633 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:35:26.324408 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:35:26.344101 systemd[1]: Switching root.
Mar 17 17:35:26.381509 systemd-journald[251]: Journal stopped
Mar 17 17:35:28.841975 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:35:28.842092 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:35:28.842166 kernel: SELinux: policy capability open_perms=1
Mar 17 17:35:28.842205 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:35:28.842248 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:35:28.847376 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:35:28.847449 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:35:28.847478 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:35:28.847520 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:35:28.847551 kernel: audit: type=1403 audit(1742232926.912:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:35:28.847608 systemd[1]: Successfully loaded SELinux policy in 109.219ms.
Mar 17 17:35:28.847658 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.318ms.
Mar 17 17:35:28.847724 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:35:28.847756 systemd[1]: Detected virtualization amazon.
Mar 17 17:35:28.847785 systemd[1]: Detected architecture arm64.
Mar 17 17:35:28.847815 systemd[1]: Detected first boot.
Mar 17 17:35:28.847854 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:35:28.847885 zram_generator::config[1413]: No configuration found.
Mar 17 17:35:28.847928 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:35:28.847956 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:35:28.847992 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:35:28.848024 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:35:28.848055 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:35:28.848085 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:35:28.848155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:35:28.848193 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:35:28.848223 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:35:28.848252 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:35:28.848283 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:35:28.848318 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:35:28.848348 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:35:28.848380 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:35:28.848410 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:35:28.848438 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:35:28.848467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:35:28.848494 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:35:28.848524 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:35:28.848559 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:35:28.848590 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:35:28.848619 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:35:28.848649 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:35:28.848691 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:35:28.848724 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:35:28.848752 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:35:28.848786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:35:28.848820 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:35:28.848850 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:35:28.848879 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:35:28.848907 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:35:28.848937 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:35:28.848967 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:35:28.848994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:35:28.849025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:35:28.849066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:35:28.849099 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:35:28.849149 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:35:28.849183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:35:28.849211 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:35:28.849240 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:35:28.849271 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:35:28.849299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:35:28.849328 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:35:28.849358 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:35:28.849393 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:35:28.849425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:35:28.849460 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:35:28.849491 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:35:28.849520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:35:28.849549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:35:28.849581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:35:28.849615 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:35:28.849644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:35:28.849678 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:35:28.849713 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:35:28.849743 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:35:28.849774 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:35:28.849804 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:35:28.849834 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:35:28.849862 kernel: fuse: init (API version 7.39)
Mar 17 17:35:28.849897 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:35:28.849931 kernel: loop: module loaded
Mar 17 17:35:28.849959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:35:28.849987 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:35:28.850015 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:35:28.850044 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:35:28.850077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:35:28.850116 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:35:28.850199 systemd[1]: Stopped verity-setup.service.
Mar 17 17:35:28.850227 kernel: ACPI: bus type drm_connector registered
Mar 17 17:35:28.850255 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:35:28.850284 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:35:28.850313 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:35:28.850341 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:35:28.850371 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:35:28.850406 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:35:28.850497 systemd-journald[1496]: Collecting audit messages is disabled.
Mar 17 17:35:28.850556 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:35:28.850590 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:35:28.850624 systemd-journald[1496]: Journal started
Mar 17 17:35:28.850671 systemd-journald[1496]: Runtime Journal (/run/log/journal/ec2299d08a38231e62e82d1aee1f538b) is 8M, max 75.3M, 67.3M free.
Mar 17 17:35:28.273194 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:35:28.284881 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 17 17:35:28.285761 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:35:28.861283 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:35:28.868409 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:35:28.872389 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:35:28.877875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:35:28.879345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:35:28.884645 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:35:28.885016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:35:28.890033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:35:28.892546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:35:28.898996 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:35:28.900462 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:35:28.905579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:35:28.909235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:35:28.914580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:35:28.919817 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:35:28.925568 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:35:28.931510 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:35:28.963211 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:35:28.973385 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:35:28.983861 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:35:28.987360 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:35:28.987436 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:35:28.992881 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:35:29.005537 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:35:29.011362 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:35:29.014542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:35:29.034493 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:35:29.047466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:35:29.049693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:35:29.051635 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:35:29.054765 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:35:29.069472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:35:29.078480 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:35:29.086393 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:35:29.092074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:35:29.094726 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:35:29.097252 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:35:29.101214 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:35:29.116794 systemd-journald[1496]: Time spent on flushing to /var/log/journal/ec2299d08a38231e62e82d1aee1f538b is 41.280ms for 923 entries.
Mar 17 17:35:29.116794 systemd-journald[1496]: System Journal (/var/log/journal/ec2299d08a38231e62e82d1aee1f538b) is 8M, max 195.6M, 187.6M free.
Mar 17 17:35:29.177966 systemd-journald[1496]: Received client request to flush runtime journal.
Mar 17 17:35:29.129637 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:35:29.132560 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:35:29.137625 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:35:29.148226 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:35:29.176277 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:35:29.188637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:35:29.222925 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:35:29.223385 kernel: loop0: detected capacity change from 0 to 113512
Mar 17 17:35:29.248255 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:35:29.290801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:35:29.293251 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:35:29.308514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:35:29.349826 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:35:29.374184 kernel: loop1: detected capacity change from 0 to 194096
Mar 17 17:35:29.377402 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Mar 17 17:35:29.378090 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Mar 17 17:35:29.393531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:35:29.442197 kernel: loop2: detected capacity change from 0 to 53784
Mar 17 17:35:29.493154 kernel: loop3: detected capacity change from 0 to 123192
Mar 17 17:35:29.610172 kernel: loop4: detected capacity change from 0 to 113512
Mar 17 17:35:29.626545 kernel: loop5: detected capacity change from 0 to 194096
Mar 17 17:35:29.663310 kernel: loop6: detected capacity change from 0 to 53784
Mar 17 17:35:29.685178 kernel: loop7: detected capacity change from 0 to 123192
Mar 17 17:35:29.702471 (sd-merge)[1574]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 17 17:35:29.703579 (sd-merge)[1574]: Merged extensions into '/usr'.
Mar 17 17:35:29.711173 systemd[1]: Reload requested from client PID 1548 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:35:29.711206 systemd[1]: Reloading...
Mar 17 17:35:29.875170 zram_generator::config[1599]: No configuration found.
Mar 17 17:35:30.230096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:35:30.387213 systemd[1]: Reloading finished in 673 ms.
Mar 17 17:35:30.413195 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:35:30.416334 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:35:30.434412 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:35:30.440520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:35:30.451406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:35:30.483340 systemd[1]: Reload requested from client PID 1654 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:35:30.483539 systemd[1]: Reloading...
Mar 17 17:35:30.521695 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:35:30.523109 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:35:30.527552 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:35:30.530667 systemd-tmpfiles[1655]: ACLs are not supported, ignoring.
Mar 17 17:35:30.530826 systemd-tmpfiles[1655]: ACLs are not supported, ignoring.
Mar 17 17:35:30.548203 systemd-tmpfiles[1655]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:35:30.548229 systemd-tmpfiles[1655]: Skipping /boot
Mar 17 17:35:30.595034 systemd-udevd[1656]: Using default interface naming scheme 'v255'.
Mar 17 17:35:30.595573 systemd-tmpfiles[1655]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:35:30.595586 systemd-tmpfiles[1655]: Skipping /boot
Mar 17 17:35:30.712335 zram_generator::config[1686]: No configuration found.
Mar 17 17:35:30.908221 ldconfig[1543]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:35:30.951802 (udev-worker)[1712]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:35:31.127471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:35:31.166387 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1716)
Mar 17 17:35:31.333835 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:35:31.334744 systemd[1]: Reloading finished in 850 ms.
Mar 17 17:35:31.351486 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:35:31.354981 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:35:31.389804 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:35:31.439882 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:35:31.481194 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:35:31.499637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:35:31.509441 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:35:31.514306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:35:31.517757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:35:31.520475 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:35:31.534498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:35:31.540451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:35:31.549448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:35:31.555508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:35:31.559559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:35:31.565500 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:35:31.567873 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:35:31.572465 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:35:31.582510 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:35:31.592362 lvm[1857]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:35:31.593050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:35:31.597311 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:35:31.602471 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:35:31.609665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:35:31.667292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:35:31.671062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:35:31.672053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:35:31.674998 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:35:31.675458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:35:31.678695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:35:31.679176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:35:31.697714 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:35:31.698548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:35:31.712387 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:35:31.724646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:35:31.728252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:35:31.728581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:35:31.741651 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:35:31.749919 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:35:31.761402 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:35:31.776294 lvm[1891]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:35:31.788215 augenrules[1899]: No rules
Mar 17 17:35:31.789964 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:35:31.793470 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:35:31.793916 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:35:31.811377 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:35:31.834432 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:35:31.835386 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:35:31.850515 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:35:31.855521 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:35:31.898021 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:35:31.938617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:35:32.034535 systemd-resolved[1871]: Positive Trust Anchors:
Mar 17 17:35:32.034565 systemd-resolved[1871]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:35:32.034628 systemd-resolved[1871]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:35:32.036022 systemd-networkd[1870]: lo: Link UP
Mar 17 17:35:32.036042 systemd-networkd[1870]: lo: Gained carrier
Mar 17 17:35:32.039043 systemd-networkd[1870]: Enumeration completed
Mar 17 17:35:32.039308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:35:32.041806 systemd-networkd[1870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:32.041828 systemd-networkd[1870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:35:32.044036 systemd-networkd[1870]: eth0: Link UP
Mar 17 17:35:32.044384 systemd-networkd[1870]: eth0: Gained carrier
Mar 17 17:35:32.044419 systemd-networkd[1870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:35:32.050484 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:35:32.053243 systemd-networkd[1870]: eth0: DHCPv4 address 172.31.28.49/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:35:32.061476 systemd-resolved[1871]: Defaulting to hostname 'linux'.
Mar 17 17:35:32.063444 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:35:32.068634 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:35:32.071018 systemd[1]: Reached target network.target - Network.
Mar 17 17:35:32.073037 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:35:32.077295 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:35:32.079997 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:35:32.082372 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:35:32.085629 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:35:32.088234 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:35:32.091346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:35:32.094306 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:35:32.094371 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:35:32.096297 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:35:32.100704 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:35:32.106350 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:35:32.113279 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:35:32.117040 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:35:32.119444 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:35:32.135298 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:35:32.138058 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:35:32.143187 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:35:32.146261 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:35:32.149473 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:35:32.152229 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:35:32.154343 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:35:32.154396 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:35:32.163258 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:35:32.170783 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:35:32.184421 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:35:32.189359 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:35:32.202430 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:35:32.205663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:35:32.213311 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:35:32.226874 jq[1929]: false
Mar 17 17:35:32.225454 systemd[1]: Started ntpd.service - Network Time Service.
Mar 17 17:35:32.230769 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:35:32.241256 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 17 17:35:32.250441 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:35:32.257007 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:35:32.268241 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:35:32.278227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:35:32.279159 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:35:32.289615 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:35:32.305357 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:35:32.310982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:35:32.313223 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:35:32.337239 dbus-daemon[1928]: [system] SELinux support is enabled
Mar 17 17:35:32.337815 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:35:32.346069 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:35:32.346157 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:35:32.348608 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:35:32.348643 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:35:32.359976 dbus-daemon[1928]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1870 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 17 17:35:32.386697 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 17 17:35:32.389267 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:35:32.392216 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found loop4
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found loop5
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found loop6
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found loop7
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p1
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p2
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p3
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found usr
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p4
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p6
Mar 17 17:35:32.425359 extend-filesystems[1930]: Found nvme0n1p7
Mar 17 17:35:32.467734 extend-filesystems[1930]: Found nvme0n1p9
Mar 17 17:35:32.467734 extend-filesystems[1930]: Checking size of /dev/nvme0n1p9
Mar 17 17:35:32.471411 jq[1942]: true
Mar 17 17:35:32.478797 ntpd[1932]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:16 UTC 2025 (1): Starting
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:16 UTC 2025 (1): Starting
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: ----------------------------------------------------
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: corporation. Support and training for ntp-4 are
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: available at https://www.nwtime.org/support
Mar 17 17:35:32.481159 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: ----------------------------------------------------
Mar 17 17:35:32.480808 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 17 17:35:32.480829 ntpd[1932]: ----------------------------------------------------
Mar 17 17:35:32.480847 ntpd[1932]: ntp-4 is maintained by Network Time Foundation,
Mar 17 17:35:32.480865 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 17 17:35:32.480883 ntpd[1932]: corporation. Support and training for ntp-4 are
Mar 17 17:35:32.480900 ntpd[1932]: available at https://www.nwtime.org/support
Mar 17 17:35:32.480918 ntpd[1932]: ----------------------------------------------------
Mar 17 17:35:32.484879 ntpd[1932]: proto: precision = 0.096 usec (-23)
Mar 17 17:35:32.489312 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: proto: precision = 0.096 usec (-23)
Mar 17 17:35:32.489702 ntpd[1932]: basedate set to 2025-03-05
Mar 17 17:35:32.490373 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: basedate set to 2025-03-05
Mar 17 17:35:32.490373 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:35:32.489739 ntpd[1932]: gps base set to 2025-03-09 (week 2357)
Mar 17 17:35:32.495101 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:35:32.498220 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123
Mar 17 17:35:32.498406 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:35:32.498577 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 17 17:35:32.498895 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listen normally on 3 eth0 172.31.28.49:123
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listen normally on 4 lo [::1]:123
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: bind(21) AF_INET6 fe80::488:7ff:fe1a:7db1%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: unable to create socket on eth0 (5) for fe80::488:7ff:fe1a:7db1%2#123
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: failed to init interface for address fe80::488:7ff:fe1a:7db1%2
Mar 17 17:35:32.504291 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:35:32.502763 (ntainerd)[1958]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:35:32.500247 ntpd[1932]: Listen normally on 3 eth0 172.31.28.49:123
Mar 17 17:35:32.500328 ntpd[1932]: Listen normally on 4 lo [::1]:123
Mar 17 17:35:32.500409 ntpd[1932]: bind(21) AF_INET6 fe80::488:7ff:fe1a:7db1%2#123 flags 0x11 failed: Cannot assign requested address
Mar 17 17:35:32.500446 ntpd[1932]: unable to create socket on eth0 (5) for fe80::488:7ff:fe1a:7db1%2#123
Mar 17 17:35:32.500472 ntpd[1932]: failed to init interface for address fe80::488:7ff:fe1a:7db1%2
Mar 17 17:35:32.500527 ntpd[1932]: Listening on routing socket on fd #21 for interface updates
Mar 17 17:35:32.510184 update_engine[1939]: I20250317 17:35:32.509984 1939 main.cc:92] Flatcar Update Engine starting
Mar 17 17:35:32.511027 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:35:32.511211 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:35:32.511317 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:35:32.511423 ntpd[1932]: 17 Mar 17:35:32 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 17 17:35:32.532804 update_engine[1939]: I20250317 17:35:32.530684 1939 update_check_scheduler.cc:74] Next update check in 10m59s
Mar 17 17:35:32.536030 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:35:32.546540 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:35:32.552358 tar[1944]: linux-arm64/helm
Mar 17 17:35:32.558666 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:35:32.559149 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:35:32.568802 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 17 17:35:32.581310 extend-filesystems[1930]: Resized partition /dev/nvme0n1p9
Mar 17 17:35:32.594864 extend-filesystems[1980]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:35:32.602167 jq[1966]: true
Mar 17 17:35:32.625162 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Mar 17 17:35:32.749444 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Mar 17 17:35:32.768842 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 17 17:35:32.768842 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:35:32.768842 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Mar 17 17:35:32.785427 extend-filesystems[1930]: Resized filesystem in /dev/nvme0n1p9
Mar 17 17:35:32.774598 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:35:32.787778 coreos-metadata[1927]: Mar 17 17:35:32.787 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:35:32.777230 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:35:32.801569 coreos-metadata[1927]: Mar 17 17:35:32.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 17 17:35:32.801569 coreos-metadata[1927]: Mar 17 17:35:32.799 INFO Fetch successful
Mar 17 17:35:32.801569 coreos-metadata[1927]: Mar 17 17:35:32.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 17 17:35:32.803052 coreos-metadata[1927]: Mar 17 17:35:32.802 INFO Fetch successful
Mar 17 17:35:32.803052 coreos-metadata[1927]: Mar 17 17:35:32.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 17 17:35:32.808528 coreos-metadata[1927]: Mar 17 17:35:32.808 INFO Fetch successful
Mar 17 17:35:32.808528 coreos-metadata[1927]: Mar 17 17:35:32.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 17 17:35:32.811354 coreos-metadata[1927]: Mar 17 17:35:32.810 INFO Fetch successful
Mar 17 17:35:32.811354 coreos-metadata[1927]: Mar 17 17:35:32.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 17 17:35:32.812435 coreos-metadata[1927]: Mar 17 17:35:32.812 INFO Fetch failed with 404: resource not found
Mar 17 17:35:32.812435 coreos-metadata[1927]: Mar 17 17:35:32.812 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 17 17:35:32.819705 coreos-metadata[1927]: Mar 17 17:35:32.817 INFO Fetch successful
Mar 17 17:35:32.819705 coreos-metadata[1927]: Mar 17 17:35:32.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 17 17:35:32.820842 coreos-metadata[1927]: Mar 17 17:35:32.820 INFO Fetch successful
Mar 17 17:35:32.820842 coreos-metadata[1927]: Mar 17 17:35:32.820 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 17 17:35:32.821548 coreos-metadata[1927]: Mar 17 17:35:32.821 INFO Fetch successful
Mar 17 17:35:32.821548 coreos-metadata[1927]: Mar 17 17:35:32.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 17 17:35:32.825092 coreos-metadata[1927]: Mar 17 17:35:32.824 INFO Fetch successful
Mar 17 17:35:32.825092 coreos-metadata[1927]: Mar 17 17:35:32.824 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 17 17:35:32.833469 coreos-metadata[1927]: Mar 17 17:35:32.830 INFO Fetch successful
Mar 17 17:35:32.881334 bash[2009]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:35:32.885271 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 17:35:32.885347 systemd-logind[1938]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 17 17:35:32.885717 systemd-logind[1938]: New seat seat0.
Mar 17 17:35:32.889204 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:35:32.893023 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:35:32.903011 systemd[1]: Starting sshkeys.service...
Mar 17 17:35:32.915303 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:35:32.921711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:35:33.066178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1689)
Mar 17 17:35:33.081096 containerd[1958]: time="2025-03-17T17:35:33.079985181Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:35:33.080195 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:35:33.106327 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:35:33.129931 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:35:33.263257 locksmithd[1975]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:35:33.270042 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 17 17:35:33.279640 dbus-daemon[1928]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 17 17:35:33.283306 dbus-daemon[1928]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1951 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 17 17:35:33.310114 containerd[1958]: time="2025-03-17T17:35:33.307189894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.309280 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 17 17:35:33.328319 systemd-networkd[1870]: eth0: Gained IPv6LL
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339242194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339311410Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339345766Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339659314Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339719722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339855814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:35:33.341152 containerd[1958]: time="2025-03-17T17:35:33.339887026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.341830594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.341878654Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.341911030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.341936542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.342146602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.342557290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.342822226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:35:33.344497 containerd[1958]: time="2025-03-17T17:35:33.342854482Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:35:33.347905 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:35:33.351558 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:35:33.359369 containerd[1958]: time="2025-03-17T17:35:33.358179718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:35:33.359369 containerd[1958]: time="2025-03-17T17:35:33.358363270Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:35:33.370311 containerd[1958]: time="2025-03-17T17:35:33.370255102Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:35:33.371720 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 17 17:35:33.386722 containerd[1958]: time="2025-03-17T17:35:33.373490446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:35:33.386722 containerd[1958]: time="2025-03-17T17:35:33.384397426Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:35:33.386722 containerd[1958]: time="2025-03-17T17:35:33.384457714Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:35:33.386722 containerd[1958]: time="2025-03-17T17:35:33.384490630Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:35:33.386722 containerd[1958]: time="2025-03-17T17:35:33.384763090Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:35:33.386218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:35:33.392723 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:35:33.398974 containerd[1958]: time="2025-03-17T17:35:33.398928214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401802598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401857750Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401895670Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401926522Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401959006Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.401988442Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.402022222Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.402053182Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.402084154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.402287 containerd[1958]: time="2025-03-17T17:35:33.402111502Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402769126Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402825502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402862738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402892138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402922078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402950758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.402980578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.403007614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.403038946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.403068406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.404226 containerd[1958]: time="2025-03-17T17:35:33.403102018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406636943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406701287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406733543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406769231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406818779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406850627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.411409 containerd[1958]: time="2025-03-17T17:35:33.406877255Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:35:33.407197 polkitd[2067]: Started polkitd version 121
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414364391Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414502091Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414530039Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414565799Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414589403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414617999Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414641615Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:35:33.418167 containerd[1958]: time="2025-03-17T17:35:33.414665207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:35:33.418601 containerd[1958]: time="2025-03-17T17:35:33.415296575Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:35:33.418601 containerd[1958]: time="2025-03-17T17:35:33.415396367Z" level=info msg="Connect containerd service" Mar 17 17:35:33.418601 containerd[1958]: time="2025-03-17T17:35:33.415453415Z" level=info msg="using legacy CRI server" Mar 17 17:35:33.418601 containerd[1958]: time="2025-03-17T17:35:33.415470683Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:35:33.418601 containerd[1958]: time="2025-03-17T17:35:33.415726919Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:35:33.425548 containerd[1958]: time="2025-03-17T17:35:33.424906247Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425702447Z" level=info msg="Start subscribing containerd event" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425783615Z" level=info msg="Start recovering state" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425913551Z" level=info msg="Start event monitor" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425935583Z" level=info msg="Start 
snapshots syncer" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425960171Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:35:33.426747 containerd[1958]: time="2025-03-17T17:35:33.425978303Z" level=info msg="Start streaming server" Mar 17 17:35:33.431153 containerd[1958]: time="2025-03-17T17:35:33.429940139Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:35:33.431153 containerd[1958]: time="2025-03-17T17:35:33.430065167Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:35:33.445313 polkitd[2067]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:35:33.445434 polkitd[2067]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:35:33.448962 containerd[1958]: time="2025-03-17T17:35:33.447603899Z" level=info msg="containerd successfully booted in 0.368931s" Mar 17 17:35:33.461220 polkitd[2067]: Finished loading, compiling and executing 2 rules Mar 17 17:35:33.464251 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:35:33.466458 dbus-daemon[1928]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:35:33.471369 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 17 17:35:33.475682 polkitd[2067]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 17 17:35:33.497896 coreos-metadata[2033]: Mar 17 17:35:33.497 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 17 17:35:33.497896 coreos-metadata[2033]: Mar 17 17:35:33.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 17 17:35:33.502214 coreos-metadata[2033]: Mar 17 17:35:33.501 INFO Fetch successful
Mar 17 17:35:33.502214 coreos-metadata[2033]: Mar 17 17:35:33.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 17 17:35:33.502214 coreos-metadata[2033]: Mar 17 17:35:33.501 INFO Fetch successful
Mar 17 17:35:33.508927 unknown[2033]: wrote ssh authorized keys file for user: core
Mar 17 17:35:33.619180 systemd-resolved[1871]: System hostname changed to 'ip-172-31-28-49'.
Mar 17 17:35:33.619196 systemd-hostnamed[1951]: Hostname set to (transient)
Mar 17 17:35:33.645173 update-ssh-keys[2105]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:35:33.641307 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:35:33.649919 systemd[1]: Finished sshkeys.service.
Mar 17 17:35:33.677800 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:35:33.699068 amazon-ssm-agent[2077]: Initializing new seelog logger
Mar 17 17:35:33.702068 amazon-ssm-agent[2077]: New Seelog Logger Creation Complete
Mar 17 17:35:33.703288 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.703288 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.703288 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 processing appconfig overrides
Mar 17 17:35:33.709166 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.709166 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.709166 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 processing appconfig overrides
Mar 17 17:35:33.711267 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.711267 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.711267 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 processing appconfig overrides
Mar 17 17:35:33.711267 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO Proxy environment variables:
Mar 17 17:35:33.716219 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.716357 amazon-ssm-agent[2077]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 17 17:35:33.716559 amazon-ssm-agent[2077]: 2025/03/17 17:35:33 processing appconfig overrides
Mar 17 17:35:33.812426 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO https_proxy:
Mar 17 17:35:33.913838 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO http_proxy:
Mar 17 17:35:34.014437 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO no_proxy:
Mar 17 17:35:34.112946 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO Checking if agent identity type OnPrem can be assumed
Mar 17 17:35:34.211342 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO Checking if agent identity type EC2 can be assumed
Mar 17 17:35:34.251878 sshd_keygen[1972]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:35:34.310198 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO Agent will take identity from EC2
Mar 17 17:35:34.337533 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:35:34.356671 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:35:34.366603 systemd[1]: Started sshd@0-172.31.28.49:22-147.75.109.163:36480.service - OpenSSH per-connection server daemon (147.75.109.163:36480).
Mar 17 17:35:34.389879 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:35:34.390364 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:35:34.409593 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:35:34.413682 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:35:34.461243 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:35:34.479831 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:35:34.489725 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:35:34.493572 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:35:34.513834 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:35:34.613299 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 17 17:35:34.712503 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 17 17:35:34.721186 sshd[2162]: Accepted publickey for core from 147.75.109.163 port 36480 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:34.726074 sshd-session[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:34.739554 tar[1944]: linux-arm64/LICENSE
Mar 17 17:35:34.741179 tar[1944]: linux-arm64/README.md
Mar 17 17:35:34.787310 systemd-logind[1938]: New session 1 of user core.
Mar 17 17:35:34.789182 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 17 17:35:34.796516 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 17 17:35:34.806826 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 17 17:35:34.812204 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 17 17:35:34.839259 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] Starting Core Agent
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [Registrar] Starting registrar module
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:34 INFO [EC2Identity] EC2 registration was successful.
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:34 INFO [CredentialRefresher] credentialRefresher has started
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:34 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 17 17:35:34.848432 amazon-ssm-agent[2077]: 2025-03-17 17:35:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 17 17:35:34.850642 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 17 17:35:34.876726 (systemd)[2177]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 17:35:34.881624 systemd-logind[1938]: New session c1 of user core.
Mar 17 17:35:34.911612 amazon-ssm-agent[2077]: 2025-03-17 17:35:34 INFO [CredentialRefresher] Next credential rotation will be in 31.0749772484 minutes
Mar 17 17:35:35.180355 systemd[2177]: Queued start job for default target default.target.
Mar 17 17:35:35.189363 systemd[2177]: Created slice app.slice - User Application Slice.
Mar 17 17:35:35.189607 systemd[2177]: Reached target paths.target - Paths.
Mar 17 17:35:35.189782 systemd[2177]: Reached target timers.target - Timers.
Mar 17 17:35:35.192745 systemd[2177]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 17 17:35:35.237899 systemd[2177]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 17 17:35:35.238203 systemd[2177]: Reached target sockets.target - Sockets.
Mar 17 17:35:35.238504 systemd[2177]: Reached target basic.target - Basic System.
Mar 17 17:35:35.238760 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 17 17:35:35.239074 systemd[2177]: Reached target default.target - Main User Target.
Mar 17 17:35:35.239275 systemd[2177]: Startup finished in 343ms.
Mar 17 17:35:35.253440 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 17 17:35:35.415678 systemd[1]: Started sshd@1-172.31.28.49:22-147.75.109.163:59636.service - OpenSSH per-connection server daemon (147.75.109.163:59636).
Mar 17 17:35:35.482935 ntpd[1932]: Listen normally on 6 eth0 [fe80::488:7ff:fe1a:7db1%2]:123
Mar 17 17:35:35.491652 ntpd[1932]: 17 Mar 17:35:35 ntpd[1932]: Listen normally on 6 eth0 [fe80::488:7ff:fe1a:7db1%2]:123
Mar 17 17:35:35.484538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:35:35.485755 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:35:35.487919 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 17 17:35:35.490241 systemd[1]: Startup finished in 1.076s (kernel) + 9.071s (initrd) + 8.685s (userspace) = 18.833s.
Mar 17 17:35:35.627479 sshd[2188]: Accepted publickey for core from 147.75.109.163 port 59636 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:35.630543 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:35.640917 systemd-logind[1938]: New session 2 of user core.
Mar 17 17:35:35.648482 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 17 17:35:35.776335 sshd[2200]: Connection closed by 147.75.109.163 port 59636
Mar 17 17:35:35.778457 sshd-session[2188]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:35.783862 systemd[1]: sshd@1-172.31.28.49:22-147.75.109.163:59636.service: Deactivated successfully.
Mar 17 17:35:35.789362 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 17:35:35.793503 systemd-logind[1938]: Session 2 logged out. Waiting for processes to exit.
Mar 17 17:35:35.795869 systemd-logind[1938]: Removed session 2.
Mar 17 17:35:35.816744 systemd[1]: Started sshd@2-172.31.28.49:22-147.75.109.163:59652.service - OpenSSH per-connection server daemon (147.75.109.163:59652).
Mar 17 17:35:35.900105 amazon-ssm-agent[2077]: 2025-03-17 17:35:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 17 17:35:36.000558 amazon-ssm-agent[2077]: 2025-03-17 17:35:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2213) started
Mar 17 17:35:36.009159 sshd[2210]: Accepted publickey for core from 147.75.109.163 port 59652 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:36.010782 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:36.026279 systemd-logind[1938]: New session 3 of user core.
Mar 17 17:35:36.034476 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 17 17:35:36.101572 amazon-ssm-agent[2077]: 2025-03-17 17:35:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 17 17:35:36.161176 sshd[2219]: Connection closed by 147.75.109.163 port 59652
Mar 17 17:35:36.160787 sshd-session[2210]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:36.167485 systemd-logind[1938]: Session 3 logged out. Waiting for processes to exit.
Mar 17 17:35:36.169052 systemd[1]: sshd@2-172.31.28.49:22-147.75.109.163:59652.service: Deactivated successfully.
Mar 17 17:35:36.173769 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 17:35:36.180370 systemd-logind[1938]: Removed session 3.
Mar 17 17:35:36.200882 systemd[1]: Started sshd@3-172.31.28.49:22-147.75.109.163:59664.service - OpenSSH per-connection server daemon (147.75.109.163:59664).
Mar 17 17:35:36.386832 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 59664 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:36.389964 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:36.400548 systemd-logind[1938]: New session 4 of user core.
Mar 17 17:35:36.410665 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 17 17:35:36.539446 sshd[2233]: Connection closed by 147.75.109.163 port 59664
Mar 17 17:35:36.541320 sshd-session[2230]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:36.547843 kubelet[2195]: E0317 17:35:36.547592 2195 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:35:36.550333 systemd[1]: sshd@3-172.31.28.49:22-147.75.109.163:59664.service: Deactivated successfully.
Mar 17 17:35:36.550334 systemd-logind[1938]: Session 4 logged out. Waiting for processes to exit.
Mar 17 17:35:36.555049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:35:36.555491 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:35:36.556186 systemd[1]: kubelet.service: Consumed 1.321s CPU time, 244.1M memory peak.
Mar 17 17:35:36.556990 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 17:35:36.561586 systemd-logind[1938]: Removed session 4.
Mar 17 17:35:36.579705 systemd[1]: Started sshd@4-172.31.28.49:22-147.75.109.163:59670.service - OpenSSH per-connection server daemon (147.75.109.163:59670).
Mar 17 17:35:36.764187 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 59670 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:36.766574 sshd-session[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:36.775430 systemd-logind[1938]: New session 5 of user core.
Mar 17 17:35:36.782418 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 17 17:35:36.925500 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 17 17:35:36.926679 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:35:36.948321 sudo[2243]: pam_unix(sudo:session): session closed for user root
Mar 17 17:35:36.972633 sshd[2242]: Connection closed by 147.75.109.163 port 59670
Mar 17 17:35:36.972449 sshd-session[2240]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:36.979225 systemd[1]: sshd@4-172.31.28.49:22-147.75.109.163:59670.service: Deactivated successfully.
Mar 17 17:35:36.982926 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 17:35:36.984749 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit.
Mar 17 17:35:36.986610 systemd-logind[1938]: Removed session 5.
Mar 17 17:35:37.012637 systemd[1]: Started sshd@5-172.31.28.49:22-147.75.109.163:59684.service - OpenSSH per-connection server daemon (147.75.109.163:59684).
Mar 17 17:35:37.202889 sshd[2249]: Accepted publickey for core from 147.75.109.163 port 59684 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:37.205400 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:37.214109 systemd-logind[1938]: New session 6 of user core.
Mar 17 17:35:37.222406 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 17 17:35:37.328550 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 17 17:35:37.329275 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:35:37.335717 sudo[2253]: pam_unix(sudo:session): session closed for user root
Mar 17 17:35:37.345908 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 17 17:35:37.346573 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:35:37.368755 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:35:37.418716 augenrules[2275]: No rules
Mar 17 17:35:37.420884 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:35:37.421366 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:35:37.423475 sudo[2252]: pam_unix(sudo:session): session closed for user root
Mar 17 17:35:37.447429 sshd[2251]: Connection closed by 147.75.109.163 port 59684
Mar 17 17:35:37.448233 sshd-session[2249]: pam_unix(sshd:session): session closed for user core
Mar 17 17:35:37.454290 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit.
Mar 17 17:35:37.454757 systemd[1]: sshd@5-172.31.28.49:22-147.75.109.163:59684.service: Deactivated successfully.
Mar 17 17:35:37.457791 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 17:35:37.461078 systemd-logind[1938]: Removed session 6.
Mar 17 17:35:37.490639 systemd[1]: Started sshd@6-172.31.28.49:22-147.75.109.163:59696.service - OpenSSH per-connection server daemon (147.75.109.163:59696).
Mar 17 17:35:37.673790 sshd[2284]: Accepted publickey for core from 147.75.109.163 port 59696 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:35:37.677043 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:35:37.686113 systemd-logind[1938]: New session 7 of user core.
Mar 17 17:35:37.694394 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 17 17:35:37.799781 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 17:35:37.800956 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 17 17:35:38.556622 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 17 17:35:38.569641 (dockerd)[2305]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 17 17:35:39.038267 dockerd[2305]: time="2025-03-17T17:35:39.037909946Z" level=info msg="Starting up"
Mar 17 17:35:39.345457 dockerd[2305]: time="2025-03-17T17:35:39.344967736Z" level=info msg="Loading containers: start."
Mar 17 17:35:39.210741 systemd-resolved[1871]: Clock change detected. Flushing caches.
Mar 17 17:35:39.218243 systemd-journald[1496]: Time jumped backwards, rotating.
Mar 17 17:35:39.347199 kernel: Initializing XFRM netlink socket
Mar 17 17:35:39.392151 (udev-worker)[2330]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:35:39.486125 systemd-networkd[1870]: docker0: Link UP
Mar 17 17:35:39.528520 dockerd[2305]: time="2025-03-17T17:35:39.528449105Z" level=info msg="Loading containers: done."
Mar 17 17:35:39.556080 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3883718929-merged.mount: Deactivated successfully.
Mar 17 17:35:39.557285 dockerd[2305]: time="2025-03-17T17:35:39.556425701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 17:35:39.557285 dockerd[2305]: time="2025-03-17T17:35:39.556575569Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Mar 17 17:35:39.557285 dockerd[2305]: time="2025-03-17T17:35:39.556819265Z" level=info msg="Daemon has completed initialization"
Mar 17 17:35:39.609813 dockerd[2305]: time="2025-03-17T17:35:39.609551153Z" level=info msg="API listen on /run/docker.sock"
Mar 17 17:35:39.610522 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 17 17:35:40.960511 containerd[1958]: time="2025-03-17T17:35:40.960440936Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\""
Mar 17 17:35:41.608865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992522388.mount: Deactivated successfully.
Mar 17 17:35:43.753031 containerd[1958]: time="2025-03-17T17:35:43.752846098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:43.754989 containerd[1958]: time="2025-03-17T17:35:43.754915606Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793524"
Mar 17 17:35:43.755923 containerd[1958]: time="2025-03-17T17:35:43.755837146Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:43.762375 containerd[1958]: time="2025-03-17T17:35:43.762278410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:43.765001 containerd[1958]: time="2025-03-17T17:35:43.764397046Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.803893206s"
Mar 17 17:35:43.765001 containerd[1958]: time="2025-03-17T17:35:43.764454010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\""
Mar 17 17:35:43.803614 containerd[1958]: time="2025-03-17T17:35:43.803541118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 17 17:35:46.140217 containerd[1958]: time="2025-03-17T17:35:46.140128438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:46.142532 containerd[1958]: time="2025-03-17T17:35:46.142191586Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861167"
Mar 17 17:35:46.143595 containerd[1958]: time="2025-03-17T17:35:46.143508970Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:46.149012 containerd[1958]: time="2025-03-17T17:35:46.148901518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:46.151406 containerd[1958]: time="2025-03-17T17:35:46.151226038Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 2.347621692s"
Mar 17 17:35:46.151406 containerd[1958]: time="2025-03-17T17:35:46.151278250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\""
Mar 17 17:35:46.193385 containerd[1958]: time="2025-03-17T17:35:46.193321798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 17 17:35:46.533822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:35:46.542332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:35:46.848249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:35:46.848568 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:35:46.937946 kubelet[2575]: E0317 17:35:46.937855 2575 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:35:46.944408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:35:46.944707 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:35:46.945866 systemd[1]: kubelet.service: Consumed 296ms CPU time, 92.7M memory peak.
Mar 17 17:35:47.765470 containerd[1958]: time="2025-03-17T17:35:47.765411170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:47.768101 containerd[1958]: time="2025-03-17T17:35:47.768021050Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264636"
Mar 17 17:35:47.769164 containerd[1958]: time="2025-03-17T17:35:47.769082738Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:47.774841 containerd[1958]: time="2025-03-17T17:35:47.774731786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:47.777417 containerd[1958]: time="2025-03-17T17:35:47.777218042Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.583474048s"
Mar 17 17:35:47.777417 containerd[1958]: time="2025-03-17T17:35:47.777276878Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\""
Mar 17 17:35:47.818003 containerd[1958]: time="2025-03-17T17:35:47.817695854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 17 17:35:49.094643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545350598.mount: Deactivated successfully.
Mar 17 17:35:49.675962 containerd[1958]: time="2025-03-17T17:35:49.675673107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:49.677086 containerd[1958]: time="2025-03-17T17:35:49.677019819Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848"
Mar 17 17:35:49.678107 containerd[1958]: time="2025-03-17T17:35:49.678015087Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:49.681725 containerd[1958]: time="2025-03-17T17:35:49.681642568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:49.683346 containerd[1958]: time="2025-03-17T17:35:49.683161492Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.865412238s"
Mar 17 17:35:49.683346 containerd[1958]: time="2025-03-17T17:35:49.683212888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\""
Mar 17 17:35:49.725340 containerd[1958]: time="2025-03-17T17:35:49.725294992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:35:50.301545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362999649.mount: Deactivated successfully.
Mar 17 17:35:51.351774 containerd[1958]: time="2025-03-17T17:35:51.350343892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.352534 containerd[1958]: time="2025-03-17T17:35:51.352471000Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Mar 17 17:35:51.358163 containerd[1958]: time="2025-03-17T17:35:51.358112800Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.364959 containerd[1958]: time="2025-03-17T17:35:51.364903504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.367052 containerd[1958]: time="2025-03-17T17:35:51.366954088Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.641403424s"
Mar 17 17:35:51.367157 containerd[1958]: time="2025-03-17T17:35:51.367050112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Mar 17 17:35:51.413188 containerd[1958]: time="2025-03-17T17:35:51.413132728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 17 17:35:51.971118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381991400.mount: Deactivated successfully.
Mar 17 17:35:51.978800 containerd[1958]: time="2025-03-17T17:35:51.978499135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.979560 containerd[1958]: time="2025-03-17T17:35:51.979411051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Mar 17 17:35:51.981742 containerd[1958]: time="2025-03-17T17:35:51.981652183Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.986698 containerd[1958]: time="2025-03-17T17:35:51.986608387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:35:51.988427 containerd[1958]: time="2025-03-17T17:35:51.988239079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 575.046327ms"
Mar 17
17:35:51.988427 containerd[1958]: time="2025-03-17T17:35:51.988292971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:35:52.026775 containerd[1958]: time="2025-03-17T17:35:52.026662335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:35:52.545593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount833994834.mount: Deactivated successfully. Mar 17 17:35:55.843448 containerd[1958]: time="2025-03-17T17:35:55.843377134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:55.845155 containerd[1958]: time="2025-03-17T17:35:55.845051566Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Mar 17 17:35:55.847218 containerd[1958]: time="2025-03-17T17:35:55.846263758Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:55.852228 containerd[1958]: time="2025-03-17T17:35:55.852166678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:35:55.854980 containerd[1958]: time="2025-03-17T17:35:55.854900782Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.828183463s" Mar 17 17:35:55.854980 containerd[1958]: time="2025-03-17T17:35:55.854959966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:35:57.190756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:35:57.203141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:35:57.495246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:35:57.498029 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:35:57.580911 kubelet[2773]: E0317 17:35:57.580853 2773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:35:57.586491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:35:57.587017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:35:57.587999 systemd[1]: kubelet.service: Consumed 269ms CPU time, 98.5M memory peak. Mar 17 17:36:02.891780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:36:02.892697 systemd[1]: kubelet.service: Consumed 269ms CPU time, 98.5M memory peak. Mar 17 17:36:02.904461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:36:02.952887 systemd[1]: Reload requested from client PID 2787 ('systemctl') (unit session-7.scope)... Mar 17 17:36:02.952922 systemd[1]: Reloading... Mar 17 17:36:03.213011 zram_generator::config[2836]: No configuration found. Mar 17 17:36:03.450613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 17:36:03.675510 systemd[1]: Reloading finished in 721 ms.
Mar 17 17:36:03.711400 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 17 17:36:03.768388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:03.784491 (kubelet)[2891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:36:03.786874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:03.790184 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:36:03.790667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:03.790764 systemd[1]: kubelet.service: Consumed 205ms CPU time, 82.3M memory peak.
Mar 17 17:36:03.796611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:36:04.088310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:36:04.102490 (kubelet)[2903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:36:04.183346 kubelet[2903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:36:04.185452 kubelet[2903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:36:04.185452 kubelet[2903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:36:04.185452 kubelet[2903]: I0317 17:36:04.184037 2903 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:36:04.841699 kubelet[2903]: I0317 17:36:04.841637 2903 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:36:04.841699 kubelet[2903]: I0317 17:36:04.841684 2903 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:36:04.842179 kubelet[2903]: I0317 17:36:04.842137 2903 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:36:04.878240 kubelet[2903]: E0317 17:36:04.878182 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.879398 kubelet[2903]: I0317 17:36:04.879212 2903 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:36:04.893310 kubelet[2903]: I0317 17:36:04.893267 2903 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:36:04.895989 kubelet[2903]: I0317 17:36:04.895905 2903 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:36:04.896290 kubelet[2903]: I0317 17:36:04.895998 2903 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:36:04.896491 kubelet[2903]: I0317 17:36:04.896314 2903 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:36:04.896491 kubelet[2903]: I0317 17:36:04.896368 2903 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:36:04.896679 kubelet[2903]: I0317 17:36:04.896638 2903 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:36:04.898348 kubelet[2903]: I0317 17:36:04.898296 2903 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:36:04.898348 kubelet[2903]: I0317 17:36:04.898336 2903 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:36:04.898508 kubelet[2903]: I0317 17:36:04.898422 2903 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:36:04.898508 kubelet[2903]: I0317 17:36:04.898465 2903 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:36:04.901280 kubelet[2903]: W0317 17:36:04.901190 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.901422 kubelet[2903]: E0317 17:36:04.901293 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.901527 kubelet[2903]: I0317 17:36:04.901490 2903 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:36:04.902030 kubelet[2903]: I0317 17:36:04.901960 2903 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:36:04.902136 kubelet[2903]: W0317 17:36:04.902089 2903 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:36:04.906099 kubelet[2903]: W0317 17:36:04.905995 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-49&limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.906334 kubelet[2903]: E0317 17:36:04.906310 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-49&limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.908365 kubelet[2903]: I0317 17:36:04.908330 2903 server.go:1264] "Started kubelet"
Mar 17 17:36:04.915562 kubelet[2903]: I0317 17:36:04.915498 2903 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:36:04.918914 kubelet[2903]: I0317 17:36:04.918875 2903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:36:04.920744 kubelet[2903]: E0317 17:36:04.919815 2903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.49:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-49.182da7a69d559723 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-49,UID:ip-172-31-28-49,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-49,},FirstTimestamp:2025-03-17 17:36:04.908291875 +0000 UTC m=+0.798793757,LastTimestamp:2025-03-17 17:36:04.908291875 +0000 UTC m=+0.798793757,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-49,}"
Mar 17 17:36:04.920744 kubelet[2903]: I0317 17:36:04.920098 2903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:36:04.920744 kubelet[2903]: I0317 17:36:04.920603 2903 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:36:04.925663 kubelet[2903]: I0317 17:36:04.925622 2903 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:36:04.931344 kubelet[2903]: E0317 17:36:04.931302 2903 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-28-49\" not found"
Mar 17 17:36:04.931617 kubelet[2903]: I0317 17:36:04.931595 2903 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:36:04.931891 kubelet[2903]: I0317 17:36:04.931868 2903 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:36:04.932151 kubelet[2903]: I0317 17:36:04.932130 2903 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:36:04.932824 kubelet[2903]: W0317 17:36:04.932762 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.933028 kubelet[2903]: E0317 17:36:04.933004 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.934790 kubelet[2903]: E0317 17:36:04.934402 2903 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:36:04.935622 kubelet[2903]: E0317 17:36:04.935538 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": dial tcp 172.31.28.49:6443: connect: connection refused" interval="200ms"
Mar 17 17:36:04.936069 kubelet[2903]: I0317 17:36:04.936029 2903 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:36:04.936242 kubelet[2903]: I0317 17:36:04.936198 2903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:36:04.939074 kubelet[2903]: I0317 17:36:04.939026 2903 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:36:04.958360 kubelet[2903]: I0317 17:36:04.958099 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:36:04.960400 kubelet[2903]: I0317 17:36:04.960350 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:36:04.961199 kubelet[2903]: I0317 17:36:04.960604 2903 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:36:04.961199 kubelet[2903]: I0317 17:36:04.960642 2903 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:36:04.961199 kubelet[2903]: E0317 17:36:04.960716 2903 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:36:04.973803 kubelet[2903]: W0317 17:36:04.973731 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.974354 kubelet[2903]: E0317 17:36:04.974328 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:04.987898 kubelet[2903]: I0317 17:36:04.987840 2903 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:36:04.987898 kubelet[2903]: I0317 17:36:04.987873 2903 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:36:04.987898 kubelet[2903]: I0317 17:36:04.987906 2903 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:36:04.990805 kubelet[2903]: I0317 17:36:04.990754 2903 policy_none.go:49] "None policy: Start"
Mar 17 17:36:04.991856 kubelet[2903]: I0317 17:36:04.991812 2903 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:36:04.991940 kubelet[2903]: I0317 17:36:04.991860 2903 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:36:05.001471 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:36:05.017119 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:36:05.024932 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:36:05.035512 kubelet[2903]: I0317 17:36:05.034674 2903 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:36:05.035512 kubelet[2903]: I0317 17:36:05.034994 2903 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:36:05.035512 kubelet[2903]: I0317 17:36:05.035172 2903 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:36:05.038726 kubelet[2903]: I0317 17:36:05.037262 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49"
Mar 17 17:36:05.038726 kubelet[2903]: E0317 17:36:05.037761 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.49:6443/api/v1/nodes\": dial tcp 172.31.28.49:6443: connect: connection refused" node="ip-172-31-28-49"
Mar 17 17:36:05.039709 kubelet[2903]: E0317 17:36:05.038948 2903 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-49\" not found"
Mar 17 17:36:05.061561 kubelet[2903]: I0317 17:36:05.061497 2903 topology_manager.go:215] "Topology Admit Handler" podUID="713ae5b7ea6a084d6b6e367663784892" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-49"
Mar 17 17:36:05.064354 kubelet[2903]: I0317 17:36:05.064165 2903 topology_manager.go:215] "Topology Admit Handler" podUID="aa1c3b744c2a52131ca66189adfb4eca" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.067563 kubelet[2903]: I0317 17:36:05.067086 2903 topology_manager.go:215] "Topology Admit Handler" podUID="6f272b1952144d7204d34c8ebbd8a98f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-49"
Mar 17 17:36:05.079659 systemd[1]: Created slice kubepods-burstable-pod713ae5b7ea6a084d6b6e367663784892.slice - libcontainer container kubepods-burstable-pod713ae5b7ea6a084d6b6e367663784892.slice.
Mar 17 17:36:05.104465 systemd[1]: Created slice kubepods-burstable-podaa1c3b744c2a52131ca66189adfb4eca.slice - libcontainer container kubepods-burstable-podaa1c3b744c2a52131ca66189adfb4eca.slice.
Mar 17 17:36:05.112936 systemd[1]: Created slice kubepods-burstable-pod6f272b1952144d7204d34c8ebbd8a98f.slice - libcontainer container kubepods-burstable-pod6f272b1952144d7204d34c8ebbd8a98f.slice.
Mar 17 17:36:05.133539 kubelet[2903]: I0317 17:36:05.133459 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.133539 kubelet[2903]: I0317 17:36:05.133525 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f272b1952144d7204d34c8ebbd8a98f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-49\" (UID: \"6f272b1952144d7204d34c8ebbd8a98f\") " pod="kube-system/kube-scheduler-ip-172-31-28-49"
Mar 17 17:36:05.133539 kubelet[2903]: I0317 17:36:05.133565 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49"
Mar 17 17:36:05.134072 kubelet[2903]: I0317 17:36:05.133624 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49"
Mar 17 17:36:05.134072 kubelet[2903]: I0317 17:36:05.133662 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.134072 kubelet[2903]: I0317 17:36:05.133719 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.134072 kubelet[2903]: I0317 17:36:05.133753 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.134072 kubelet[2903]: I0317 17:36:05.133795 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49"
Mar 17 17:36:05.134316 kubelet[2903]: I0317 17:36:05.133842 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-ca-certs\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49"
Mar 17 17:36:05.137013 kubelet[2903]: E0317 17:36:05.136935 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": dial tcp 172.31.28.49:6443: connect: connection refused" interval="400ms"
Mar 17 17:36:05.241054 kubelet[2903]: I0317 17:36:05.240510 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49"
Mar 17 17:36:05.241054 kubelet[2903]: E0317 17:36:05.241006 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.49:6443/api/v1/nodes\": dial tcp 172.31.28.49:6443: connect: connection refused" node="ip-172-31-28-49"
Mar 17 17:36:05.399501 containerd[1958]: time="2025-03-17T17:36:05.399339186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-49,Uid:713ae5b7ea6a084d6b6e367663784892,Namespace:kube-system,Attempt:0,}"
Mar 17 17:36:05.411738 containerd[1958]: time="2025-03-17T17:36:05.411346338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-49,Uid:aa1c3b744c2a52131ca66189adfb4eca,Namespace:kube-system,Attempt:0,}"
Mar 17 17:36:05.419755 containerd[1958]: time="2025-03-17T17:36:05.419681334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-49,Uid:6f272b1952144d7204d34c8ebbd8a98f,Namespace:kube-system,Attempt:0,}"
Mar 17 17:36:05.538501 kubelet[2903]: E0317 17:36:05.538431 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": dial tcp 172.31.28.49:6443: connect: connection refused" interval="800ms"
Mar 17 17:36:05.643586 kubelet[2903]: I0317 17:36:05.642996 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49"
Mar 17 17:36:05.643586 kubelet[2903]: E0317 17:36:05.643431 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.28.49:6443/api/v1/nodes\": dial tcp 172.31.28.49:6443: connect: connection refused" node="ip-172-31-28-49"
Mar 17 17:36:05.883340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471680097.mount: Deactivated successfully.
Mar 17 17:36:05.889055 containerd[1958]: time="2025-03-17T17:36:05.888588416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:36:05.892017 containerd[1958]: time="2025-03-17T17:36:05.891120332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 17 17:36:05.899627 containerd[1958]: time="2025-03-17T17:36:05.899558396Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:36:05.902283 containerd[1958]: time="2025-03-17T17:36:05.902213372Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:36:05.903504 containerd[1958]: time="2025-03-17T17:36:05.903432200Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:36:05.906482 containerd[1958]: time="2025-03-17T17:36:05.906302264Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:36:05.907272 containerd[1958]: time="2025-03-17T17:36:05.907202600Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:36:05.912433 containerd[1958]: time="2025-03-17T17:36:05.912338972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:36:05.915830 containerd[1958]: time="2025-03-17T17:36:05.915525188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.068738ms"
Mar 17 17:36:05.917002 containerd[1958]: time="2025-03-17T17:36:05.916927184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.463834ms"
Mar 17 17:36:05.922221 containerd[1958]: time="2025-03-17T17:36:05.922071800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.27583ms"
Mar 17 17:36:05.970481 kubelet[2903]: W0317 17:36:05.970421 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:05.970615 kubelet[2903]: E0317 17:36:05.970493 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:06.119539 kubelet[2903]: W0317 17:36:06.119393 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-49&limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:06.119539 kubelet[2903]: E0317 17:36:06.119492 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-49&limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused
Mar 17 17:36:06.129093 containerd[1958]: time="2025-03-17T17:36:06.128622737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:36:06.129093 containerd[1958]: time="2025-03-17T17:36:06.128747429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:36:06.129093 containerd[1958]: time="2025-03-17T17:36:06.128800445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:06.129093 containerd[1958]: time="2025-03-17T17:36:06.129000701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140519441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140607605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140642729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140778761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140501669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140612045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140647697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:06.141342 containerd[1958]: time="2025-03-17T17:36:06.140946677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:06.183576 systemd[1]: Started cri-containerd-0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6.scope - libcontainer container 0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6. Mar 17 17:36:06.207255 systemd[1]: Started cri-containerd-120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f.scope - libcontainer container 120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f. Mar 17 17:36:06.217654 kubelet[2903]: W0317 17:36:06.217448 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused Mar 17 17:36:06.217654 kubelet[2903]: E0317 17:36:06.217552 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused Mar 17 17:36:06.222193 systemd[1]: Started cri-containerd-7f204ad61451c226e378ab19c369d6ce26b9c1ff1fd58f92024d51a095bcc1a7.scope - libcontainer container 7f204ad61451c226e378ab19c369d6ce26b9c1ff1fd58f92024d51a095bcc1a7. 
Mar 17 17:36:06.334425 containerd[1958]: time="2025-03-17T17:36:06.334051086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-49,Uid:aa1c3b744c2a52131ca66189adfb4eca,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6\"" Mar 17 17:36:06.339680 containerd[1958]: time="2025-03-17T17:36:06.339440862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-49,Uid:713ae5b7ea6a084d6b6e367663784892,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f204ad61451c226e378ab19c369d6ce26b9c1ff1fd58f92024d51a095bcc1a7\"" Mar 17 17:36:06.340642 kubelet[2903]: E0317 17:36:06.340470 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": dial tcp 172.31.28.49:6443: connect: connection refused" interval="1.6s" Mar 17 17:36:06.348006 containerd[1958]: time="2025-03-17T17:36:06.346323606Z" level=info msg="CreateContainer within sandbox \"0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:36:06.350447 containerd[1958]: time="2025-03-17T17:36:06.350377566Z" level=info msg="CreateContainer within sandbox \"7f204ad61451c226e378ab19c369d6ce26b9c1ff1fd58f92024d51a095bcc1a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:36:06.356694 containerd[1958]: time="2025-03-17T17:36:06.356638230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-49,Uid:6f272b1952144d7204d34c8ebbd8a98f,Namespace:kube-system,Attempt:0,} returns sandbox id \"120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f\"" Mar 17 17:36:06.363212 containerd[1958]: time="2025-03-17T17:36:06.363127002Z" level=info msg="CreateContainer within sandbox 
\"120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:36:06.380764 containerd[1958]: time="2025-03-17T17:36:06.380675538Z" level=info msg="CreateContainer within sandbox \"7f204ad61451c226e378ab19c369d6ce26b9c1ff1fd58f92024d51a095bcc1a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3dca95a61544be6fa7b6cb616a9932f0719f2ef11f1eb977e3dae500523ffa8\"" Mar 17 17:36:06.381891 containerd[1958]: time="2025-03-17T17:36:06.381801114Z" level=info msg="StartContainer for \"a3dca95a61544be6fa7b6cb616a9932f0719f2ef11f1eb977e3dae500523ffa8\"" Mar 17 17:36:06.384688 containerd[1958]: time="2025-03-17T17:36:06.382404750Z" level=info msg="CreateContainer within sandbox \"0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628\"" Mar 17 17:36:06.385189 containerd[1958]: time="2025-03-17T17:36:06.385148358Z" level=info msg="StartContainer for \"ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628\"" Mar 17 17:36:06.397838 containerd[1958]: time="2025-03-17T17:36:06.397581607Z" level=info msg="CreateContainer within sandbox \"120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602\"" Mar 17 17:36:06.399997 containerd[1958]: time="2025-03-17T17:36:06.399924271Z" level=info msg="StartContainer for \"9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602\"" Mar 17 17:36:06.449126 kubelet[2903]: I0317 17:36:06.448893 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49" Mar 17 17:36:06.452419 kubelet[2903]: E0317 17:36:06.451215 2903 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://172.31.28.49:6443/api/v1/nodes\": dial tcp 172.31.28.49:6443: connect: connection refused" node="ip-172-31-28-49" Mar 17 17:36:06.459080 systemd[1]: Started cri-containerd-a3dca95a61544be6fa7b6cb616a9932f0719f2ef11f1eb977e3dae500523ffa8.scope - libcontainer container a3dca95a61544be6fa7b6cb616a9932f0719f2ef11f1eb977e3dae500523ffa8. Mar 17 17:36:06.476621 systemd[1]: Started cri-containerd-ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628.scope - libcontainer container ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628. Mar 17 17:36:06.501294 systemd[1]: Started cri-containerd-9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602.scope - libcontainer container 9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602. Mar 17 17:36:06.504536 kubelet[2903]: W0317 17:36:06.504356 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused Mar 17 17:36:06.504989 kubelet[2903]: E0317 17:36:06.504828 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.49:6443: connect: connection refused Mar 17 17:36:06.625548 containerd[1958]: time="2025-03-17T17:36:06.625411052Z" level=info msg="StartContainer for \"a3dca95a61544be6fa7b6cb616a9932f0719f2ef11f1eb977e3dae500523ffa8\" returns successfully" Mar 17 17:36:06.625667 containerd[1958]: time="2025-03-17T17:36:06.625577768Z" level=info msg="StartContainer for \"9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602\" returns successfully" Mar 17 17:36:06.632997 containerd[1958]: time="2025-03-17T17:36:06.632722364Z" level=info msg="StartContainer for 
\"ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628\" returns successfully" Mar 17 17:36:08.055490 kubelet[2903]: I0317 17:36:08.055435 2903 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49" Mar 17 17:36:10.433279 kubelet[2903]: E0317 17:36:10.433217 2903 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-49\" not found" node="ip-172-31-28-49" Mar 17 17:36:10.531834 kubelet[2903]: I0317 17:36:10.531629 2903 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-49" Mar 17 17:36:10.901765 kubelet[2903]: I0317 17:36:10.901633 2903 apiserver.go:52] "Watching apiserver" Mar 17 17:36:10.933028 kubelet[2903]: I0317 17:36:10.932934 2903 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:36:12.577343 systemd[1]: Reload requested from client PID 3176 ('systemctl') (unit session-7.scope)... Mar 17 17:36:12.577369 systemd[1]: Reloading... Mar 17 17:36:12.789012 zram_generator::config[3227]: No configuration found. Mar 17 17:36:13.015832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:36:13.273286 systemd[1]: Reloading finished in 695 ms. Mar 17 17:36:13.329805 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:36:13.345738 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:36:13.346947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:36:13.347061 systemd[1]: kubelet.service: Consumed 1.484s CPU time, 113.7M memory peak. Mar 17 17:36:13.354504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:36:13.652011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:36:13.668636 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:36:13.760005 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:36:13.760005 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:36:13.760005 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:36:13.760545 kubelet[3281]: I0317 17:36:13.760143 3281 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:36:13.769354 kubelet[3281]: I0317 17:36:13.769299 3281 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:36:13.769354 kubelet[3281]: I0317 17:36:13.769340 3281 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:36:13.770040 kubelet[3281]: I0317 17:36:13.769691 3281 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:36:13.772672 kubelet[3281]: I0317 17:36:13.772617 3281 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:36:13.775546 kubelet[3281]: I0317 17:36:13.775055 3281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:36:13.802349 kubelet[3281]: I0317 17:36:13.802251 3281 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:36:13.803254 kubelet[3281]: I0317 17:36:13.803167 3281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:36:13.803737 kubelet[3281]: I0317 17:36:13.803236 3281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-49","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:36:13.804959 kubelet[3281]: I0317 17:36:13.803571 3281 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
17:36:13.804959 kubelet[3281]: I0317 17:36:13.804736 3281 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:36:13.806024 kubelet[3281]: I0317 17:36:13.805387 3281 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:36:13.808472 kubelet[3281]: I0317 17:36:13.808204 3281 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:36:13.808472 kubelet[3281]: I0317 17:36:13.808255 3281 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:36:13.808472 kubelet[3281]: I0317 17:36:13.808329 3281 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:36:13.808472 kubelet[3281]: I0317 17:36:13.808365 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:36:13.813061 kubelet[3281]: I0317 17:36:13.812506 3281 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:36:13.813061 kubelet[3281]: I0317 17:36:13.812845 3281 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:36:13.815537 kubelet[3281]: I0317 17:36:13.815495 3281 server.go:1264] "Started kubelet" Mar 17 17:36:13.830061 kubelet[3281]: I0317 17:36:13.827931 3281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:36:13.830061 kubelet[3281]: I0317 17:36:13.829160 3281 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:36:13.830061 kubelet[3281]: I0317 17:36:13.829219 3281 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:36:13.830061 kubelet[3281]: I0317 17:36:13.829842 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:36:13.833032 kubelet[3281]: I0317 17:36:13.831786 3281 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:36:13.838278 kubelet[3281]: I0317 17:36:13.838222 3281 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:36:13.839060 kubelet[3281]: I0317 17:36:13.839011 3281 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:36:13.839328 kubelet[3281]: I0317 17:36:13.839292 3281 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:36:13.854307 kubelet[3281]: I0317 17:36:13.853538 3281 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:36:13.854889 kubelet[3281]: I0317 17:36:13.854419 3281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:36:13.871683 sudo[3298]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:36:13.873742 sudo[3298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:36:13.879766 kubelet[3281]: E0317 17:36:13.879387 3281 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:36:13.924933 kubelet[3281]: I0317 17:36:13.924791 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:36:13.928918 kubelet[3281]: I0317 17:36:13.928682 3281 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:36:13.936561 kubelet[3281]: I0317 17:36:13.936150 3281 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:36:13.936561 kubelet[3281]: I0317 17:36:13.936251 3281 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:36:13.936561 kubelet[3281]: I0317 17:36:13.936311 3281 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:36:13.936561 kubelet[3281]: E0317 17:36:13.936414 3281 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:36:14.003038 kubelet[3281]: E0317 17:36:14.002672 3281 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Mar 17 17:36:14.026298 kubelet[3281]: I0317 17:36:14.021694 3281 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-28-49" Mar 17 17:36:14.045534 kubelet[3281]: E0317 17:36:14.045296 3281 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:36:14.099021 kubelet[3281]: I0317 17:36:14.096026 3281 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-28-49" Mar 17 17:36:14.099021 kubelet[3281]: I0317 17:36:14.096147 3281 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-28-49" Mar 17 17:36:14.205960 kubelet[3281]: I0317 17:36:14.205842 3281 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:36:14.206228 kubelet[3281]: I0317 17:36:14.206205 3281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:36:14.206381 kubelet[3281]: I0317 17:36:14.206362 3281 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:36:14.206879 kubelet[3281]: I0317 17:36:14.206854 3281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:36:14.207032 kubelet[3281]: I0317 17:36:14.206991 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:36:14.207150 kubelet[3281]: I0317 17:36:14.207131 3281 policy_none.go:49] 
"None policy: Start" Mar 17 17:36:14.209440 kubelet[3281]: I0317 17:36:14.209405 3281 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:36:14.209992 kubelet[3281]: I0317 17:36:14.209944 3281 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:36:14.211206 kubelet[3281]: I0317 17:36:14.211147 3281 state_mem.go:75] "Updated machine memory state" Mar 17 17:36:14.225895 kubelet[3281]: I0317 17:36:14.225859 3281 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:36:14.227956 kubelet[3281]: I0317 17:36:14.227333 3281 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:36:14.231749 kubelet[3281]: I0317 17:36:14.231666 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:36:14.249046 kubelet[3281]: I0317 17:36:14.248219 3281 topology_manager.go:215] "Topology Admit Handler" podUID="713ae5b7ea6a084d6b6e367663784892" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-28-49" Mar 17 17:36:14.249046 kubelet[3281]: I0317 17:36:14.248370 3281 topology_manager.go:215] "Topology Admit Handler" podUID="aa1c3b744c2a52131ca66189adfb4eca" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.249046 kubelet[3281]: I0317 17:36:14.248445 3281 topology_manager.go:215] "Topology Admit Handler" podUID="6f272b1952144d7204d34c8ebbd8a98f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-28-49" Mar 17 17:36:14.300945 kubelet[3281]: E0317 17:36:14.300899 3281 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-49\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-49" Mar 17 17:36:14.344657 kubelet[3281]: I0317 17:36:14.344600 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.345484 kubelet[3281]: I0317 17:36:14.345110 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.345484 kubelet[3281]: I0317 17:36:14.345165 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49" Mar 17 17:36:14.345484 kubelet[3281]: I0317 17:36:14.345204 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49" Mar 17 17:36:14.345484 kubelet[3281]: I0317 17:36:14.345243 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.345484 kubelet[3281]: I0317 17:36:14.345297 3281 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.345749 kubelet[3281]: I0317 17:36:14.345335 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f272b1952144d7204d34c8ebbd8a98f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-49\" (UID: \"6f272b1952144d7204d34c8ebbd8a98f\") " pod="kube-system/kube-scheduler-ip-172-31-28-49" Mar 17 17:36:14.345749 kubelet[3281]: I0317 17:36:14.345382 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713ae5b7ea6a084d6b6e367663784892-ca-certs\") pod \"kube-apiserver-ip-172-31-28-49\" (UID: \"713ae5b7ea6a084d6b6e367663784892\") " pod="kube-system/kube-apiserver-ip-172-31-28-49" Mar 17 17:36:14.346148 kubelet[3281]: I0317 17:36:14.345421 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa1c3b744c2a52131ca66189adfb4eca-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-49\" (UID: \"aa1c3b744c2a52131ca66189adfb4eca\") " pod="kube-system/kube-controller-manager-ip-172-31-28-49" Mar 17 17:36:14.800837 sudo[3298]: pam_unix(sudo:session): session closed for user root Mar 17 17:36:14.810041 kubelet[3281]: I0317 17:36:14.809778 3281 apiserver.go:52] "Watching apiserver" Mar 17 17:36:14.839997 kubelet[3281]: I0317 17:36:14.839910 3281 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:36:14.964856 kubelet[3281]: I0317 17:36:14.964757 3281 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-49" podStartSLOduration=0.964735097 podStartE2EDuration="964.735097ms" podCreationTimestamp="2025-03-17 17:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:14.947572253 +0000 UTC m=+1.271462371" watchObservedRunningTime="2025-03-17 17:36:14.964735097 +0000 UTC m=+1.288625203" Mar 17 17:36:14.986663 kubelet[3281]: I0317 17:36:14.986570 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-49" podStartSLOduration=0.986546741 podStartE2EDuration="986.546741ms" podCreationTimestamp="2025-03-17 17:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:14.965243681 +0000 UTC m=+1.289133787" watchObservedRunningTime="2025-03-17 17:36:14.986546741 +0000 UTC m=+1.310436847" Mar 17 17:36:15.013611 kubelet[3281]: I0317 17:36:15.013442 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-49" podStartSLOduration=4.013392469 podStartE2EDuration="4.013392469s" podCreationTimestamp="2025-03-17 17:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:14.987922517 +0000 UTC m=+1.311812611" watchObservedRunningTime="2025-03-17 17:36:15.013392469 +0000 UTC m=+1.337282575" Mar 17 17:36:17.311529 sudo[2287]: pam_unix(sudo:session): session closed for user root Mar 17 17:36:17.335040 sshd[2286]: Connection closed by 147.75.109.163 port 59696 Mar 17 17:36:17.335839 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Mar 17 17:36:17.342513 systemd[1]: sshd@6-172.31.28.49:22-147.75.109.163:59696.service: Deactivated successfully. 
Mar 17 17:36:17.348333 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:36:17.348814 systemd[1]: session-7.scope: Consumed 10.896s CPU time, 293.1M memory peak. Mar 17 17:36:17.351561 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:36:17.354283 systemd-logind[1938]: Removed session 7. Mar 17 17:36:17.385496 update_engine[1939]: I20250317 17:36:17.385407 1939 update_attempter.cc:509] Updating boot flags... Mar 17 17:36:17.470961 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3366) Mar 17 17:36:17.786095 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3365) Mar 17 17:36:18.123706 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3365) Mar 17 17:36:27.228429 kubelet[3281]: I0317 17:36:27.228385 3281 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:36:27.229694 containerd[1958]: time="2025-03-17T17:36:27.229636430Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:36:27.230250 kubelet[3281]: I0317 17:36:27.230179 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:36:28.094001 kubelet[3281]: I0317 17:36:28.093772 3281 topology_manager.go:215] "Topology Admit Handler" podUID="5c9ad6ca-2160-47a2-801b-b7d0952c2bea" podNamespace="kube-system" podName="kube-proxy-7jlpk" Mar 17 17:36:28.112591 systemd[1]: Created slice kubepods-besteffort-pod5c9ad6ca_2160_47a2_801b_b7d0952c2bea.slice - libcontainer container kubepods-besteffort-pod5c9ad6ca_2160_47a2_801b_b7d0952c2bea.slice. 
Mar 17 17:36:28.121793 kubelet[3281]: I0317 17:36:28.121550 3281 topology_manager.go:215] "Topology Admit Handler" podUID="14597fff-e0ca-423a-a062-5519920f1786" podNamespace="kube-system" podName="cilium-dbcfg" Mar 17 17:36:28.137999 kubelet[3281]: I0317 17:36:28.137764 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14597fff-e0ca-423a-a062-5519920f1786-clustermesh-secrets\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.137999 kubelet[3281]: I0317 17:36:28.137834 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14597fff-e0ca-423a-a062-5519920f1786-cilium-config-path\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.137999 kubelet[3281]: I0317 17:36:28.137892 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpvh\" (UniqueName: \"kubernetes.io/projected/5c9ad6ca-2160-47a2-801b-b7d0952c2bea-kube-api-access-ptpvh\") pod \"kube-proxy-7jlpk\" (UID: \"5c9ad6ca-2160-47a2-801b-b7d0952c2bea\") " pod="kube-system/kube-proxy-7jlpk" Mar 17 17:36:28.137999 kubelet[3281]: I0317 17:36:28.137949 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cni-path\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.138854 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-xtables-lock\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.138929 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-hostproc\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.139008 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c9ad6ca-2160-47a2-801b-b7d0952c2bea-kube-proxy\") pod \"kube-proxy-7jlpk\" (UID: \"5c9ad6ca-2160-47a2-801b-b7d0952c2bea\") " pod="kube-system/kube-proxy-7jlpk" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.139061 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-run\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.139116 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9ad6ca-2160-47a2-801b-b7d0952c2bea-lib-modules\") pod \"kube-proxy-7jlpk\" (UID: \"5c9ad6ca-2160-47a2-801b-b7d0952c2bea\") " pod="kube-system/kube-proxy-7jlpk" Mar 17 17:36:28.139384 kubelet[3281]: I0317 17:36:28.139307 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-bpf-maps\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " 
pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139363 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-cgroup\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139403 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-net\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139475 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-kernel\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139529 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9ad6ca-2160-47a2-801b-b7d0952c2bea-xtables-lock\") pod \"kube-proxy-7jlpk\" (UID: \"5c9ad6ca-2160-47a2-801b-b7d0952c2bea\") " pod="kube-system/kube-proxy-7jlpk" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139580 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-etc-cni-netd\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.139787 kubelet[3281]: I0317 17:36:28.139625 3281 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-lib-modules\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.154172 systemd[1]: Created slice kubepods-burstable-pod14597fff_e0ca_423a_a062_5519920f1786.slice - libcontainer container kubepods-burstable-pod14597fff_e0ca_423a_a062_5519920f1786.slice. Mar 17 17:36:28.242301 kubelet[3281]: I0317 17:36:28.240324 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-hubble-tls\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.242301 kubelet[3281]: I0317 17:36:28.240511 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5p95\" (UniqueName: \"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-kube-api-access-s5p95\") pod \"cilium-dbcfg\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " pod="kube-system/cilium-dbcfg" Mar 17 17:36:28.381076 kubelet[3281]: I0317 17:36:28.379920 3281 topology_manager.go:215] "Topology Admit Handler" podUID="2db1b07e-a522-45f2-97ac-0acdeb5d9d09" podNamespace="kube-system" podName="cilium-operator-599987898-tq4s9" Mar 17 17:36:28.398902 systemd[1]: Created slice kubepods-besteffort-pod2db1b07e_a522_45f2_97ac_0acdeb5d9d09.slice - libcontainer container kubepods-besteffort-pod2db1b07e_a522_45f2_97ac_0acdeb5d9d09.slice. 
Mar 17 17:36:28.437040 containerd[1958]: time="2025-03-17T17:36:28.436942360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jlpk,Uid:5c9ad6ca-2160-47a2-801b-b7d0952c2bea,Namespace:kube-system,Attempt:0,}" Mar 17 17:36:28.465362 containerd[1958]: time="2025-03-17T17:36:28.465289648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbcfg,Uid:14597fff-e0ca-423a-a062-5519920f1786,Namespace:kube-system,Attempt:0,}" Mar 17 17:36:28.526714 containerd[1958]: time="2025-03-17T17:36:28.526302616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:36:28.526714 containerd[1958]: time="2025-03-17T17:36:28.526412272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:36:28.526714 containerd[1958]: time="2025-03-17T17:36:28.526442056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.526714 containerd[1958]: time="2025-03-17T17:36:28.526630468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.544719 kubelet[3281]: I0317 17:36:28.544177 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bh5\" (UniqueName: \"kubernetes.io/projected/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-kube-api-access-z4bh5\") pod \"cilium-operator-599987898-tq4s9\" (UID: \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\") " pod="kube-system/cilium-operator-599987898-tq4s9" Mar 17 17:36:28.544719 kubelet[3281]: I0317 17:36:28.544265 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-cilium-config-path\") pod \"cilium-operator-599987898-tq4s9\" (UID: \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\") " pod="kube-system/cilium-operator-599987898-tq4s9" Mar 17 17:36:28.565324 systemd[1]: Started cri-containerd-c373cbceccf19a7d4688f9f2b8034786f991902f3f0f2f3b9bfa76947937ae35.scope - libcontainer container c373cbceccf19a7d4688f9f2b8034786f991902f3f0f2f3b9bfa76947937ae35. Mar 17 17:36:28.569465 containerd[1958]: time="2025-03-17T17:36:28.568765685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:36:28.569707 containerd[1958]: time="2025-03-17T17:36:28.569496761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:36:28.571021 containerd[1958]: time="2025-03-17T17:36:28.570457109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.571021 containerd[1958]: time="2025-03-17T17:36:28.570638321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.610518 systemd[1]: Started cri-containerd-9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a.scope - libcontainer container 9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a. Mar 17 17:36:28.636956 containerd[1958]: time="2025-03-17T17:36:28.635184881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7jlpk,Uid:5c9ad6ca-2160-47a2-801b-b7d0952c2bea,Namespace:kube-system,Attempt:0,} returns sandbox id \"c373cbceccf19a7d4688f9f2b8034786f991902f3f0f2f3b9bfa76947937ae35\"" Mar 17 17:36:28.650346 containerd[1958]: time="2025-03-17T17:36:28.649816973Z" level=info msg="CreateContainer within sandbox \"c373cbceccf19a7d4688f9f2b8034786f991902f3f0f2f3b9bfa76947937ae35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:36:28.696393 containerd[1958]: time="2025-03-17T17:36:28.696324113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbcfg,Uid:14597fff-e0ca-423a-a062-5519920f1786,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\"" Mar 17 17:36:28.700395 containerd[1958]: time="2025-03-17T17:36:28.700321949Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:36:28.701767 containerd[1958]: time="2025-03-17T17:36:28.701697041Z" level=info msg="CreateContainer within sandbox \"c373cbceccf19a7d4688f9f2b8034786f991902f3f0f2f3b9bfa76947937ae35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5ad613ac21fb76b79deed86c4145640b626b5b7b64558a4007a5994e1c12ded6\"" Mar 17 17:36:28.702614 containerd[1958]: time="2025-03-17T17:36:28.702562313Z" level=info msg="StartContainer for \"5ad613ac21fb76b79deed86c4145640b626b5b7b64558a4007a5994e1c12ded6\"" Mar 17 17:36:28.711154 containerd[1958]: time="2025-03-17T17:36:28.710687885Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tq4s9,Uid:2db1b07e-a522-45f2-97ac-0acdeb5d9d09,Namespace:kube-system,Attempt:0,}" Mar 17 17:36:28.758462 systemd[1]: Started cri-containerd-5ad613ac21fb76b79deed86c4145640b626b5b7b64558a4007a5994e1c12ded6.scope - libcontainer container 5ad613ac21fb76b79deed86c4145640b626b5b7b64558a4007a5994e1c12ded6. Mar 17 17:36:28.782535 containerd[1958]: time="2025-03-17T17:36:28.781995894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:36:28.782535 containerd[1958]: time="2025-03-17T17:36:28.782110146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:36:28.782535 containerd[1958]: time="2025-03-17T17:36:28.782146434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.785830 containerd[1958]: time="2025-03-17T17:36:28.782551866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:36:28.822774 systemd[1]: Started cri-containerd-4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5.scope - libcontainer container 4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5. 
Mar 17 17:36:28.871126 containerd[1958]: time="2025-03-17T17:36:28.871015770Z" level=info msg="StartContainer for \"5ad613ac21fb76b79deed86c4145640b626b5b7b64558a4007a5994e1c12ded6\" returns successfully" Mar 17 17:36:28.931831 containerd[1958]: time="2025-03-17T17:36:28.930891798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-tq4s9,Uid:2db1b07e-a522-45f2-97ac-0acdeb5d9d09,Namespace:kube-system,Attempt:0,} returns sandbox id \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\"" Mar 17 17:36:33.960639 kubelet[3281]: I0317 17:36:33.960294 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7jlpk" podStartSLOduration=5.960271379 podStartE2EDuration="5.960271379s" podCreationTimestamp="2025-03-17 17:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:29.177616228 +0000 UTC m=+15.501506370" watchObservedRunningTime="2025-03-17 17:36:33.960271379 +0000 UTC m=+20.284161485" Mar 17 17:36:35.760014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430403862.mount: Deactivated successfully. 
Mar 17 17:36:38.280375 containerd[1958]: time="2025-03-17T17:36:38.280317349Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:36:38.282963 containerd[1958]: time="2025-03-17T17:36:38.282899449Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:36:38.284226 containerd[1958]: time="2025-03-17T17:36:38.284180797Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:36:38.295753 containerd[1958]: time="2025-03-17T17:36:38.295628809Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.594142152s" Mar 17 17:36:38.295753 containerd[1958]: time="2025-03-17T17:36:38.295694365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:36:38.299535 containerd[1958]: time="2025-03-17T17:36:38.299236873Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 17:36:38.302985 containerd[1958]: time="2025-03-17T17:36:38.302460253Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:36:38.357420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418465244.mount: Deactivated successfully. Mar 17 17:36:38.377029 containerd[1958]: time="2025-03-17T17:36:38.376937497Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\"" Mar 17 17:36:38.379398 containerd[1958]: time="2025-03-17T17:36:38.379342237Z" level=info msg="StartContainer for \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\"" Mar 17 17:36:38.436052 systemd[1]: run-containerd-runc-k8s.io-9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882-runc.gCu9qG.mount: Deactivated successfully. Mar 17 17:36:38.450275 systemd[1]: Started cri-containerd-9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882.scope - libcontainer container 9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882. Mar 17 17:36:38.500193 containerd[1958]: time="2025-03-17T17:36:38.500107490Z" level=info msg="StartContainer for \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\" returns successfully" Mar 17 17:36:38.518344 systemd[1]: cri-containerd-9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882.scope: Deactivated successfully. Mar 17 17:36:39.342338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882-rootfs.mount: Deactivated successfully. 
Mar 17 17:36:39.540025 containerd[1958]: time="2025-03-17T17:36:39.539900631Z" level=info msg="shim disconnected" id=9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882 namespace=k8s.io Mar 17 17:36:39.540025 containerd[1958]: time="2025-03-17T17:36:39.540012123Z" level=warning msg="cleaning up after shim disconnected" id=9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882 namespace=k8s.io Mar 17 17:36:39.541023 containerd[1958]: time="2025-03-17T17:36:39.540033699Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:36:40.213644 containerd[1958]: time="2025-03-17T17:36:40.213591339Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:36:40.246938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574248146.mount: Deactivated successfully. Mar 17 17:36:40.254699 containerd[1958]: time="2025-03-17T17:36:40.254568915Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\"" Mar 17 17:36:40.257132 containerd[1958]: time="2025-03-17T17:36:40.256512063Z" level=info msg="StartContainer for \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\"" Mar 17 17:36:40.311303 systemd[1]: Started cri-containerd-3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490.scope - libcontainer container 3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490. Mar 17 17:36:40.365455 containerd[1958]: time="2025-03-17T17:36:40.365324907Z" level=info msg="StartContainer for \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\" returns successfully" Mar 17 17:36:40.390042 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 17:36:40.390544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:36:40.391710 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:36:40.398147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:36:40.403800 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 17:36:40.406539 systemd[1]: cri-containerd-3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490.scope: Deactivated successfully. Mar 17 17:36:40.454128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:36:40.467829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490-rootfs.mount: Deactivated successfully. Mar 17 17:36:40.475209 containerd[1958]: time="2025-03-17T17:36:40.475064452Z" level=info msg="shim disconnected" id=3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490 namespace=k8s.io Mar 17 17:36:40.475209 containerd[1958]: time="2025-03-17T17:36:40.475138084Z" level=warning msg="cleaning up after shim disconnected" id=3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490 namespace=k8s.io Mar 17 17:36:40.475209 containerd[1958]: time="2025-03-17T17:36:40.475157560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:36:41.213143 containerd[1958]: time="2025-03-17T17:36:41.212522235Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:36:41.250950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2908834225.mount: Deactivated successfully. 
Mar 17 17:36:41.278800 containerd[1958]: time="2025-03-17T17:36:41.278739748Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\"" Mar 17 17:36:41.282053 containerd[1958]: time="2025-03-17T17:36:41.281570272Z" level=info msg="StartContainer for \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\"" Mar 17 17:36:41.349251 systemd[1]: Started cri-containerd-796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906.scope - libcontainer container 796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906. Mar 17 17:36:41.444152 containerd[1958]: time="2025-03-17T17:36:41.444083117Z" level=info msg="StartContainer for \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\" returns successfully" Mar 17 17:36:41.445565 systemd[1]: cri-containerd-796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906.scope: Deactivated successfully. Mar 17 17:36:41.517444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906-rootfs.mount: Deactivated successfully. 
Mar 17 17:36:41.542445 containerd[1958]: time="2025-03-17T17:36:41.541749293Z" level=info msg="shim disconnected" id=796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906 namespace=k8s.io Mar 17 17:36:41.542445 containerd[1958]: time="2025-03-17T17:36:41.542181005Z" level=warning msg="cleaning up after shim disconnected" id=796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906 namespace=k8s.io Mar 17 17:36:41.542445 containerd[1958]: time="2025-03-17T17:36:41.542201381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:36:42.008052 containerd[1958]: time="2025-03-17T17:36:42.007675743Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:36:42.010593 containerd[1958]: time="2025-03-17T17:36:42.010503735Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:36:42.013512 containerd[1958]: time="2025-03-17T17:36:42.013090215Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:36:42.017855 containerd[1958]: time="2025-03-17T17:36:42.017800059Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.718495638s" Mar 17 17:36:42.018118 containerd[1958]: time="2025-03-17T17:36:42.018082959Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:36:42.023950 containerd[1958]: time="2025-03-17T17:36:42.023882739Z" level=info msg="CreateContainer within sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:36:42.057325 containerd[1958]: time="2025-03-17T17:36:42.057266860Z" level=info msg="CreateContainer within sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\"" Mar 17 17:36:42.060020 containerd[1958]: time="2025-03-17T17:36:42.058586392Z" level=info msg="StartContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\"" Mar 17 17:36:42.105335 systemd[1]: Started cri-containerd-973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c.scope - libcontainer container 973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c. 
Mar 17 17:36:42.155073 containerd[1958]: time="2025-03-17T17:36:42.154953700Z" level=info msg="StartContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" returns successfully" Mar 17 17:36:42.239720 kubelet[3281]: I0317 17:36:42.239619 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-tq4s9" podStartSLOduration=1.156031756 podStartE2EDuration="14.239596457s" podCreationTimestamp="2025-03-17 17:36:28 +0000 UTC" firstStartedPulling="2025-03-17 17:36:28.936229566 +0000 UTC m=+15.260119672" lastFinishedPulling="2025-03-17 17:36:42.019794279 +0000 UTC m=+28.343684373" observedRunningTime="2025-03-17 17:36:42.238907693 +0000 UTC m=+28.562797823" watchObservedRunningTime="2025-03-17 17:36:42.239596457 +0000 UTC m=+28.563486599" Mar 17 17:36:42.246015 containerd[1958]: time="2025-03-17T17:36:42.244648709Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:36:42.292795 containerd[1958]: time="2025-03-17T17:36:42.292123949Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\"" Mar 17 17:36:42.294546 containerd[1958]: time="2025-03-17T17:36:42.294459293Z" level=info msg="StartContainer for \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\"" Mar 17 17:36:42.365722 systemd[1]: Started cri-containerd-6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78.scope - libcontainer container 6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78. Mar 17 17:36:42.443089 systemd[1]: cri-containerd-6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78.scope: Deactivated successfully. 
Mar 17 17:36:42.456163 containerd[1958]: time="2025-03-17T17:36:42.453946662Z" level=info msg="StartContainer for \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\" returns successfully"
Mar 17 17:36:42.507789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78-rootfs.mount: Deactivated successfully.
Mar 17 17:36:42.575747 containerd[1958]: time="2025-03-17T17:36:42.575304774Z" level=info msg="shim disconnected" id=6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78 namespace=k8s.io
Mar 17 17:36:42.575747 containerd[1958]: time="2025-03-17T17:36:42.575381082Z" level=warning msg="cleaning up after shim disconnected" id=6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78 namespace=k8s.io
Mar 17 17:36:42.575747 containerd[1958]: time="2025-03-17T17:36:42.575401890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:36:43.253835 containerd[1958]: time="2025-03-17T17:36:43.253535562Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 17:36:43.294596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051820784.mount: Deactivated successfully.
Mar 17 17:36:43.298399 containerd[1958]: time="2025-03-17T17:36:43.298093914Z" level=info msg="CreateContainer within sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\""
Mar 17 17:36:43.299515 containerd[1958]: time="2025-03-17T17:36:43.299354370Z" level=info msg="StartContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\""
Mar 17 17:36:43.388283 systemd[1]: Started cri-containerd-e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596.scope - libcontainer container e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596.
Mar 17 17:36:43.525696 containerd[1958]: time="2025-03-17T17:36:43.524663551Z" level=info msg="StartContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" returns successfully"
Mar 17 17:36:43.936309 kubelet[3281]: I0317 17:36:43.934909 3281 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Mar 17 17:36:44.007490 kubelet[3281]: I0317 17:36:44.006769 3281 topology_manager.go:215] "Topology Admit Handler" podUID="792c685d-e9d3-445f-880d-e0fcc8e58c03" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mwwxn"
Mar 17 17:36:44.016136 kubelet[3281]: I0317 17:36:44.016053 3281 topology_manager.go:215] "Topology Admit Handler" podUID="615fc2d4-e595-412a-a46b-9206b243e316" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5vklt"
Mar 17 17:36:44.032381 systemd[1]: Created slice kubepods-burstable-pod792c685d_e9d3_445f_880d_e0fcc8e58c03.slice - libcontainer container kubepods-burstable-pod792c685d_e9d3_445f_880d_e0fcc8e58c03.slice.
Mar 17 17:36:44.050560 systemd[1]: Created slice kubepods-burstable-pod615fc2d4_e595_412a_a46b_9206b243e316.slice - libcontainer container kubepods-burstable-pod615fc2d4_e595_412a_a46b_9206b243e316.slice.
Mar 17 17:36:44.074400 kubelet[3281]: I0317 17:36:44.074353 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdwk\" (UniqueName: \"kubernetes.io/projected/792c685d-e9d3-445f-880d-e0fcc8e58c03-kube-api-access-xjdwk\") pod \"coredns-7db6d8ff4d-mwwxn\" (UID: \"792c685d-e9d3-445f-880d-e0fcc8e58c03\") " pod="kube-system/coredns-7db6d8ff4d-mwwxn"
Mar 17 17:36:44.074782 kubelet[3281]: I0317 17:36:44.074636 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4nxw\" (UniqueName: \"kubernetes.io/projected/615fc2d4-e595-412a-a46b-9206b243e316-kube-api-access-j4nxw\") pod \"coredns-7db6d8ff4d-5vklt\" (UID: \"615fc2d4-e595-412a-a46b-9206b243e316\") " pod="kube-system/coredns-7db6d8ff4d-5vklt"
Mar 17 17:36:44.075031 kubelet[3281]: I0317 17:36:44.074761 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792c685d-e9d3-445f-880d-e0fcc8e58c03-config-volume\") pod \"coredns-7db6d8ff4d-mwwxn\" (UID: \"792c685d-e9d3-445f-880d-e0fcc8e58c03\") " pod="kube-system/coredns-7db6d8ff4d-mwwxn"
Mar 17 17:36:44.075031 kubelet[3281]: I0317 17:36:44.074960 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/615fc2d4-e595-412a-a46b-9206b243e316-config-volume\") pod \"coredns-7db6d8ff4d-5vklt\" (UID: \"615fc2d4-e595-412a-a46b-9206b243e316\") " pod="kube-system/coredns-7db6d8ff4d-5vklt"
Mar 17 17:36:44.347065 containerd[1958]: time="2025-03-17T17:36:44.346963867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mwwxn,Uid:792c685d-e9d3-445f-880d-e0fcc8e58c03,Namespace:kube-system,Attempt:0,}"
Mar 17 17:36:44.361428 containerd[1958]: time="2025-03-17T17:36:44.361353523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5vklt,Uid:615fc2d4-e595-412a-a46b-9206b243e316,Namespace:kube-system,Attempt:0,}"
Mar 17 17:36:46.853847 (udev-worker)[4343]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:36:46.860046 systemd-networkd[1870]: cilium_host: Link UP
Mar 17 17:36:46.860421 systemd-networkd[1870]: cilium_net: Link UP
Mar 17 17:36:46.860727 systemd-networkd[1870]: cilium_net: Gained carrier
Mar 17 17:36:46.861092 systemd-networkd[1870]: cilium_host: Gained carrier
Mar 17 17:36:46.863402 (udev-worker)[4344]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:36:47.032628 (udev-worker)[4384]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:36:47.040305 systemd-networkd[1870]: cilium_host: Gained IPv6LL
Mar 17 17:36:47.046105 systemd-networkd[1870]: cilium_vxlan: Link UP
Mar 17 17:36:47.046125 systemd-networkd[1870]: cilium_vxlan: Gained carrier
Mar 17 17:36:47.104459 systemd-networkd[1870]: cilium_net: Gained IPv6LL
Mar 17 17:36:47.536169 kernel: NET: Registered PF_ALG protocol family
Mar 17 17:36:48.194119 systemd-networkd[1870]: cilium_vxlan: Gained IPv6LL
Mar 17 17:36:48.881212 systemd-networkd[1870]: lxc_health: Link UP
Mar 17 17:36:48.905537 systemd-networkd[1870]: lxc_health: Gained carrier
Mar 17 17:36:49.458054 kernel: eth0: renamed from tmp15c41
Mar 17 17:36:49.455632 systemd-networkd[1870]: lxcea6ac266709c: Link UP
Mar 17 17:36:49.465554 systemd-networkd[1870]: lxcea6ac266709c: Gained carrier
Mar 17 17:36:49.533372 systemd-networkd[1870]: lxce25a11287ae4: Link UP
Mar 17 17:36:49.539369 kernel: eth0: renamed from tmp9ada5
Mar 17 17:36:49.544237 systemd-networkd[1870]: lxce25a11287ae4: Gained carrier
Mar 17 17:36:50.496261 systemd-networkd[1870]: lxc_health: Gained IPv6LL
Mar 17 17:36:50.504244 kubelet[3281]: I0317 17:36:50.503754 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dbcfg" podStartSLOduration=12.905009682 podStartE2EDuration="22.50373245s" podCreationTimestamp="2025-03-17 17:36:28 +0000 UTC" firstStartedPulling="2025-03-17 17:36:28.698593061 +0000 UTC m=+15.022483167" lastFinishedPulling="2025-03-17 17:36:38.297315769 +0000 UTC m=+24.621205935" observedRunningTime="2025-03-17 17:36:44.299316259 +0000 UTC m=+30.623206389" watchObservedRunningTime="2025-03-17 17:36:50.50373245 +0000 UTC m=+36.827622556"
Mar 17 17:36:51.008195 systemd-networkd[1870]: lxcea6ac266709c: Gained IPv6LL
Mar 17 17:36:51.328454 systemd-networkd[1870]: lxce25a11287ae4: Gained IPv6LL
Mar 17 17:36:52.368175 systemd[1]: Started sshd@7-172.31.28.49:22-147.75.109.163:53916.service - OpenSSH per-connection server daemon (147.75.109.163:53916).
Mar 17 17:36:52.563225 sshd[4736]: Accepted publickey for core from 147.75.109.163 port 53916 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:36:52.565843 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:36:52.574800 systemd-logind[1938]: New session 8 of user core.
Mar 17 17:36:52.582631 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 17 17:36:52.898005 sshd[4740]: Connection closed by 147.75.109.163 port 53916
Mar 17 17:36:52.902150 sshd-session[4736]: pam_unix(sshd:session): session closed for user core
Mar 17 17:36:52.911288 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit.
Mar 17 17:36:52.912513 systemd[1]: sshd@7-172.31.28.49:22-147.75.109.163:53916.service: Deactivated successfully.
Mar 17 17:36:52.920920 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 17:36:52.930103 systemd-logind[1938]: Removed session 8.
Mar 17 17:36:54.210811 ntpd[1932]: Listen normally on 7 cilium_host 192.168.0.112:123
Mar 17 17:36:54.210948 ntpd[1932]: Listen normally on 8 cilium_net [fe80::c4b8:cdff:fe54:7055%4]:123
Mar 17 17:36:54.211056 ntpd[1932]: Listen normally on 9 cilium_host [fe80::54b6:55ff:feed:e024%5]:123
Mar 17 17:36:54.211129 ntpd[1932]: Listen normally on 10 cilium_vxlan [fe80::a872:90ff:fe46:f71f%6]:123
Mar 17 17:36:54.211201 ntpd[1932]: Listen normally on 11 lxc_health [fe80::ace1:ebff:fea3:d531%8]:123
Mar 17 17:36:54.211269 ntpd[1932]: Listen normally on 12 lxcea6ac266709c [fe80::8067:cdff:fe65:bf07%10]:123
Mar 17 17:36:54.211356 ntpd[1932]: Listen normally on 13 lxce25a11287ae4 [fe80::9073:3aff:fe51:b3dc%12]:123
Mar 17 17:36:55.704316 kubelet[3281]: I0317 17:36:55.704245 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:36:57.946541 systemd[1]: Started sshd@8-172.31.28.49:22-147.75.109.163:57788.service - OpenSSH per-connection server daemon (147.75.109.163:57788).
Mar 17 17:36:58.168043 sshd[4759]: Accepted publickey for core from 147.75.109.163 port 57788 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:36:58.172543 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:36:58.189157 systemd-logind[1938]: New session 9 of user core.
Mar 17 17:36:58.196311 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:36:58.228573 containerd[1958]: time="2025-03-17T17:36:58.225782900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:36:58.228573 containerd[1958]: time="2025-03-17T17:36:58.225886952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:36:58.228573 containerd[1958]: time="2025-03-17T17:36:58.225925508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:58.228573 containerd[1958]: time="2025-03-17T17:36:58.227888960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:58.253474 containerd[1958]: time="2025-03-17T17:36:58.251749376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:36:58.253474 containerd[1958]: time="2025-03-17T17:36:58.251853416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:36:58.253474 containerd[1958]: time="2025-03-17T17:36:58.251916632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:58.253474 containerd[1958]: time="2025-03-17T17:36:58.252112352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:36:58.304564 systemd[1]: run-containerd-runc-k8s.io-9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2-runc.Rj56f6.mount: Deactivated successfully.
Mar 17 17:36:58.328307 systemd[1]: Started cri-containerd-9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2.scope - libcontainer container 9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2.
Mar 17 17:36:58.366287 systemd[1]: Started cri-containerd-15c4130321f64bf7f640368121e9e491ed9dd90345e528371c440e37a7ebcde0.scope - libcontainer container 15c4130321f64bf7f640368121e9e491ed9dd90345e528371c440e37a7ebcde0.
Mar 17 17:36:58.501173 containerd[1958]: time="2025-03-17T17:36:58.500993889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5vklt,Uid:615fc2d4-e595-412a-a46b-9206b243e316,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2\""
Mar 17 17:36:58.518717 containerd[1958]: time="2025-03-17T17:36:58.518629449Z" level=info msg="CreateContainer within sandbox \"9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:36:58.574126 containerd[1958]: time="2025-03-17T17:36:58.569032546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mwwxn,Uid:792c685d-e9d3-445f-880d-e0fcc8e58c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"15c4130321f64bf7f640368121e9e491ed9dd90345e528371c440e37a7ebcde0\""
Mar 17 17:36:58.585024 containerd[1958]: time="2025-03-17T17:36:58.584230534Z" level=info msg="CreateContainer within sandbox \"15c4130321f64bf7f640368121e9e491ed9dd90345e528371c440e37a7ebcde0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 17:36:58.598023 sshd[4775]: Connection closed by 147.75.109.163 port 57788
Mar 17 17:36:58.599110 sshd-session[4759]: pam_unix(sshd:session): session closed for user core
Mar 17 17:36:58.610205 containerd[1958]: time="2025-03-17T17:36:58.610028338Z" level=info msg="CreateContainer within sandbox \"9ada53e880d120c238c68c8db63fa86b9b6de34067dabe1bb33ba7259bdd93c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fb34ddec34ed3fc3caa8ffb5891f667f86ae022a435cbcd136ee52dfc5fd2d6\""
Mar 17 17:36:58.613764 containerd[1958]: time="2025-03-17T17:36:58.613634386Z" level=info msg="StartContainer for \"1fb34ddec34ed3fc3caa8ffb5891f667f86ae022a435cbcd136ee52dfc5fd2d6\""
Mar 17 17:36:58.613779 systemd[1]: sshd@8-172.31.28.49:22-147.75.109.163:57788.service: Deactivated successfully.
Mar 17 17:36:58.622447 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:36:58.627429 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:36:58.633922 systemd-logind[1938]: Removed session 9.
Mar 17 17:36:58.644137 containerd[1958]: time="2025-03-17T17:36:58.641637922Z" level=info msg="CreateContainer within sandbox \"15c4130321f64bf7f640368121e9e491ed9dd90345e528371c440e37a7ebcde0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c04bd9262c6cca0da414820f7e9036b4c160e633b22a370ca372501b1e83bf3e\""
Mar 17 17:36:58.646015 containerd[1958]: time="2025-03-17T17:36:58.644662294Z" level=info msg="StartContainer for \"c04bd9262c6cca0da414820f7e9036b4c160e633b22a370ca372501b1e83bf3e\""
Mar 17 17:36:58.724673 systemd[1]: Started cri-containerd-1fb34ddec34ed3fc3caa8ffb5891f667f86ae022a435cbcd136ee52dfc5fd2d6.scope - libcontainer container 1fb34ddec34ed3fc3caa8ffb5891f667f86ae022a435cbcd136ee52dfc5fd2d6.
Mar 17 17:36:58.745432 systemd[1]: Started cri-containerd-c04bd9262c6cca0da414820f7e9036b4c160e633b22a370ca372501b1e83bf3e.scope - libcontainer container c04bd9262c6cca0da414820f7e9036b4c160e633b22a370ca372501b1e83bf3e.
Mar 17 17:36:58.846735 containerd[1958]: time="2025-03-17T17:36:58.846560447Z" level=info msg="StartContainer for \"c04bd9262c6cca0da414820f7e9036b4c160e633b22a370ca372501b1e83bf3e\" returns successfully"
Mar 17 17:36:58.848769 containerd[1958]: time="2025-03-17T17:36:58.848696651Z" level=info msg="StartContainer for \"1fb34ddec34ed3fc3caa8ffb5891f667f86ae022a435cbcd136ee52dfc5fd2d6\" returns successfully"
Mar 17 17:36:59.341153 kubelet[3281]: I0317 17:36:59.341041 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5vklt" podStartSLOduration=31.34099999 podStartE2EDuration="31.34099999s" podCreationTimestamp="2025-03-17 17:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:59.339508342 +0000 UTC m=+45.663398472" watchObservedRunningTime="2025-03-17 17:36:59.34099999 +0000 UTC m=+45.664890096"
Mar 17 17:36:59.366956 kubelet[3281]: I0317 17:36:59.366849 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mwwxn" podStartSLOduration=31.366826498000002 podStartE2EDuration="31.366826498s" podCreationTimestamp="2025-03-17 17:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:36:59.36483547 +0000 UTC m=+45.688725588" watchObservedRunningTime="2025-03-17 17:36:59.366826498 +0000 UTC m=+45.690716616"
Mar 17 17:37:03.645543 systemd[1]: Started sshd@9-172.31.28.49:22-147.75.109.163:57798.service - OpenSSH per-connection server daemon (147.75.109.163:57798).
Mar 17 17:37:03.839500 sshd[4943]: Accepted publickey for core from 147.75.109.163 port 57798 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:03.842186 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:03.851369 systemd-logind[1938]: New session 10 of user core.
Mar 17 17:37:03.858267 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 17 17:37:04.101711 sshd[4945]: Connection closed by 147.75.109.163 port 57798
Mar 17 17:37:04.102629 sshd-session[4943]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:04.109857 systemd[1]: sshd@9-172.31.28.49:22-147.75.109.163:57798.service: Deactivated successfully.
Mar 17 17:37:04.114246 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 17:37:04.116392 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit.
Mar 17 17:37:04.118365 systemd-logind[1938]: Removed session 10.
Mar 17 17:37:09.144491 systemd[1]: Started sshd@10-172.31.28.49:22-147.75.109.163:43366.service - OpenSSH per-connection server daemon (147.75.109.163:43366).
Mar 17 17:37:09.333753 sshd[4962]: Accepted publickey for core from 147.75.109.163 port 43366 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:09.336283 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:09.346210 systemd-logind[1938]: New session 11 of user core.
Mar 17 17:37:09.355396 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:37:09.597013 sshd[4964]: Connection closed by 147.75.109.163 port 43366
Mar 17 17:37:09.597861 sshd-session[4962]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:09.602937 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:37:09.604429 systemd[1]: sshd@10-172.31.28.49:22-147.75.109.163:43366.service: Deactivated successfully.
Mar 17 17:37:09.608370 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:37:09.612554 systemd-logind[1938]: Removed session 11.
Mar 17 17:37:09.634560 systemd[1]: Started sshd@11-172.31.28.49:22-147.75.109.163:43370.service - OpenSSH per-connection server daemon (147.75.109.163:43370).
Mar 17 17:37:09.824307 sshd[4977]: Accepted publickey for core from 147.75.109.163 port 43370 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:09.827043 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:09.836028 systemd-logind[1938]: New session 12 of user core.
Mar 17 17:37:09.841274 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:37:10.167680 sshd[4979]: Connection closed by 147.75.109.163 port 43370
Mar 17 17:37:10.168914 sshd-session[4977]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:10.180268 systemd[1]: sshd@11-172.31.28.49:22-147.75.109.163:43370.service: Deactivated successfully.
Mar 17 17:37:10.187806 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:37:10.189858 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:37:10.216887 systemd[1]: Started sshd@12-172.31.28.49:22-147.75.109.163:43386.service - OpenSSH per-connection server daemon (147.75.109.163:43386).
Mar 17 17:37:10.218927 systemd-logind[1938]: Removed session 12.
Mar 17 17:37:10.412751 sshd[4988]: Accepted publickey for core from 147.75.109.163 port 43386 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:10.415275 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:10.426271 systemd-logind[1938]: New session 13 of user core.
Mar 17 17:37:10.433282 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:37:10.680798 sshd[4991]: Connection closed by 147.75.109.163 port 43386
Mar 17 17:37:10.681696 sshd-session[4988]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:10.688836 systemd[1]: sshd@12-172.31.28.49:22-147.75.109.163:43386.service: Deactivated successfully.
Mar 17 17:37:10.694142 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:37:10.696523 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:37:10.699966 systemd-logind[1938]: Removed session 13.
Mar 17 17:37:15.730494 systemd[1]: Started sshd@13-172.31.28.49:22-147.75.109.163:59844.service - OpenSSH per-connection server daemon (147.75.109.163:59844).
Mar 17 17:37:15.920347 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 59844 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:15.922783 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:15.930600 systemd-logind[1938]: New session 14 of user core.
Mar 17 17:37:15.936261 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:37:16.184457 sshd[5007]: Connection closed by 147.75.109.163 port 59844
Mar 17 17:37:16.185362 sshd-session[5005]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:16.192504 systemd[1]: sshd@13-172.31.28.49:22-147.75.109.163:59844.service: Deactivated successfully.
Mar 17 17:37:16.196759 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:37:16.198691 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:37:16.201113 systemd-logind[1938]: Removed session 14.
Mar 17 17:37:21.230550 systemd[1]: Started sshd@14-172.31.28.49:22-147.75.109.163:59846.service - OpenSSH per-connection server daemon (147.75.109.163:59846).
Mar 17 17:37:21.417963 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 59846 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:21.420568 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:21.428589 systemd-logind[1938]: New session 15 of user core.
Mar 17 17:37:21.435283 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:37:21.677957 sshd[5021]: Connection closed by 147.75.109.163 port 59846
Mar 17 17:37:21.678832 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:21.685352 systemd[1]: sshd@14-172.31.28.49:22-147.75.109.163:59846.service: Deactivated successfully.
Mar 17 17:37:21.691530 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:37:21.693574 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:37:21.695757 systemd-logind[1938]: Removed session 15.
Mar 17 17:37:26.719576 systemd[1]: Started sshd@15-172.31.28.49:22-147.75.109.163:42930.service - OpenSSH per-connection server daemon (147.75.109.163:42930).
Mar 17 17:37:26.910787 sshd[5033]: Accepted publickey for core from 147.75.109.163 port 42930 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:26.913396 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:26.924309 systemd-logind[1938]: New session 16 of user core.
Mar 17 17:37:26.933281 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:37:27.177928 sshd[5035]: Connection closed by 147.75.109.163 port 42930
Mar 17 17:37:27.178902 sshd-session[5033]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:27.185273 systemd[1]: sshd@15-172.31.28.49:22-147.75.109.163:42930.service: Deactivated successfully.
Mar 17 17:37:27.188961 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:37:27.190726 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:37:27.193338 systemd-logind[1938]: Removed session 16.
Mar 17 17:37:32.221587 systemd[1]: Started sshd@16-172.31.28.49:22-147.75.109.163:42934.service - OpenSSH per-connection server daemon (147.75.109.163:42934).
Mar 17 17:37:32.408197 sshd[5053]: Accepted publickey for core from 147.75.109.163 port 42934 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:32.410738 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:32.421312 systemd-logind[1938]: New session 17 of user core.
Mar 17 17:37:32.428284 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:37:32.675006 sshd[5055]: Connection closed by 147.75.109.163 port 42934
Mar 17 17:37:32.674162 sshd-session[5053]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:32.681508 systemd[1]: sshd@16-172.31.28.49:22-147.75.109.163:42934.service: Deactivated successfully.
Mar 17 17:37:32.685778 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:37:32.687222 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:37:32.689184 systemd-logind[1938]: Removed session 17.
Mar 17 17:37:32.717520 systemd[1]: Started sshd@17-172.31.28.49:22-147.75.109.163:42946.service - OpenSSH per-connection server daemon (147.75.109.163:42946).
Mar 17 17:37:32.903833 sshd[5067]: Accepted publickey for core from 147.75.109.163 port 42946 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:32.906372 sshd-session[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:32.917290 systemd-logind[1938]: New session 18 of user core.
Mar 17 17:37:32.924228 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:37:33.222529 sshd[5069]: Connection closed by 147.75.109.163 port 42946
Mar 17 17:37:33.223663 sshd-session[5067]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:33.229037 systemd[1]: sshd@17-172.31.28.49:22-147.75.109.163:42946.service: Deactivated successfully.
Mar 17 17:37:33.233855 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:37:33.237421 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:37:33.239942 systemd-logind[1938]: Removed session 18.
Mar 17 17:37:33.266491 systemd[1]: Started sshd@18-172.31.28.49:22-147.75.109.163:42954.service - OpenSSH per-connection server daemon (147.75.109.163:42954).
Mar 17 17:37:33.449779 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 42954 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:33.452893 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:33.461670 systemd-logind[1938]: New session 19 of user core.
Mar 17 17:37:33.467253 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:37:36.145555 sshd[5081]: Connection closed by 147.75.109.163 port 42954
Mar 17 17:37:36.149431 sshd-session[5079]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:36.158068 systemd[1]: sshd@18-172.31.28.49:22-147.75.109.163:42954.service: Deactivated successfully.
Mar 17 17:37:36.167567 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:37:36.168719 systemd[1]: session-19.scope: Consumed 866ms CPU time, 66.7M memory peak.
Mar 17 17:37:36.172410 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:37:36.203543 systemd[1]: Started sshd@19-172.31.28.49:22-147.75.109.163:51436.service - OpenSSH per-connection server daemon (147.75.109.163:51436).
Mar 17 17:37:36.205363 systemd-logind[1938]: Removed session 19.
Mar 17 17:37:36.391893 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 51436 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:36.394565 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:36.403268 systemd-logind[1938]: New session 20 of user core.
Mar 17 17:37:36.408265 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:37:36.887944 sshd[5100]: Connection closed by 147.75.109.163 port 51436
Mar 17 17:37:36.888820 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:36.899555 systemd[1]: sshd@19-172.31.28.49:22-147.75.109.163:51436.service: Deactivated successfully.
Mar 17 17:37:36.905540 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:37:36.910371 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:37:36.936578 systemd[1]: Started sshd@20-172.31.28.49:22-147.75.109.163:51438.service - OpenSSH per-connection server daemon (147.75.109.163:51438).
Mar 17 17:37:36.939433 systemd-logind[1938]: Removed session 20.
Mar 17 17:37:37.119661 sshd[5110]: Accepted publickey for core from 147.75.109.163 port 51438 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:37.122754 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:37.132429 systemd-logind[1938]: New session 21 of user core.
Mar 17 17:37:37.137249 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:37:37.373484 sshd[5113]: Connection closed by 147.75.109.163 port 51438
Mar 17 17:37:37.374363 sshd-session[5110]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:37.380612 systemd[1]: sshd@20-172.31.28.49:22-147.75.109.163:51438.service: Deactivated successfully.
Mar 17 17:37:37.385074 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:37:37.387253 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:37:37.389266 systemd-logind[1938]: Removed session 21.
Mar 17 17:37:42.422542 systemd[1]: Started sshd@21-172.31.28.49:22-147.75.109.163:51444.service - OpenSSH per-connection server daemon (147.75.109.163:51444).
Mar 17 17:37:42.623109 sshd[5126]: Accepted publickey for core from 147.75.109.163 port 51444 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:42.625691 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:42.636693 systemd-logind[1938]: New session 22 of user core.
Mar 17 17:37:42.642288 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:37:42.896020 sshd[5131]: Connection closed by 147.75.109.163 port 51444
Mar 17 17:37:42.897115 sshd-session[5126]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:42.904048 systemd[1]: sshd@21-172.31.28.49:22-147.75.109.163:51444.service: Deactivated successfully.
Mar 17 17:37:42.910488 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:37:42.912437 systemd-logind[1938]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:37:42.914863 systemd-logind[1938]: Removed session 22.
Mar 17 17:37:47.946479 systemd[1]: Started sshd@22-172.31.28.49:22-147.75.109.163:44944.service - OpenSSH per-connection server daemon (147.75.109.163:44944).
Mar 17 17:37:48.146073 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 44944 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:48.149938 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:48.160208 systemd-logind[1938]: New session 23 of user core.
Mar 17 17:37:48.167271 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:37:48.414141 sshd[5144]: Connection closed by 147.75.109.163 port 44944
Mar 17 17:37:48.415172 sshd-session[5142]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:48.423473 systemd[1]: sshd@22-172.31.28.49:22-147.75.109.163:44944.service: Deactivated successfully.
Mar 17 17:37:48.429475 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:37:48.432462 systemd-logind[1938]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:37:48.435051 systemd-logind[1938]: Removed session 23.
Mar 17 17:37:53.458087 systemd[1]: Started sshd@23-172.31.28.49:22-147.75.109.163:44950.service - OpenSSH per-connection server daemon (147.75.109.163:44950).
Mar 17 17:37:53.654909 sshd[5156]: Accepted publickey for core from 147.75.109.163 port 44950 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4
Mar 17 17:37:53.658138 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:37:53.666730 systemd-logind[1938]: New session 24 of user core.
Mar 17 17:37:53.672248 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:37:53.931701 sshd[5158]: Connection closed by 147.75.109.163 port 44950
Mar 17 17:37:53.934053 sshd-session[5156]: pam_unix(sshd:session): session closed for user core
Mar 17 17:37:53.943903 systemd[1]: sshd@23-172.31.28.49:22-147.75.109.163:44950.service: Deactivated successfully.
Mar 17 17:37:53.948776 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:37:53.950572 systemd-logind[1938]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:37:53.952713 systemd-logind[1938]: Removed session 24.
Mar 17 17:37:58.974502 systemd[1]: Started sshd@24-172.31.28.49:22-147.75.109.163:36902.service - OpenSSH per-connection server daemon (147.75.109.163:36902).
Mar 17 17:37:59.163448 sshd[5171]: Accepted publickey for core from 147.75.109.163 port 36902 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:59.166162 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:59.174698 systemd-logind[1938]: New session 25 of user core. Mar 17 17:37:59.182241 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:37:59.423837 sshd[5175]: Connection closed by 147.75.109.163 port 36902 Mar 17 17:37:59.424771 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Mar 17 17:37:59.431562 systemd[1]: sshd@24-172.31.28.49:22-147.75.109.163:36902.service: Deactivated successfully. Mar 17 17:37:59.437768 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:37:59.440472 systemd-logind[1938]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:37:59.443037 systemd-logind[1938]: Removed session 25. Mar 17 17:37:59.468371 systemd[1]: Started sshd@25-172.31.28.49:22-147.75.109.163:36916.service - OpenSSH per-connection server daemon (147.75.109.163:36916). Mar 17 17:37:59.654877 sshd[5186]: Accepted publickey for core from 147.75.109.163 port 36916 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:37:59.657486 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:37:59.666093 systemd-logind[1938]: New session 26 of user core. Mar 17 17:37:59.673312 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 17 17:38:02.307837 containerd[1958]: time="2025-03-17T17:38:02.307409014Z" level=info msg="StopContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" with timeout 30 (s)" Mar 17 17:38:02.309529 containerd[1958]: time="2025-03-17T17:38:02.308718730Z" level=info msg="Stop container \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" with signal terminated" Mar 17 17:38:02.337202 systemd[1]: cri-containerd-973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c.scope: Deactivated successfully. Mar 17 17:38:02.359195 containerd[1958]: time="2025-03-17T17:38:02.359125991Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:38:02.374852 containerd[1958]: time="2025-03-17T17:38:02.374797955Z" level=info msg="StopContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" with timeout 2 (s)" Mar 17 17:38:02.375411 containerd[1958]: time="2025-03-17T17:38:02.375291803Z" level=info msg="Stop container \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" with signal terminated" Mar 17 17:38:02.392813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c-rootfs.mount: Deactivated successfully. 
Mar 17 17:38:02.399443 systemd-networkd[1870]: lxc_health: Link DOWN Mar 17 17:38:02.399462 systemd-networkd[1870]: lxc_health: Lost carrier Mar 17 17:38:02.422676 containerd[1958]: time="2025-03-17T17:38:02.422552471Z" level=info msg="shim disconnected" id=973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c namespace=k8s.io Mar 17 17:38:02.422676 containerd[1958]: time="2025-03-17T17:38:02.422642183Z" level=warning msg="cleaning up after shim disconnected" id=973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c namespace=k8s.io Mar 17 17:38:02.422676 containerd[1958]: time="2025-03-17T17:38:02.422664779Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:02.431306 systemd[1]: cri-containerd-e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596.scope: Deactivated successfully. Mar 17 17:38:02.432429 systemd[1]: cri-containerd-e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596.scope: Consumed 14.573s CPU time, 125.1M memory peak, 144K read from disk, 12.9M written to disk. Mar 17 17:38:02.469292 containerd[1958]: time="2025-03-17T17:38:02.469235663Z" level=info msg="StopContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" returns successfully" Mar 17 17:38:02.470327 containerd[1958]: time="2025-03-17T17:38:02.470278043Z" level=info msg="StopPodSandbox for \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\"" Mar 17 17:38:02.470478 containerd[1958]: time="2025-03-17T17:38:02.470341715Z" level=info msg="Container to stop \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.480754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5-shm.mount: Deactivated successfully. 
Mar 17 17:38:02.492050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596-rootfs.mount: Deactivated successfully. Mar 17 17:38:02.496804 containerd[1958]: time="2025-03-17T17:38:02.496718975Z" level=info msg="shim disconnected" id=e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596 namespace=k8s.io Mar 17 17:38:02.496804 containerd[1958]: time="2025-03-17T17:38:02.496796531Z" level=warning msg="cleaning up after shim disconnected" id=e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596 namespace=k8s.io Mar 17 17:38:02.497081 containerd[1958]: time="2025-03-17T17:38:02.496818083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:02.504338 systemd[1]: cri-containerd-4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5.scope: Deactivated successfully. Mar 17 17:38:02.539200 containerd[1958]: time="2025-03-17T17:38:02.539126255Z" level=info msg="StopContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" returns successfully" Mar 17 17:38:02.539956 containerd[1958]: time="2025-03-17T17:38:02.539889419Z" level=info msg="StopPodSandbox for \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\"" Mar 17 17:38:02.540130 containerd[1958]: time="2025-03-17T17:38:02.540050123Z" level=info msg="Container to stop \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.540130 containerd[1958]: time="2025-03-17T17:38:02.540081011Z" level=info msg="Container to stop \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.540130 containerd[1958]: time="2025-03-17T17:38:02.540116135Z" level=info msg="Container to stop \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.540296 containerd[1958]: time="2025-03-17T17:38:02.540140879Z" level=info msg="Container to stop \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.540296 containerd[1958]: time="2025-03-17T17:38:02.540246683Z" level=info msg="Container to stop \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:38:02.547511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a-shm.mount: Deactivated successfully. Mar 17 17:38:02.569938 systemd[1]: cri-containerd-9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a.scope: Deactivated successfully. Mar 17 17:38:02.589807 containerd[1958]: time="2025-03-17T17:38:02.589731792Z" level=info msg="shim disconnected" id=4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5 namespace=k8s.io Mar 17 17:38:02.590408 containerd[1958]: time="2025-03-17T17:38:02.590074344Z" level=warning msg="cleaning up after shim disconnected" id=4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5 namespace=k8s.io Mar 17 17:38:02.590408 containerd[1958]: time="2025-03-17T17:38:02.590101404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:02.629528 containerd[1958]: time="2025-03-17T17:38:02.629328348Z" level=info msg="TearDown network for sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" successfully" Mar 17 17:38:02.629528 containerd[1958]: time="2025-03-17T17:38:02.629373720Z" level=info msg="StopPodSandbox for \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" returns successfully" Mar 17 17:38:02.631340 containerd[1958]: time="2025-03-17T17:38:02.629465856Z" level=info msg="shim disconnected" 
id=9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a namespace=k8s.io Mar 17 17:38:02.631340 containerd[1958]: time="2025-03-17T17:38:02.629628972Z" level=warning msg="cleaning up after shim disconnected" id=9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a namespace=k8s.io Mar 17 17:38:02.631340 containerd[1958]: time="2025-03-17T17:38:02.629648376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:02.658476 containerd[1958]: time="2025-03-17T17:38:02.658311900Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:38:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:38:02.662714 containerd[1958]: time="2025-03-17T17:38:02.662647416Z" level=info msg="TearDown network for sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" successfully" Mar 17 17:38:02.662714 containerd[1958]: time="2025-03-17T17:38:02.662701428Z" level=info msg="StopPodSandbox for \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" returns successfully" Mar 17 17:38:02.700402 kubelet[3281]: I0317 17:38:02.700321 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-cilium-config-path\") pod \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\" (UID: \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\") " Mar 17 17:38:02.700402 kubelet[3281]: I0317 17:38:02.700405 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4bh5\" (UniqueName: \"kubernetes.io/projected/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-kube-api-access-z4bh5\") pod \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\" (UID: \"2db1b07e-a522-45f2-97ac-0acdeb5d9d09\") " Mar 17 17:38:02.707343 kubelet[3281]: I0317 17:38:02.707261 3281 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2db1b07e-a522-45f2-97ac-0acdeb5d9d09" (UID: "2db1b07e-a522-45f2-97ac-0acdeb5d9d09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:38:02.711865 kubelet[3281]: I0317 17:38:02.711809 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-kube-api-access-z4bh5" (OuterVolumeSpecName: "kube-api-access-z4bh5") pod "2db1b07e-a522-45f2-97ac-0acdeb5d9d09" (UID: "2db1b07e-a522-45f2-97ac-0acdeb5d9d09"). InnerVolumeSpecName "kube-api-access-z4bh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:38:02.801452 kubelet[3281]: I0317 17:38:02.801383 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-xtables-lock\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.801626 kubelet[3281]: I0317 17:38:02.801459 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14597fff-e0ca-423a-a062-5519920f1786-cilium-config-path\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.801626 kubelet[3281]: I0317 17:38:02.801497 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-net\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.801626 kubelet[3281]: I0317 17:38:02.801540 3281 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s5p95\" (UniqueName: \"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-kube-api-access-s5p95\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.801626 kubelet[3281]: I0317 17:38:02.801573 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-etc-cni-netd\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.801626 kubelet[3281]: I0317 17:38:02.801608 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cni-path\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801641 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-run\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801672 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-bpf-maps\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801711 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14597fff-e0ca-423a-a062-5519920f1786-clustermesh-secrets\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " 
Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801744 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-hostproc\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801775 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-kernel\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.802120 kubelet[3281]: I0317 17:38:02.801812 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-hubble-tls\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.801845 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-lib-modules\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.801878 3281 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-cgroup\") pod \"14597fff-e0ca-423a-a062-5519920f1786\" (UID: \"14597fff-e0ca-423a-a062-5519920f1786\") " Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.801939 3281 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-z4bh5\" (UniqueName: \"kubernetes.io/projected/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-kube-api-access-z4bh5\") on node 
\"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.801965 3281 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2db1b07e-a522-45f2-97ac-0acdeb5d9d09-cilium-config-path\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.802060 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.803344 kubelet[3281]: I0317 17:38:02.802177 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.803647 kubelet[3281]: I0317 17:38:02.802347 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.803647 kubelet[3281]: I0317 17:38:02.802394 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.804944 kubelet[3281]: I0317 17:38:02.804317 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.804944 kubelet[3281]: I0317 17:38:02.804399 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cni-path" (OuterVolumeSpecName: "cni-path") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.804944 kubelet[3281]: I0317 17:38:02.804437 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.806243 kubelet[3281]: I0317 17:38:02.806199 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-hostproc" (OuterVolumeSpecName: "hostproc") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.806485 kubelet[3281]: I0317 17:38:02.806367 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.806629 kubelet[3281]: I0317 17:38:02.806602 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:38:02.815169 kubelet[3281]: I0317 17:38:02.814904 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-kube-api-access-s5p95" (OuterVolumeSpecName: "kube-api-access-s5p95") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "kube-api-access-s5p95". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:38:02.815836 kubelet[3281]: I0317 17:38:02.815770 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14597fff-e0ca-423a-a062-5519920f1786-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:38:02.817336 kubelet[3281]: I0317 17:38:02.817264 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14597fff-e0ca-423a-a062-5519920f1786-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:38:02.819450 kubelet[3281]: I0317 17:38:02.819386 3281 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "14597fff-e0ca-423a-a062-5519920f1786" (UID: "14597fff-e0ca-423a-a062-5519920f1786"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902229 3281 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-xtables-lock\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902276 3281 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14597fff-e0ca-423a-a062-5519920f1786-cilium-config-path\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902302 3281 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-net\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902322 3281 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s5p95\" (UniqueName: 
\"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-kube-api-access-s5p95\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902343 3281 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-etc-cni-netd\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902363 3281 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cni-path\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902381 3281 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-run\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903304 kubelet[3281]: I0317 17:38:02.902400 3281 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-bpf-maps\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902423 3281 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/14597fff-e0ca-423a-a062-5519920f1786-clustermesh-secrets\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902444 3281 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-hostproc\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902463 3281 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-host-proc-sys-kernel\") 
on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902481 3281 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/14597fff-e0ca-423a-a062-5519920f1786-hubble-tls\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902499 3281 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-lib-modules\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:02.903801 kubelet[3281]: I0317 17:38:02.902517 3281 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/14597fff-e0ca-423a-a062-5519920f1786-cilium-cgroup\") on node \"ip-172-31-28-49\" DevicePath \"\"" Mar 17 17:38:03.322143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5-rootfs.mount: Deactivated successfully. Mar 17 17:38:03.322327 systemd[1]: var-lib-kubelet-pods-2db1b07e\x2da522\x2d45f2\x2d97ac\x2d0acdeb5d9d09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4bh5.mount: Deactivated successfully. Mar 17 17:38:03.322475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a-rootfs.mount: Deactivated successfully. Mar 17 17:38:03.322609 systemd[1]: var-lib-kubelet-pods-14597fff\x2de0ca\x2d423a\x2da062\x2d5519920f1786-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds5p95.mount: Deactivated successfully. Mar 17 17:38:03.322745 systemd[1]: var-lib-kubelet-pods-14597fff\x2de0ca\x2d423a\x2da062\x2d5519920f1786-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 17 17:38:03.322880 systemd[1]: var-lib-kubelet-pods-14597fff\x2de0ca\x2d423a\x2da062\x2d5519920f1786-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:38:03.507003 kubelet[3281]: I0317 17:38:03.506160 3281 scope.go:117] "RemoveContainer" containerID="973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c" Mar 17 17:38:03.511023 containerd[1958]: time="2025-03-17T17:38:03.510515088Z" level=info msg="RemoveContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\"" Mar 17 17:38:03.519276 systemd[1]: Removed slice kubepods-besteffort-pod2db1b07e_a522_45f2_97ac_0acdeb5d9d09.slice - libcontainer container kubepods-besteffort-pod2db1b07e_a522_45f2_97ac_0acdeb5d9d09.slice. Mar 17 17:38:03.528154 containerd[1958]: time="2025-03-17T17:38:03.528082884Z" level=info msg="RemoveContainer for \"973e23ec70f3fcfc0ed68ff077388216871fa06f4c0e29360c037c202bca657c\" returns successfully" Mar 17 17:38:03.533513 kubelet[3281]: I0317 17:38:03.533450 3281 scope.go:117] "RemoveContainer" containerID="e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596" Mar 17 17:38:03.537073 containerd[1958]: time="2025-03-17T17:38:03.536888496Z" level=info msg="RemoveContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\"" Mar 17 17:38:03.537429 systemd[1]: Removed slice kubepods-burstable-pod14597fff_e0ca_423a_a062_5519920f1786.slice - libcontainer container kubepods-burstable-pod14597fff_e0ca_423a_a062_5519920f1786.slice. Mar 17 17:38:03.537915 systemd[1]: kubepods-burstable-pod14597fff_e0ca_423a_a062_5519920f1786.slice: Consumed 14.727s CPU time, 125.6M memory peak, 144K read from disk, 12.9M written to disk. 
Mar 17 17:38:03.545351 containerd[1958]: time="2025-03-17T17:38:03.544916160Z" level=info msg="RemoveContainer for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" returns successfully" Mar 17 17:38:03.546040 kubelet[3281]: I0317 17:38:03.545733 3281 scope.go:117] "RemoveContainer" containerID="6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78" Mar 17 17:38:03.549723 containerd[1958]: time="2025-03-17T17:38:03.549595908Z" level=info msg="RemoveContainer for \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\"" Mar 17 17:38:03.556272 containerd[1958]: time="2025-03-17T17:38:03.556176912Z" level=info msg="RemoveContainer for \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\" returns successfully" Mar 17 17:38:03.556892 kubelet[3281]: I0317 17:38:03.556632 3281 scope.go:117] "RemoveContainer" containerID="796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906" Mar 17 17:38:03.559230 containerd[1958]: time="2025-03-17T17:38:03.559163256Z" level=info msg="RemoveContainer for \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\"" Mar 17 17:38:03.567497 containerd[1958]: time="2025-03-17T17:38:03.567424945Z" level=info msg="RemoveContainer for \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\" returns successfully" Mar 17 17:38:03.570733 kubelet[3281]: I0317 17:38:03.570571 3281 scope.go:117] "RemoveContainer" containerID="3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490" Mar 17 17:38:03.574964 containerd[1958]: time="2025-03-17T17:38:03.574576537Z" level=info msg="RemoveContainer for \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\"" Mar 17 17:38:03.582953 containerd[1958]: time="2025-03-17T17:38:03.582823549Z" level=info msg="RemoveContainer for \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\" returns successfully" Mar 17 17:38:03.586914 kubelet[3281]: I0317 17:38:03.586824 3281 scope.go:117] 
"RemoveContainer" containerID="9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882" Mar 17 17:38:03.591027 containerd[1958]: time="2025-03-17T17:38:03.590245249Z" level=info msg="RemoveContainer for \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\"" Mar 17 17:38:03.598415 containerd[1958]: time="2025-03-17T17:38:03.598343089Z" level=info msg="RemoveContainer for \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\" returns successfully" Mar 17 17:38:03.599155 kubelet[3281]: I0317 17:38:03.598728 3281 scope.go:117] "RemoveContainer" containerID="e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596" Mar 17 17:38:03.600101 containerd[1958]: time="2025-03-17T17:38:03.599426437Z" level=error msg="ContainerStatus for \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\": not found" Mar 17 17:38:03.600226 kubelet[3281]: E0317 17:38:03.599742 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\": not found" containerID="e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596" Mar 17 17:38:03.600226 kubelet[3281]: I0317 17:38:03.599792 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596"} err="failed to get container status \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\": rpc error: code = NotFound desc = an error occurred when try to find container \"e37473aa06ba6ff5672aba0a10ea52c42d4ca5da16a1713594cf143463386596\": not found" Mar 17 17:38:03.600226 kubelet[3281]: I0317 17:38:03.599921 3281 scope.go:117] "RemoveContainer" 
containerID="6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78" Mar 17 17:38:03.600422 containerd[1958]: time="2025-03-17T17:38:03.600319249Z" level=error msg="ContainerStatus for \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\": not found" Mar 17 17:38:03.601663 kubelet[3281]: E0317 17:38:03.600668 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\": not found" containerID="6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78" Mar 17 17:38:03.601663 kubelet[3281]: I0317 17:38:03.600781 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78"} err="failed to get container status \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c64b879e26adff9253346d4a65e09ed2950a25408699955b09b8a3e043e1e78\": not found" Mar 17 17:38:03.601663 kubelet[3281]: I0317 17:38:03.600842 3281 scope.go:117] "RemoveContainer" containerID="796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906" Mar 17 17:38:03.602853 containerd[1958]: time="2025-03-17T17:38:03.601444753Z" level=error msg="ContainerStatus for \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\": not found" Mar 17 17:38:03.602853 containerd[1958]: time="2025-03-17T17:38:03.602261257Z" level=error msg="ContainerStatus for 
\"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\": not found" Mar 17 17:38:03.603307 kubelet[3281]: E0317 17:38:03.601712 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\": not found" containerID="796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906" Mar 17 17:38:03.603307 kubelet[3281]: I0317 17:38:03.601757 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906"} err="failed to get container status \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\": rpc error: code = NotFound desc = an error occurred when try to find container \"796378bcb921d3697b76ce7bae31fb6403757a47d869e3602e8068532f313906\": not found" Mar 17 17:38:03.603307 kubelet[3281]: I0317 17:38:03.601792 3281 scope.go:117] "RemoveContainer" containerID="3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490" Mar 17 17:38:03.603307 kubelet[3281]: E0317 17:38:03.602924 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\": not found" containerID="3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490" Mar 17 17:38:03.603307 kubelet[3281]: I0317 17:38:03.603002 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490"} err="failed to get container status 
\"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f58b87cf10524ab5b3633382ae86f6c80105e1767e782a8ca8423786f498490\": not found" Mar 17 17:38:03.603307 kubelet[3281]: I0317 17:38:03.603049 3281 scope.go:117] "RemoveContainer" containerID="9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882" Mar 17 17:38:03.603639 containerd[1958]: time="2025-03-17T17:38:03.603537601Z" level=error msg="ContainerStatus for \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\": not found" Mar 17 17:38:03.604190 kubelet[3281]: E0317 17:38:03.604142 3281 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\": not found" containerID="9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882" Mar 17 17:38:03.604272 kubelet[3281]: I0317 17:38:03.604197 3281 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882"} err="failed to get container status \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\": rpc error: code = NotFound desc = an error occurred when try to find container \"9723f222417192e3608d024bba5118557a47229ca5de14f7e2013cba0e745882\": not found" Mar 17 17:38:03.941244 kubelet[3281]: I0317 17:38:03.941111 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14597fff-e0ca-423a-a062-5519920f1786" path="/var/lib/kubelet/pods/14597fff-e0ca-423a-a062-5519920f1786/volumes" Mar 17 17:38:03.944182 kubelet[3281]: I0317 17:38:03.943660 3281 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="2db1b07e-a522-45f2-97ac-0acdeb5d9d09" path="/var/lib/kubelet/pods/2db1b07e-a522-45f2-97ac-0acdeb5d9d09/volumes" Mar 17 17:38:04.241622 sshd[5188]: Connection closed by 147.75.109.163 port 36916 Mar 17 17:38:04.242786 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:04.249013 systemd-logind[1938]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:38:04.250047 systemd[1]: sshd@25-172.31.28.49:22-147.75.109.163:36916.service: Deactivated successfully. Mar 17 17:38:04.256513 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:38:04.257287 systemd[1]: session-26.scope: Consumed 1.895s CPU time, 23.5M memory peak. Mar 17 17:38:04.262533 systemd-logind[1938]: Removed session 26. Mar 17 17:38:04.266683 kubelet[3281]: E0317 17:38:04.266618 3281 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:38:04.287490 systemd[1]: Started sshd@26-172.31.28.49:22-147.75.109.163:35336.service - OpenSSH per-connection server daemon (147.75.109.163:35336). Mar 17 17:38:04.471384 sshd[5348]: Accepted publickey for core from 147.75.109.163 port 35336 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:38:04.474084 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:04.482181 systemd-logind[1938]: New session 27 of user core. Mar 17 17:38:04.493332 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 17 17:38:05.211321 ntpd[1932]: Deleting interface #11 lxc_health, fe80::ace1:ebff:fea3:d531%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Mar 17 17:38:05.211872 ntpd[1932]: 17 Mar 17:38:05 ntpd[1932]: Deleting interface #11 lxc_health, fe80::ace1:ebff:fea3:d531%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs Mar 17 17:38:05.998068 sshd[5350]: Connection closed by 147.75.109.163 port 35336 Mar 17 17:38:05.999826 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:06.010955 systemd-logind[1938]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:38:06.012776 systemd[1]: sshd@26-172.31.28.49:22-147.75.109.163:35336.service: Deactivated successfully. Mar 17 17:38:06.021436 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:38:06.022218 systemd[1]: session-27.scope: Consumed 1.297s CPU time, 23.5M memory peak. Mar 17 17:38:06.042462 systemd-logind[1938]: Removed session 27. Mar 17 17:38:06.058664 systemd[1]: Started sshd@27-172.31.28.49:22-147.75.109.163:35338.service - OpenSSH per-connection server daemon (147.75.109.163:35338). 
Mar 17 17:38:06.122997 kubelet[3281]: I0317 17:38:06.122693 3281 topology_manager.go:215] "Topology Admit Handler" podUID="026bee81-7095-43b7-864e-345aef12b417" podNamespace="kube-system" podName="cilium-8ddm4" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123046 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="clean-cilium-state" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123203 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="cilium-agent" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123225 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="mount-cgroup" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123240 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="apply-sysctl-overwrites" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123282 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2db1b07e-a522-45f2-97ac-0acdeb5d9d09" containerName="cilium-operator" Mar 17 17:38:06.124058 kubelet[3281]: E0317 17:38:06.123301 3281 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="mount-bpf-fs" Mar 17 17:38:06.124058 kubelet[3281]: I0317 17:38:06.123405 3281 memory_manager.go:354] "RemoveStaleState removing state" podUID="14597fff-e0ca-423a-a062-5519920f1786" containerName="cilium-agent" Mar 17 17:38:06.124058 kubelet[3281]: I0317 17:38:06.123568 3281 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db1b07e-a522-45f2-97ac-0acdeb5d9d09" containerName="cilium-operator" Mar 17 17:38:06.144492 systemd[1]: Created slice kubepods-burstable-pod026bee81_7095_43b7_864e_345aef12b417.slice - libcontainer container 
kubepods-burstable-pod026bee81_7095_43b7_864e_345aef12b417.slice. Mar 17 17:38:06.224937 kubelet[3281]: I0317 17:38:06.224146 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/026bee81-7095-43b7-864e-345aef12b417-clustermesh-secrets\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.224937 kubelet[3281]: I0317 17:38:06.224218 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdtm\" (UniqueName: \"kubernetes.io/projected/026bee81-7095-43b7-864e-345aef12b417-kube-api-access-6tdtm\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.224937 kubelet[3281]: I0317 17:38:06.224260 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-cilium-run\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.224937 kubelet[3281]: I0317 17:38:06.224300 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-lib-modules\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.224937 kubelet[3281]: I0317 17:38:06.224336 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-host-proc-sys-kernel\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.224937 kubelet[3281]: 
I0317 17:38:06.224376 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-hostproc\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224412 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/026bee81-7095-43b7-864e-345aef12b417-cilium-ipsec-secrets\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224451 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-bpf-maps\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224484 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/026bee81-7095-43b7-864e-345aef12b417-hubble-tls\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224519 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-etc-cni-netd\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224560 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-xtables-lock\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225393 kubelet[3281]: I0317 17:38:06.224627 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-host-proc-sys-net\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225717 kubelet[3281]: I0317 17:38:06.224670 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-cni-path\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225717 kubelet[3281]: I0317 17:38:06.224704 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/026bee81-7095-43b7-864e-345aef12b417-cilium-config-path\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.225717 kubelet[3281]: I0317 17:38:06.224742 3281 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/026bee81-7095-43b7-864e-345aef12b417-cilium-cgroup\") pod \"cilium-8ddm4\" (UID: \"026bee81-7095-43b7-864e-345aef12b417\") " pod="kube-system/cilium-8ddm4" Mar 17 17:38:06.272767 sshd[5360]: Accepted publickey for core from 147.75.109.163 port 35338 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:38:06.274570 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:06.285492 
systemd-logind[1938]: New session 28 of user core. Mar 17 17:38:06.297809 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 17:38:06.431831 sshd[5363]: Connection closed by 147.75.109.163 port 35338 Mar 17 17:38:06.432681 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Mar 17 17:38:06.439267 systemd[1]: sshd@27-172.31.28.49:22-147.75.109.163:35338.service: Deactivated successfully. Mar 17 17:38:06.443637 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:38:06.445757 systemd-logind[1938]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:38:06.447731 systemd-logind[1938]: Removed session 28. Mar 17 17:38:06.462938 containerd[1958]: time="2025-03-17T17:38:06.462888015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8ddm4,Uid:026bee81-7095-43b7-864e-345aef12b417,Namespace:kube-system,Attempt:0,}" Mar 17 17:38:06.472573 systemd[1]: Started sshd@28-172.31.28.49:22-147.75.109.163:35350.service - OpenSSH per-connection server daemon (147.75.109.163:35350). Mar 17 17:38:06.518320 containerd[1958]: time="2025-03-17T17:38:06.518157867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:38:06.518320 containerd[1958]: time="2025-03-17T17:38:06.518266971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:38:06.519025 containerd[1958]: time="2025-03-17T17:38:06.518304279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:06.519153 containerd[1958]: time="2025-03-17T17:38:06.518500299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:38:06.562312 systemd[1]: Started cri-containerd-82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796.scope - libcontainer container 82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796. Mar 17 17:38:06.607329 containerd[1958]: time="2025-03-17T17:38:06.607125292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8ddm4,Uid:026bee81-7095-43b7-864e-345aef12b417,Namespace:kube-system,Attempt:0,} returns sandbox id \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\"" Mar 17 17:38:06.615484 containerd[1958]: time="2025-03-17T17:38:06.614999104Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:38:06.638739 containerd[1958]: time="2025-03-17T17:38:06.638660308Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4\"" Mar 17 17:38:06.639992 containerd[1958]: time="2025-03-17T17:38:06.639927172Z" level=info msg="StartContainer for \"b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4\"" Mar 17 17:38:06.678092 sshd[5374]: Accepted publickey for core from 147.75.109.163 port 35350 ssh2: RSA SHA256:ZojDIC/G58L0+jq9L9mXrF63bfJyKUKgfaEnlQehzO4 Mar 17 17:38:06.683816 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:38:06.693557 systemd[1]: Started cri-containerd-b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4.scope - libcontainer container b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4. Mar 17 17:38:06.704563 systemd-logind[1938]: New session 29 of user core. 
Mar 17 17:38:06.712270 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:38:06.752117 containerd[1958]: time="2025-03-17T17:38:06.752049064Z" level=info msg="StartContainer for \"b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4\" returns successfully" Mar 17 17:38:06.767652 systemd[1]: cri-containerd-b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4.scope: Deactivated successfully. Mar 17 17:38:06.802880 kubelet[3281]: I0317 17:38:06.802432 3281 setters.go:580] "Node became not ready" node="ip-172-31-28-49" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:38:06Z","lastTransitionTime":"2025-03-17T17:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 17:38:06.839506 containerd[1958]: time="2025-03-17T17:38:06.839148149Z" level=info msg="shim disconnected" id=b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4 namespace=k8s.io Mar 17 17:38:06.839719 containerd[1958]: time="2025-03-17T17:38:06.839532053Z" level=warning msg="cleaning up after shim disconnected" id=b4714028e2ee9d6b3bb73068c4b0b251c1907b25fcd53a788df2f068290117a4 namespace=k8s.io Mar 17 17:38:06.839719 containerd[1958]: time="2025-03-17T17:38:06.839559257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:07.547451 containerd[1958]: time="2025-03-17T17:38:07.547366348Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:38:07.585329 containerd[1958]: time="2025-03-17T17:38:07.585252532Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2\"" Mar 17 17:38:07.588036 containerd[1958]: time="2025-03-17T17:38:07.587904076Z" level=info msg="StartContainer for \"529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2\"" Mar 17 17:38:07.661306 systemd[1]: Started cri-containerd-529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2.scope - libcontainer container 529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2. Mar 17 17:38:07.707101 containerd[1958]: time="2025-03-17T17:38:07.706930061Z" level=info msg="StartContainer for \"529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2\" returns successfully" Mar 17 17:38:07.721581 systemd[1]: cri-containerd-529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2.scope: Deactivated successfully. Mar 17 17:38:07.765791 containerd[1958]: time="2025-03-17T17:38:07.765686189Z" level=info msg="shim disconnected" id=529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2 namespace=k8s.io Mar 17 17:38:07.765791 containerd[1958]: time="2025-03-17T17:38:07.765761297Z" level=warning msg="cleaning up after shim disconnected" id=529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2 namespace=k8s.io Mar 17 17:38:07.765791 containerd[1958]: time="2025-03-17T17:38:07.765779837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:08.342390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-529ddda60997eb56175e311a609b56c1b1c448dc4e5006fe122b6d81c525cab2-rootfs.mount: Deactivated successfully. 
Mar 17 17:38:08.547963 containerd[1958]: time="2025-03-17T17:38:08.547890497Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:38:08.585306 containerd[1958]: time="2025-03-17T17:38:08.584321357Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742\"" Mar 17 17:38:08.585584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650779890.mount: Deactivated successfully. Mar 17 17:38:08.587839 containerd[1958]: time="2025-03-17T17:38:08.587670785Z" level=info msg="StartContainer for \"d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742\"" Mar 17 17:38:08.652304 systemd[1]: Started cri-containerd-d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742.scope - libcontainer container d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742. Mar 17 17:38:08.711600 containerd[1958]: time="2025-03-17T17:38:08.711380466Z" level=info msg="StartContainer for \"d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742\" returns successfully" Mar 17 17:38:08.715019 systemd[1]: cri-containerd-d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742.scope: Deactivated successfully. 
Mar 17 17:38:08.762373 containerd[1958]: time="2025-03-17T17:38:08.762292134Z" level=info msg="shim disconnected" id=d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742 namespace=k8s.io Mar 17 17:38:08.762373 containerd[1958]: time="2025-03-17T17:38:08.762368574Z" level=warning msg="cleaning up after shim disconnected" id=d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742 namespace=k8s.io Mar 17 17:38:08.762845 containerd[1958]: time="2025-03-17T17:38:08.762390078Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:09.267526 kubelet[3281]: E0317 17:38:09.267461 3281 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:38:09.342713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3806aaee1059dd60f46e7c714535b46d7236a4bc11f938662cfc81f92d86742-rootfs.mount: Deactivated successfully. Mar 17 17:38:09.554898 containerd[1958]: time="2025-03-17T17:38:09.554313750Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:38:09.587200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611093470.mount: Deactivated successfully. 
Mar 17 17:38:09.587695 containerd[1958]: time="2025-03-17T17:38:09.587626590Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9\"" Mar 17 17:38:09.592835 containerd[1958]: time="2025-03-17T17:38:09.592333998Z" level=info msg="StartContainer for \"d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9\"" Mar 17 17:38:09.671279 systemd[1]: Started cri-containerd-d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9.scope - libcontainer container d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9. Mar 17 17:38:09.719883 systemd[1]: cri-containerd-d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9.scope: Deactivated successfully. Mar 17 17:38:09.723517 containerd[1958]: time="2025-03-17T17:38:09.723452779Z" level=info msg="StartContainer for \"d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9\" returns successfully" Mar 17 17:38:09.771949 containerd[1958]: time="2025-03-17T17:38:09.771815779Z" level=info msg="shim disconnected" id=d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9 namespace=k8s.io Mar 17 17:38:09.772300 containerd[1958]: time="2025-03-17T17:38:09.771963883Z" level=warning msg="cleaning up after shim disconnected" id=d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9 namespace=k8s.io Mar 17 17:38:09.772300 containerd[1958]: time="2025-03-17T17:38:09.772011895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:38:10.342730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5f992a0621e867fd3a78132aca7eceb1b54a37cb28b1955fb51eb62165f39f9-rootfs.mount: Deactivated successfully. 
Mar 17 17:38:10.563535 containerd[1958]: time="2025-03-17T17:38:10.563444683Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:38:10.609723 containerd[1958]: time="2025-03-17T17:38:10.608906336Z" level=info msg="CreateContainer within sandbox \"82cfd957250dec146c00e0a664a3a8529756e82c8dc4c7f49d0298123c783796\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1\"" Mar 17 17:38:10.610773 containerd[1958]: time="2025-03-17T17:38:10.610723640Z" level=info msg="StartContainer for \"c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1\"" Mar 17 17:38:10.691297 systemd[1]: Started cri-containerd-c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1.scope - libcontainer container c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1. Mar 17 17:38:10.762717 containerd[1958]: time="2025-03-17T17:38:10.762550568Z" level=info msg="StartContainer for \"c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1\" returns successfully" Mar 17 17:38:11.343935 systemd[1]: run-containerd-runc-k8s.io-c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1-runc.J78q4t.mount: Deactivated successfully. 
Mar 17 17:38:11.590188 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:38:11.608249 kubelet[3281]: I0317 17:38:11.607665 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8ddm4" podStartSLOduration=5.607643396 podStartE2EDuration="5.607643396s" podCreationTimestamp="2025-03-17 17:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:38:11.60716936 +0000 UTC m=+117.931059490" watchObservedRunningTime="2025-03-17 17:38:11.607643396 +0000 UTC m=+117.931533514"
Mar 17 17:38:12.939015 kubelet[3281]: E0317 17:38:12.936696 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-mwwxn" podUID="792c685d-e9d3-445f-880d-e0fcc8e58c03"
Mar 17 17:38:13.328834 kubelet[3281]: E0317 17:38:13.328581 3281 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50242->127.0.0.1:34215: write tcp 127.0.0.1:50242->127.0.0.1:34215: write: broken pipe
Mar 17 17:38:13.880805 containerd[1958]: time="2025-03-17T17:38:13.880736436Z" level=info msg="StopPodSandbox for \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\""
Mar 17 17:38:13.881377 containerd[1958]: time="2025-03-17T17:38:13.880888932Z" level=info msg="TearDown network for sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" successfully"
Mar 17 17:38:13.881377 containerd[1958]: time="2025-03-17T17:38:13.880912284Z" level=info msg="StopPodSandbox for \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" returns successfully"
Mar 17 17:38:13.883048 containerd[1958]: time="2025-03-17T17:38:13.881916396Z" level=info msg="RemovePodSandbox for \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\""
Mar 17 17:38:13.883048 containerd[1958]: time="2025-03-17T17:38:13.882005568Z" level=info msg="Forcibly stopping sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\""
Mar 17 17:38:13.883048 containerd[1958]: time="2025-03-17T17:38:13.882114528Z" level=info msg="TearDown network for sandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" successfully"
Mar 17 17:38:13.889415 containerd[1958]: time="2025-03-17T17:38:13.889345992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:38:13.889566 containerd[1958]: time="2025-03-17T17:38:13.889450572Z" level=info msg="RemovePodSandbox \"4674bbe30149869f0478ca71296130b7df9448cc34aaf24fdcc5a968156e3fa5\" returns successfully"
Mar 17 17:38:13.890183 containerd[1958]: time="2025-03-17T17:38:13.890136648Z" level=info msg="StopPodSandbox for \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\""
Mar 17 17:38:13.890312 containerd[1958]: time="2025-03-17T17:38:13.890277852Z" level=info msg="TearDown network for sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" successfully"
Mar 17 17:38:13.890312 containerd[1958]: time="2025-03-17T17:38:13.890300268Z" level=info msg="StopPodSandbox for \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" returns successfully"
Mar 17 17:38:13.890766 containerd[1958]: time="2025-03-17T17:38:13.890718996Z" level=info msg="RemovePodSandbox for \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\""
Mar 17 17:38:13.890843 containerd[1958]: time="2025-03-17T17:38:13.890766564Z" level=info msg="Forcibly stopping sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\""
Mar 17 17:38:13.890892 containerd[1958]: time="2025-03-17T17:38:13.890854044Z" level=info msg="TearDown network for sandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" successfully"
Mar 17 17:38:13.901249 containerd[1958]: time="2025-03-17T17:38:13.901169832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:38:13.901458 containerd[1958]: time="2025-03-17T17:38:13.901274748Z" level=info msg="RemovePodSandbox \"9cbc062a50f2356f03417635b822f004d77332b1b4b2e22fc3376da9f5a6496a\" returns successfully"
Mar 17 17:38:15.863955 systemd-networkd[1870]: lxc_health: Link UP
Mar 17 17:38:15.872885 systemd-networkd[1870]: lxc_health: Gained carrier
Mar 17 17:38:15.879622 (udev-worker)[6220]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:38:17.408248 systemd-networkd[1870]: lxc_health: Gained IPv6LL
Mar 17 17:38:20.211165 ntpd[1932]: Listen normally on 14 lxc_health [fe80::e0d0:89ff:fecf:c802%14]:123
Mar 17 17:38:20.212847 ntpd[1932]: 17 Mar 17:38:20 ntpd[1932]: Listen normally on 14 lxc_health [fe80::e0d0:89ff:fecf:c802%14]:123
Mar 17 17:38:22.508619 systemd[1]: run-containerd-runc-k8s.io-c71707e5d8cf0e60971873959693b3797cd0a9486fb29675b0a8954bf0bc4aa1-runc.yFx4sB.mount: Deactivated successfully.
Mar 17 17:38:22.623525 sshd[5443]: Connection closed by 147.75.109.163 port 35350
Mar 17 17:38:22.625326 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
Mar 17 17:38:22.632453 systemd[1]: sshd@28-172.31.28.49:22-147.75.109.163:35350.service: Deactivated successfully.
Mar 17 17:38:22.637807 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:38:22.644907 systemd-logind[1938]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:38:22.649510 systemd-logind[1938]: Removed session 29.
Mar 17 17:38:48.336069 systemd[1]: cri-containerd-ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628.scope: Deactivated successfully.
Mar 17 17:38:48.336685 systemd[1]: cri-containerd-ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628.scope: Consumed 4.971s CPU time, 55.9M memory peak.
Mar 17 17:38:48.375529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628-rootfs.mount: Deactivated successfully.
Mar 17 17:38:48.384255 containerd[1958]: time="2025-03-17T17:38:48.384137467Z" level=info msg="shim disconnected" id=ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628 namespace=k8s.io
Mar 17 17:38:48.384255 containerd[1958]: time="2025-03-17T17:38:48.384241315Z" level=warning msg="cleaning up after shim disconnected" id=ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628 namespace=k8s.io
Mar 17 17:38:48.384892 containerd[1958]: time="2025-03-17T17:38:48.384262819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:48.669153 kubelet[3281]: I0317 17:38:48.668277 3281 scope.go:117] "RemoveContainer" containerID="ef86e97ba3b1aeb9d08b67e9eefabe0e88349c624d7037d742adf07c87b3d628"
Mar 17 17:38:48.673034 containerd[1958]: time="2025-03-17T17:38:48.672878553Z" level=info msg="CreateContainer within sandbox \"0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:38:48.694622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618031731.mount: Deactivated successfully.
Mar 17 17:38:48.701940 containerd[1958]: time="2025-03-17T17:38:48.701870445Z" level=info msg="CreateContainer within sandbox \"0cd4f9775684ca808d9ca79bb17c024a7c3c97ca2f8e1d4272fc4c70fed71dc6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a4929f8b1b1ed3f6161fefd30635ce1c8b1eacdb3c879cad082b30d8dfdccebd\""
Mar 17 17:38:48.703014 containerd[1958]: time="2025-03-17T17:38:48.702945501Z" level=info msg="StartContainer for \"a4929f8b1b1ed3f6161fefd30635ce1c8b1eacdb3c879cad082b30d8dfdccebd\""
Mar 17 17:38:48.763588 systemd[1]: Started cri-containerd-a4929f8b1b1ed3f6161fefd30635ce1c8b1eacdb3c879cad082b30d8dfdccebd.scope - libcontainer container a4929f8b1b1ed3f6161fefd30635ce1c8b1eacdb3c879cad082b30d8dfdccebd.
Mar 17 17:38:48.841353 containerd[1958]: time="2025-03-17T17:38:48.841207545Z" level=info msg="StartContainer for \"a4929f8b1b1ed3f6161fefd30635ce1c8b1eacdb3c879cad082b30d8dfdccebd\" returns successfully"
Mar 17 17:38:54.141421 systemd[1]: cri-containerd-9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602.scope: Deactivated successfully.
Mar 17 17:38:54.142484 systemd[1]: cri-containerd-9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602.scope: Consumed 3.757s CPU time, 22.3M memory peak.
Mar 17 17:38:54.185509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602-rootfs.mount: Deactivated successfully.
Mar 17 17:38:54.195255 containerd[1958]: time="2025-03-17T17:38:54.195099852Z" level=info msg="shim disconnected" id=9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602 namespace=k8s.io
Mar 17 17:38:54.195878 containerd[1958]: time="2025-03-17T17:38:54.195840396Z" level=warning msg="cleaning up after shim disconnected" id=9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602 namespace=k8s.io
Mar 17 17:38:54.195943 containerd[1958]: time="2025-03-17T17:38:54.195899112Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:38:54.689753 kubelet[3281]: I0317 17:38:54.689671 3281 scope.go:117] "RemoveContainer" containerID="9bca4478941c5830bbe82ee0135990f9dbcd81fbd62974e1dd8a61685ed6d602"
Mar 17 17:38:54.694300 containerd[1958]: time="2025-03-17T17:38:54.694232834Z" level=info msg="CreateContainer within sandbox \"120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 17:38:54.720651 containerd[1958]: time="2025-03-17T17:38:54.720506283Z" level=info msg="CreateContainer within sandbox \"120ca89bad56145e24db2837558204c4ba534678833bbd0834494a51fbce3a4f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2f89cc910ba56b28f29757ee203b17b6f38ee3801b10f859275f8ce0db381577\""
Mar 17 17:38:54.722019 containerd[1958]: time="2025-03-17T17:38:54.721185387Z" level=info msg="StartContainer for \"2f89cc910ba56b28f29757ee203b17b6f38ee3801b10f859275f8ce0db381577\""
Mar 17 17:38:54.782287 systemd[1]: Started cri-containerd-2f89cc910ba56b28f29757ee203b17b6f38ee3801b10f859275f8ce0db381577.scope - libcontainer container 2f89cc910ba56b28f29757ee203b17b6f38ee3801b10f859275f8ce0db381577.
Mar 17 17:38:54.846035 containerd[1958]: time="2025-03-17T17:38:54.845921235Z" level=info msg="StartContainer for \"2f89cc910ba56b28f29757ee203b17b6f38ee3801b10f859275f8ce0db381577\" returns successfully"
Mar 17 17:38:56.314479 kubelet[3281]: E0317 17:38:56.314356 3281 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 17 17:39:06.315333 kubelet[3281]: E0317 17:39:06.315248 3281 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-49?timeout=10s\": context deadline exceeded"