Mar 17 17:24:05.189182 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 17 17:24:05.190545 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Mar 17 16:05:23 -00 2025
Mar 17 17:24:05.190591 kernel: KASLR disabled due to lack of seed
Mar 17 17:24:05.190608 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:24:05.190624 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Mar 17 17:24:05.190654 kernel: secureboot: Secure boot disabled
Mar 17 17:24:05.190677 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:24:05.190693 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 17 17:24:05.190709 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 17 17:24:05.190725 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 17 17:24:05.190746 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Mar 17 17:24:05.190762 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 17 17:24:05.190777 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 17 17:24:05.190793 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 17 17:24:05.190811 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 17 17:24:05.190831 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 17 17:24:05.190849 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 17 17:24:05.190865 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 17 17:24:05.190881 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 17 17:24:05.190897 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 17 17:24:05.190913 kernel: printk: bootconsole [uart0] enabled
Mar 17 17:24:05.190930 kernel: NUMA: Failed to initialise from firmware
Mar 17 17:24:05.190948 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:24:05.190965 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 17 17:24:05.190983 kernel: Zone ranges:
Mar 17 17:24:05.191000 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 17 17:24:05.191021 kernel: DMA32 empty
Mar 17 17:24:05.191038 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 17 17:24:05.191054 kernel: Movable zone start for each node
Mar 17 17:24:05.191070 kernel: Early memory node ranges
Mar 17 17:24:05.191086 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 17 17:24:05.191103 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 17 17:24:05.191119 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 17 17:24:05.191135 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 17 17:24:05.191151 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 17 17:24:05.191167 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 17 17:24:05.191183 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 17 17:24:05.191199 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 17 17:24:05.191219 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 17 17:24:05.191264 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 17 17:24:05.191291 kernel: psci: probing for conduit method from ACPI.
Mar 17 17:24:05.191308 kernel: psci: PSCIv1.0 detected in firmware.
Mar 17 17:24:05.191325 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 17:24:05.191346 kernel: psci: Trusted OS migration not required
Mar 17 17:24:05.191363 kernel: psci: SMC Calling Convention v1.1
Mar 17 17:24:05.191380 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 17 17:24:05.191396 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 17 17:24:05.191414 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 17 17:24:05.191430 kernel: Detected PIPT I-cache on CPU0
Mar 17 17:24:05.191448 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 17:24:05.191464 kernel: CPU features: detected: Spectre-v2
Mar 17 17:24:05.191481 kernel: CPU features: detected: Spectre-v3a
Mar 17 17:24:05.191497 kernel: CPU features: detected: Spectre-BHB
Mar 17 17:24:05.191514 kernel: CPU features: detected: ARM erratum 1742098
Mar 17 17:24:05.191531 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 17 17:24:05.191552 kernel: alternatives: applying boot alternatives
Mar 17 17:24:05.191571 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:05.191590 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:24:05.191607 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:24:05.191624 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:24:05.191640 kernel: Fallback order for Node 0: 0
Mar 17 17:24:05.191657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 17 17:24:05.191674 kernel: Policy zone: Normal
Mar 17 17:24:05.191691 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:24:05.191707 kernel: software IO TLB: area num 2.
Mar 17 17:24:05.191728 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 17 17:24:05.191746 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8100K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved)
Mar 17 17:24:05.191763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:24:05.191780 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:24:05.191798 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:24:05.191816 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:24:05.191833 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:24:05.191850 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:24:05.191868 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
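The kernel command line logged above drives most of what follows: root=LABEL=ROOT and mount.usr=/dev/mapper/usr select the root and verity-protected USR partitions, flatcar.first_boot=detected triggers the Ignition run, and verity.usrhash pins the dm-verity root hash. A minimal, illustrative Python sketch of splitting such a line into key/value pairs (not part of the boot flow; a plain dict keeps only the last of repeated keys such as console=, while the kernel keeps all of them):

    def parse_cmdline(cmdline: str) -> dict:
        # Split the space-separated kernel command line into {key: value};
        # bare flags such as 'earlycon' map to None.
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    # On a live system the same string is readable from /proc/cmdline.
    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("verity.usrhash"))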
Mar 17 17:24:05.191884 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:24:05.191901 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 17:24:05.191922 kernel: GICv3: 96 SPIs implemented
Mar 17 17:24:05.191939 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 17:24:05.191956 kernel: Root IRQ handler: gic_handle_irq
Mar 17 17:24:05.191973 kernel: GICv3: GICv3 features: 16 PPIs
Mar 17 17:24:05.191989 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 17 17:24:05.192006 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 17 17:24:05.192023 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 17:24:05.192040 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 17:24:05.192057 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 17 17:24:05.192074 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 17 17:24:05.192091 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 17 17:24:05.192108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:24:05.192129 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 17 17:24:05.192146 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 17 17:24:05.192163 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 17 17:24:05.192180 kernel: Console: colour dummy device 80x25
Mar 17 17:24:05.192198 kernel: printk: console [tty1] enabled
Mar 17 17:24:05.192215 kernel: ACPI: Core revision 20230628
Mar 17 17:24:05.192916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 17 17:24:05.192941 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:24:05.192959 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:24:05.192977 kernel: landlock: Up and running.
Mar 17 17:24:05.193003 kernel: SELinux: Initializing.
Mar 17 17:24:05.193021 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:05.193039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:24:05.193057 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:05.193074 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:24:05.193092 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:24:05.193110 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:24:05.193128 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 17 17:24:05.193150 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 17 17:24:05.193167 kernel: Remapping and enabling EFI services.
Mar 17 17:24:05.193185 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:24:05.193202 kernel: Detected PIPT I-cache on CPU1
Mar 17 17:24:05.193220 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 17 17:24:05.193270 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 17 17:24:05.193289 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 17 17:24:05.193307 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:24:05.193325 kernel: SMP: Total of 2 processors activated.
Mar 17 17:24:05.193343 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 17:24:05.193367 kernel: CPU features: detected: 32-bit EL1 Support
Mar 17 17:24:05.193385 kernel: CPU features: detected: CRC32 instructions
Mar 17 17:24:05.193413 kernel: CPU: All CPU(s) started at EL1
Mar 17 17:24:05.193436 kernel: alternatives: applying system-wide alternatives
Mar 17 17:24:05.193454 kernel: devtmpfs: initialized
Mar 17 17:24:05.193472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:24:05.193490 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:24:05.193508 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:24:05.193527 kernel: SMBIOS 3.0.0 present.
Mar 17 17:24:05.193549 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 17 17:24:05.193567 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:24:05.193586 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 17:24:05.193604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 17:24:05.193623 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 17:24:05.193641 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:24:05.193660 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1
Mar 17 17:24:05.193682 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:24:05.193700 kernel: cpuidle: using governor menu
Mar 17 17:24:05.193719 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 17:24:05.193737 kernel: ASID allocator initialised with 65536 entries
Mar 17 17:24:05.193755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:24:05.193773 kernel: Serial: AMBA PL011 UART driver
Mar 17 17:24:05.193792 kernel: Modules: 17424 pages in range for non-PLT usage
Mar 17 17:24:05.193810 kernel: Modules: 508944 pages in range for PLT usage
Mar 17 17:24:05.193828 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:24:05.193850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:24:05.193869 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 17:24:05.193887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 17 17:24:05.193905 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:24:05.193924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:24:05.193942 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 17:24:05.193961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 17 17:24:05.193979 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:24:05.193998 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:24:05.194021 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:24:05.194041 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:24:05.196058 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:24:05.196089 kernel: ACPI: Interpreter enabled
Mar 17 17:24:05.196108 kernel: ACPI: Using GIC for interrupt routing
Mar 17 17:24:05.196126 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 17:24:05.196145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Mar 17 17:24:05.196486 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:24:05.196701 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 17:24:05.196903 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 17:24:05.197109 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Mar 17 17:24:05.199404 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Mar 17 17:24:05.199448 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 17 17:24:05.199468 kernel: acpiphp: Slot [1] registered
Mar 17 17:24:05.199487 kernel: acpiphp: Slot [2] registered
Mar 17 17:24:05.199505 kernel: acpiphp: Slot [3] registered
Mar 17 17:24:05.199533 kernel: acpiphp: Slot [4] registered
Mar 17 17:24:05.199552 kernel: acpiphp: Slot [5] registered
Mar 17 17:24:05.199570 kernel: acpiphp: Slot [6] registered
Mar 17 17:24:05.199587 kernel: acpiphp: Slot [7] registered
Mar 17 17:24:05.199605 kernel: acpiphp: Slot [8] registered
Mar 17 17:24:05.199623 kernel: acpiphp: Slot [9] registered
Mar 17 17:24:05.199641 kernel: acpiphp: Slot [10] registered
Mar 17 17:24:05.199660 kernel: acpiphp: Slot [11] registered
Mar 17 17:24:05.199678 kernel: acpiphp: Slot [12] registered
Mar 17 17:24:05.199695 kernel: acpiphp: Slot [13] registered
Mar 17 17:24:05.199718 kernel: acpiphp: Slot [14] registered
Mar 17 17:24:05.199736 kernel: acpiphp: Slot [15] registered
Mar 17 17:24:05.199754 kernel: acpiphp: Slot [16] registered
Mar 17 17:24:05.199772 kernel: acpiphp: Slot [17] registered
Mar 17 17:24:05.199790 kernel: acpiphp: Slot [18] registered
Mar 17 17:24:05.199807 kernel: acpiphp: Slot [19] registered
Mar 17 17:24:05.199826 kernel: acpiphp: Slot [20] registered
Mar 17 17:24:05.199844 kernel: acpiphp: Slot [21] registered
Mar 17 17:24:05.199862 kernel: acpiphp: Slot [22] registered
Mar 17 17:24:05.199884 kernel: acpiphp: Slot [23] registered
Mar 17 17:24:05.199902 kernel: acpiphp: Slot [24] registered
Mar 17 17:24:05.199920 kernel: acpiphp: Slot [25] registered
Mar 17 17:24:05.199937 kernel: acpiphp: Slot [26] registered
Mar 17 17:24:05.199955 kernel: acpiphp: Slot [27] registered
Mar 17 17:24:05.199973 kernel: acpiphp: Slot [28] registered
Mar 17 17:24:05.199991 kernel: acpiphp: Slot [29] registered
Mar 17 17:24:05.200009 kernel: acpiphp: Slot [30] registered
Mar 17 17:24:05.200027 kernel: acpiphp: Slot [31] registered
Mar 17 17:24:05.200045 kernel: PCI host bridge to bus 0000:00
Mar 17 17:24:05.200295 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 17 17:24:05.200489 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 17:24:05.200674 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:24:05.200858 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Mar 17 17:24:05.201088 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 17 17:24:05.203462 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 17 17:24:05.203707 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 17 17:24:05.203929 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 17 17:24:05.204130 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 17 17:24:05.204360 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:24:05.204591 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 17 17:24:05.204802 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 17 17:24:05.205013 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 17 17:24:05.207304 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 17 17:24:05.207713 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 17 17:24:05.207928 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Mar 17 17:24:05.208207 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Mar 17 17:24:05.208554 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Mar 17 17:24:05.208798 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Mar 17 17:24:05.209039 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Mar 17 17:24:05.209309 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 17 17:24:05.209498 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 17:24:05.209677 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 17 17:24:05.209702 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 17:24:05.209721 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 17:24:05.209740 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 17:24:05.209758 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 17:24:05.209776 kernel: iommu: Default domain type: Translated
Mar 17 17:24:05.209802 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 17:24:05.209821 kernel: efivars: Registered efivars operations
Mar 17 17:24:05.209839 kernel: vgaarb: loaded
Mar 17 17:24:05.209857 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 17:24:05.209874 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:24:05.209892 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:24:05.209910 kernel: pnp: PnP ACPI init
Mar 17 17:24:05.210114 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 17 17:24:05.210146 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 17:24:05.210165 kernel: NET: Registered PF_INET protocol family
Mar 17 17:24:05.210183 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:24:05.210201 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:24:05.210220 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:24:05.211277 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:24:05.211297 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:24:05.211316 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:24:05.211334 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:05.211358 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:24:05.211377 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:24:05.211395 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:24:05.211413 kernel: kvm [1]: HYP mode not available
Mar 17 17:24:05.211431 kernel: Initialise system trusted keyrings
Mar 17 17:24:05.211449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:24:05.211468 kernel: Key type asymmetric registered
Mar 17 17:24:05.211486 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:24:05.211504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 17 17:24:05.211526 kernel: io scheduler mq-deadline registered
Mar 17 17:24:05.211544 kernel: io scheduler kyber registered
Mar 17 17:24:05.211562 kernel: io scheduler bfq registered
Mar 17 17:24:05.211806 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 17 17:24:05.211836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 17:24:05.211855 kernel: ACPI: button: Power Button [PWRB]
Mar 17 17:24:05.211873 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 17 17:24:05.211892 kernel: ACPI: button: Sleep Button [SLPB]
Mar 17 17:24:05.211917 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:24:05.211936 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 17 17:24:05.212150 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 17 17:24:05.212176 kernel: printk: console [ttyS0] disabled
Mar 17 17:24:05.212195 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 17 17:24:05.212214 kernel: printk: console [ttyS0] enabled
Mar 17 17:24:05.212271 kernel: printk: bootconsole [uart0] disabled
Mar 17 17:24:05.212292 kernel: thunder_xcv, ver 1.0
Mar 17 17:24:05.212311 kernel: thunder_bgx, ver 1.0
Mar 17 17:24:05.212329 kernel: nicpf, ver 1.0
Mar 17 17:24:05.212354 kernel: nicvf, ver 1.0
Mar 17 17:24:05.212569 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 17:24:05.212762 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T17:24:04 UTC (1742232244)
Mar 17 17:24:05.212787 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 17:24:05.212806 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 17 17:24:05.212824 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 17 17:24:05.212843 kernel: watchdog: Hard watchdog permanently disabled
Mar 17 17:24:05.212866 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:24:05.212885 kernel: Segment Routing with IPv6
Mar 17 17:24:05.212905 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:24:05.212923 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:24:05.212941 kernel: Key type dns_resolver registered
Mar 17 17:24:05.212959 kernel: registered taskstats version 1
Mar 17 17:24:05.212977 kernel: Loading compiled-in X.509 certificates
Mar 17 17:24:05.212996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 74c9b4f5dfad711856d7363c976664fc02c1e24c'
Mar 17 17:24:05.213014 kernel: Key type .fscrypt registered
Mar 17 17:24:05.213032 kernel: Key type fscrypt-provisioning registered
Mar 17 17:24:05.213055 kernel: ima: No TPM chip found, activating TPM-bypass!
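The rtc-efi line above prints the same instant twice, as a UTC timestamp and as its Unix epoch value. The conversion is easy to reproduce (an illustrative check in Python, not part of the boot flow):

    from datetime import datetime, timezone

    # 1742232244 is the epoch value from the rtc-efi line above.
    print(datetime.fromtimestamp(1742232244, tz=timezone.utc).isoformat())
    # -> 2025-03-17T17:24:04+00:00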
Mar 17 17:24:05.213073 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:24:05.213092 kernel: ima: No architecture policies found
Mar 17 17:24:05.213110 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 17:24:05.213128 kernel: clk: Disabling unused clocks
Mar 17 17:24:05.213146 kernel: Freeing unused kernel memory: 39744K
Mar 17 17:24:05.213164 kernel: Run /init as init process
Mar 17 17:24:05.213182 kernel: with arguments:
Mar 17 17:24:05.213200 kernel: /init
Mar 17 17:24:05.213223 kernel: with environment:
Mar 17 17:24:05.213302 kernel: HOME=/
Mar 17 17:24:05.213321 kernel: TERM=linux
Mar 17 17:24:05.213339 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:24:05.213362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:24:05.213386 systemd[1]: Detected virtualization amazon.
Mar 17 17:24:05.213406 systemd[1]: Detected architecture arm64.
Mar 17 17:24:05.213431 systemd[1]: Running in initrd.
Mar 17 17:24:05.213451 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:24:05.213470 systemd[1]: Hostname set to <localhost>.
Mar 17 17:24:05.213490 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:24:05.213509 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:24:05.213529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:24:05.213550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:24:05.213572 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:24:05.213598 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:24:05.213620 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:24:05.213649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:24:05.213675 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:24:05.213709 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:24:05.213735 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:24:05.213756 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:24:05.213783 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:24:05.213803 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:24:05.213822 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:24:05.213842 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:24:05.213861 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:24:05.213881 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:24:05.213901 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:24:05.213920 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:24:05.213940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:24:05.213964 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:24:05.213984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:24:05.214003 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:24:05.214023 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:24:05.214043 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:24:05.214062 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:24:05.214081 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:24:05.214101 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:24:05.214125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:24:05.214145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:05.214164 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:24:05.214184 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:24:05.214203 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:24:05.214336 systemd-journald[251]: Collecting audit messages is disabled.
Mar 17 17:24:05.214388 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:24:05.214408 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:24:05.214427 systemd-journald[251]: Journal started
Mar 17 17:24:05.214468 systemd-journald[251]: Runtime Journal (/run/log/journal/ec25de4a601b53e412ec71132e6dc7ac) is 8.0M, max 75.3M, 67.3M free.
Mar 17 17:24:05.171217 systemd-modules-load[252]: Inserted module 'overlay'
Mar 17 17:24:05.224303 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:24:05.226016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:05.239530 kernel: Bridge firewalling registered
Mar 17 17:24:05.230992 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:24:05.237319 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 17 17:24:05.242391 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:24:05.257501 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:05.266724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:24:05.273549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:24:05.286698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:24:05.312725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:24:05.323429 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:24:05.341106 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:24:05.342840 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:05.358675 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:24:05.373527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:24:05.397579 dracut-cmdline[287]: dracut-dracut-053
Mar 17 17:24:05.403599 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=31b104f73129b84fa679201ebe02fbfd197d071bbf0576d6ccc5c5442bcbb405
Mar 17 17:24:05.454702 systemd-resolved[290]: Positive Trust Anchors:
Mar 17 17:24:05.454760 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:24:05.454823 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:24:05.540272 kernel: SCSI subsystem initialized
Mar 17 17:24:05.545260 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:24:05.558271 kernel: iscsi: registered transport (tcp)
Mar 17 17:24:05.580315 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:24:05.580388 kernel: QLogic iSCSI HBA Driver
Mar 17 17:24:05.665272 kernel: random: crng init done
Mar 17 17:24:05.665530 systemd-resolved[290]: Defaulting to hostname 'linux'.
Mar 17 17:24:05.668954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:24:05.671302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:24:05.695552 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:24:05.711623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:24:05.742367 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:24:05.742483 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:24:05.742511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:24:05.809274 kernel: raid6: neonx8 gen() 6725 MB/s
Mar 17 17:24:05.826262 kernel: raid6: neonx4 gen() 6583 MB/s
Mar 17 17:24:05.843262 kernel: raid6: neonx2 gen() 5484 MB/s
Mar 17 17:24:05.860261 kernel: raid6: neonx1 gen() 3988 MB/s
Mar 17 17:24:05.877263 kernel: raid6: int64x8 gen() 3827 MB/s
Mar 17 17:24:05.894268 kernel: raid6: int64x4 gen() 3710 MB/s
Mar 17 17:24:05.911261 kernel: raid6: int64x2 gen() 3609 MB/s
Mar 17 17:24:05.929033 kernel: raid6: int64x1 gen() 2774 MB/s
Mar 17 17:24:05.929066 kernel: raid6: using algorithm neonx8 gen() 6725 MB/s
Mar 17 17:24:05.947032 kernel: raid6: .... xor() 4858 MB/s, rmw enabled
Mar 17 17:24:05.947069 kernel: raid6: using neon recovery algorithm
Mar 17 17:24:05.955532 kernel: xor: measuring software checksum speed
Mar 17 17:24:05.955587 kernel: 8regs : 10624 MB/sec
Mar 17 17:24:05.956646 kernel: 32regs : 11937 MB/sec
Mar 17 17:24:05.957815 kernel: arm64_neon : 9569 MB/sec
Mar 17 17:24:05.957856 kernel: xor: using function: 32regs (11937 MB/sec)
Mar 17 17:24:06.042326 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:24:06.060145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:24:06.070550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:24:06.113247 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Mar 17 17:24:06.122862 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:24:06.135599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:24:06.169299 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Mar 17 17:24:06.225712 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:24:06.247502 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:24:06.362116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:24:06.376453 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:24:06.418448 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:24:06.420931 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:24:06.436692 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:24:06.453380 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:24:06.462514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:24:06.503430 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:24:06.574375 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 17:24:06.574439 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 17 17:24:06.591822 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 17 17:24:06.592080 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 17 17:24:06.592344 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f9:c4:0d:11:85
Mar 17 17:24:06.592168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:24:06.592425 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:06.601300 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:06.603529 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:24:06.614760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:24:06.616883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:06.621162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:06.643338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:24:06.665374 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 17 17:24:06.665439 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 17 17:24:06.674277 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 17 17:24:06.684259 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:24:06.684330 kernel: GPT:9289727 != 16777215
Mar 17 17:24:06.684355 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:24:06.684379 kernel: GPT:9289727 != 16777215
Mar 17 17:24:06.684403 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:24:06.684426 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:06.686894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:24:06.697561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:24:06.742596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:24:06.811261 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (525)
Mar 17 17:24:06.819282 kernel: BTRFS: device fsid c0c482e3-6885-4a4e-b31c-6bc8f8c403e7 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (542)
Mar 17 17:24:06.863019 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 17 17:24:06.928379 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 17 17:24:06.943881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:06.949482 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 17 17:24:06.966518 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 17 17:24:06.986602 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:24:06.999263 disk-uuid[662]: Primary Header is updated.
Mar 17 17:24:06.999263 disk-uuid[662]: Secondary Entries is updated.
Mar 17 17:24:06.999263 disk-uuid[662]: Secondary Header is updated.
Mar 17 17:24:07.009269 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:08.029288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 17 17:24:08.029489 disk-uuid[664]: The operation has completed successfully.
Mar 17 17:24:08.218788 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:24:08.219397 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:24:08.256527 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:24:08.277166 sh[924]: Success
Mar 17 17:24:08.302264 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 17:24:08.429941 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:24:08.435295 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:24:08.438580 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
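The GPT complaints above (9289727 != 16777215) are expected on a first boot: the backup GPT header still sits where the end of the smaller raw image was, while the EBS volume is larger, and disk-uuid.service rewrites the headers right afterwards ("Primary Header is updated" etc.), which is why the subsequent partition rescan is clean. A manual repair would do the equivalent of this sketch, assuming sgdisk (gdisk) and partprobe (parted) are installed and root privileges; illustrative only, since Flatcar handles it automatically:

    import subprocess

    # Relocate the backup GPT header to the true end of the grown disk,
    # then ask the kernel to re-read the partition table.
    # Device name taken from the log above.
    device = "/dev/nvme0n1"
    subprocess.run(["sgdisk", "-e", device], check=True)   # move backup header to end
    subprocess.run(["partprobe", device], check=True)      # rescan partitions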
Mar 17 17:24:08.476368 kernel: BTRFS info (device dm-0): first mount of filesystem c0c482e3-6885-4a4e-b31c-6bc8f8c403e7
Mar 17 17:24:08.476430 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:08.478116 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:24:08.479418 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:24:08.480486 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:24:08.599283 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 17 17:24:08.635014 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:24:08.636526 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:24:08.653500 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:24:08.659514 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:24:08.687811 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:08.687888 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:08.687926 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:08.696275 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:08.712550 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:24:08.715035 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:08.726705 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:24:08.737604 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:24:08.840733 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:24:08.859549 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:24:08.905529 systemd-networkd[1116]: lo: Link UP
Mar 17 17:24:08.905554 systemd-networkd[1116]: lo: Gained carrier
Mar 17 17:24:08.910520 systemd-networkd[1116]: Enumeration completed
Mar 17 17:24:08.912106 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:24:08.916425 systemd[1]: Reached target network.target - Network.
Mar 17 17:24:08.920618 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:08.920641 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:24:08.930011 systemd-networkd[1116]: eth0: Link UP
Mar 17 17:24:08.930035 systemd-networkd[1116]: eth0: Gained carrier
Mar 17 17:24:08.930054 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:24:08.953326 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.17.190/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 17 17:24:09.144098 ignition[1027]: Ignition 2.20.0
Mar 17 17:24:09.144120 ignition[1027]: Stage: fetch-offline
Mar 17 17:24:09.144978 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:09.145002 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:09.145488 ignition[1027]: Ignition finished successfully
Mar 17 17:24:09.152543 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:24:09.173192 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 17 17:24:09.195467 ignition[1126]: Ignition 2.20.0
Mar 17 17:24:09.195961 ignition[1126]: Stage: fetch
Mar 17 17:24:09.196617 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:09.196642 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:09.196838 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:09.211113 ignition[1126]: PUT result: OK
Mar 17 17:24:09.214124 ignition[1126]: parsed url from cmdline: ""
Mar 17 17:24:09.214288 ignition[1126]: no config URL provided
Mar 17 17:24:09.214308 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:24:09.215633 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:24:09.215668 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:09.216343 ignition[1126]: PUT result: OK
Mar 17 17:24:09.216422 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 17 17:24:09.220438 ignition[1126]: GET result: OK
Mar 17 17:24:09.220581 ignition[1126]: parsing config with SHA512: 10db651bd35f1dcd21a925a2e485433d0656541e97fa146c31ce4ef0e5569f24e25174341a4be680291113676ce805a6f0f65bb18bae998c5df4e60db451256d
Mar 17 17:24:09.238532 unknown[1126]: fetched base config from "system"
Mar 17 17:24:09.238569 unknown[1126]: fetched base config from "system"
Mar 17 17:24:09.238585 unknown[1126]: fetched user config from "aws"
Mar 17 17:24:09.245195 ignition[1126]: fetch: fetch complete
Mar 17 17:24:09.246582 ignition[1126]: fetch: fetch passed
Mar 17 17:24:09.246721 ignition[1126]: Ignition finished successfully
Mar 17 17:24:09.252785 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 17 17:24:09.263624 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:24:09.294152 ignition[1133]: Ignition 2.20.0
Mar 17 17:24:09.294181 ignition[1133]: Stage: kargs
Mar 17 17:24:09.295544 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:09.295574 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:09.295740 ignition[1133]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:09.296784 ignition[1133]: PUT result: OK
Mar 17 17:24:09.307403 ignition[1133]: kargs: kargs passed
Mar 17 17:24:09.308697 ignition[1133]: Ignition finished successfully
Mar 17 17:24:09.312394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:24:09.325623 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
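Each Ignition stage above performs the same IMDSv2 handshake against 169.254.169.254: a PUT to mint a short-lived session token, then GETs that present the token as a header. A minimal Python sketch of that flow (the endpoint and paths are taken from the log; the two header names are the documented IMDSv2 ones):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 300) -> str:
        # PUT /latest/api/token mints a session token, as in the log lines above.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def user_data(token: str) -> bytes:
        # GET the same versioned user-data path Ignition fetches above.
        req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read()

    print(user_data(imds_token())[:80])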
Mar 17 17:24:09.349124 ignition[1139]: Ignition 2.20.0
Mar 17 17:24:09.349147 ignition[1139]: Stage: disks
Mar 17 17:24:09.350457 ignition[1139]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:09.350718 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:09.350872 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:09.354326 ignition[1139]: PUT result: OK
Mar 17 17:24:09.362612 ignition[1139]: disks: disks passed
Mar 17 17:24:09.362728 ignition[1139]: Ignition finished successfully
Mar 17 17:24:09.364792 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:24:09.371809 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:24:09.374062 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:24:09.376372 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:24:09.378580 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:24:09.382329 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:24:09.405645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:24:09.444537 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:24:09.453651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:24:09.464443 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:24:09.555265 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 6b579bf2-7716-4d59-98eb-b92ea668693e r/w with ordered data mode. Quota mode: none.
Mar 17 17:24:09.556337 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:24:09.559760 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:24:09.580399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:09.587570 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:24:09.591501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:24:09.591594 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:24:09.591641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:24:09.614290 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166)
Mar 17 17:24:09.617189 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:24:09.623644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:09.623683 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:09.623709 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:09.631567 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:24:09.641500 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:09.644091 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:10.127713 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:24:10.148522 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:24:10.156880 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:24:10.165644 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:24:10.224380 systemd-networkd[1116]: eth0: Gained IPv6LL
Mar 17 17:24:10.514124 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:24:10.522411 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:24:10.532529 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:24:10.550067 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:24:10.552457 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:10.588999 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:24:10.599391 ignition[1278]: INFO : Ignition 2.20.0
Mar 17 17:24:10.599391 ignition[1278]: INFO : Stage: mount
Mar 17 17:24:10.602595 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:24:10.602595 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 17 17:24:10.602595 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 17 17:24:10.609215 ignition[1278]: INFO : PUT result: OK
Mar 17 17:24:10.614709 ignition[1278]: INFO : mount: mount passed
Mar 17 17:24:10.617376 ignition[1278]: INFO : Ignition finished successfully
Mar 17 17:24:10.619765 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:24:10.629453 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:24:10.658664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:24:10.685713 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290)
Mar 17 17:24:10.685777 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3dbd9b64-bd31-4292-be10-51551993b53f
Mar 17 17:24:10.685804 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 17:24:10.688405 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 17 17:24:10.693263 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 17 17:24:10.696571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:24:10.736566 ignition[1307]: INFO : Ignition 2.20.0 Mar 17 17:24:10.736566 ignition[1307]: INFO : Stage: files Mar 17 17:24:10.739761 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:24:10.739761 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:24:10.743816 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:24:10.746864 ignition[1307]: INFO : PUT result: OK Mar 17 17:24:10.751182 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:24:10.754874 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:24:10.754874 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:24:10.790589 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:24:10.793338 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:24:10.796373 unknown[1307]: wrote ssh authorized keys file for user: core Mar 17 17:24:10.798610 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:24:10.802989 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:24:10.802989 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:24:10.802989 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 17:24:10.802989 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 17:24:10.896936 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 17:24:11.065617 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 17:24:11.065617 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:24:11.072747 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 17:24:11.528429 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 17 17:24:11.675203 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 
17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 17:24:11.678704 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:24:11.701166 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 17:24:12.108997 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 17 17:24:12.456000 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 17:24:12.460126 ignition[1307]: INFO : files: op(d): [started] processing unit "containerd.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: createResultFile: 
createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:24:12.463629 ignition[1307]: INFO : files: files passed Mar 17 17:24:12.463629 ignition[1307]: INFO : Ignition finished successfully Mar 17 17:24:12.504287 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:24:12.513500 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:24:12.523630 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:24:12.541867 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:24:12.542063 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:24:12.561178 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:24:12.561178 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:24:12.570800 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:24:12.575834 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:24:12.580704 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:24:12.591528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:24:12.641952 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:24:12.644296 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:24:12.651047 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:24:12.653003 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:24:12.654963 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 17 17:24:12.662164 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:24:12.702559 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:24:12.716707 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:24:12.739816 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:24:12.744214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:24:12.748545 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:24:12.750415 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:24:12.750658 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:24:12.753324 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:24:12.756480 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:24:12.766282 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:24:12.768485 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:24:12.770861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:24:12.773125 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:24:12.775597 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:24:12.787871 systemd[1]: Stopped target sysinit.target - System Initialization. 
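Each Ignition stage in this transcript (files above, umount below) opens with "PUT http://169.254.169.254/latest/api/token: attempt #1" before touching instance metadata: that is the EC2 IMDSv2 session handshake, where a short-lived token obtained via PUT authorizes all later metadata GETs. A minimal Python sketch of that flow, using only the documented IMDSv2 endpoint and headers (an illustration, not Ignition's actual Go implementation):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # PUT /latest/api/token returns a session token valid for ttl_seconds.
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # Subsequent GETs present the token in a header.
        req = urllib.request.Request(
            IMDS + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    # Only works on an EC2 instance:
    #   token = imds_token()
    #   print(imds_get("/2021-01-03/meta-data/instance-id", token))

The same token-then-GET pattern reappears later in this log when coreos-metadata and the SSH-keys service run.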
Mar 17 17:24:12.789971 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:24:12.792042 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:24:12.793990 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:24:12.794256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:24:12.804909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:24:12.807124 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:24:12.809458 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:24:12.815826 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:24:12.818324 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:24:12.818548 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:24:12.821306 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:24:12.821544 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:24:12.833884 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:24:12.834093 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:24:12.847138 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:24:12.869025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:24:12.876739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:24:12.880976 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:24:12.886478 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:24:12.905540 ignition[1359]: INFO : Ignition 2.20.0 Mar 17 17:24:12.905540 ignition[1359]: INFO : Stage: umount Mar 17 17:24:12.905540 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:24:12.905540 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 17 17:24:12.905540 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 17 17:24:12.886745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:24:12.935003 ignition[1359]: INFO : PUT result: OK Mar 17 17:24:12.908954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:24:12.910945 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 17 17:24:12.944549 ignition[1359]: INFO : umount: umount passed Mar 17 17:24:12.944549 ignition[1359]: INFO : Ignition finished successfully Mar 17 17:24:12.942284 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:24:12.950077 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:24:12.950393 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:24:12.955427 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:24:12.955605 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:24:12.963049 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:24:12.963242 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:24:12.968243 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:24:12.968354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:24:12.975405 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Mar 17 17:24:12.975498 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:24:12.977463 systemd[1]: Stopped target network.target - Network. Mar 17 17:24:12.979101 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:24:12.979195 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:24:12.981441 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:24:12.983594 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:24:12.987099 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:24:12.990068 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:24:12.992117 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:24:13.008390 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 17:24:13.008481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:24:13.010401 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:24:13.010478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:24:13.012909 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:24:13.013002 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:24:13.024807 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:24:13.024905 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:24:13.026951 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:24:13.027035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:24:13.029309 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:24:13.031520 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:24:13.047340 systemd-networkd[1116]: eth0: DHCPv6 lease lost Mar 17 17:24:13.050732 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:24:13.051022 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 17 17:24:13.053589 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:24:13.053664 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:24:13.067613 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:24:13.071327 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:24:13.071443 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:24:13.080507 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:24:13.091440 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:24:13.093415 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:24:13.104957 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:24:13.105094 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:24:13.107719 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:24:13.107827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:24:13.111567 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:24:13.111664 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 17 17:24:13.139994 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:24:13.141764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:24:13.148413 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:24:13.148544 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:24:13.154805 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:24:13.154903 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:24:13.157345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:24:13.157449 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:24:13.161466 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:24:13.161566 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:24:13.165650 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:24:13.165739 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:24:13.195640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:24:13.198079 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:24:13.198200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:24:13.200657 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:24:13.200743 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:24:13.205752 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:24:13.205865 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:24:13.211612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:24:13.211715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:24:13.221461 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:24:13.221686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:24:13.224807 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:24:13.224991 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:24:13.244062 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:24:13.261652 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:24:13.309724 systemd[1]: Switching root. Mar 17 17:24:13.348578 systemd-journald[251]: Journal stopped Mar 17 17:24:15.931967 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
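Between "Switching root" and "Journal stopped" the initrd journal hands off to the real root. Every record in this transcript follows one fixed shape (timestamp, source name, PID, message), so the boot sequence can be reconstructed mechanically; a small illustrative parser for that shape, assuming exactly the format shown here:

    import re

    # "Mar 17 17:24:13.348578 systemd-journald[251]: Journal stopped"
    RECORD = re.compile(
        r"(?P<month>[A-Z][a-z]{2}) (?P<day>\d{1,2}) "
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<source>[\w@.()-]+)\[(?P<pid>\d+)\]: (?P<message>.*)"
    )

    def parse_record(line: str):
        m = RECORD.match(line)
        return m.groupdict() if m else None

    print(parse_record(
        "Mar 17 17:24:13.348578 systemd-journald[251]: Journal stopped"))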
Mar 17 17:24:15.932086 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:24:15.932136 kernel: SELinux: policy capability open_perms=1 Mar 17 17:24:15.936673 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:24:15.936724 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:24:15.936757 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:24:15.936800 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:24:15.936830 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:24:15.936860 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:24:15.936891 kernel: audit: type=1403 audit(1742232254.075:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:24:15.936927 systemd[1]: Successfully loaded SELinux policy in 76.201ms. Mar 17 17:24:15.936980 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.828ms. Mar 17 17:24:15.937019 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:24:15.937052 systemd[1]: Detected virtualization amazon. Mar 17 17:24:15.937084 systemd[1]: Detected architecture arm64. Mar 17 17:24:15.937116 systemd[1]: Detected first boot. Mar 17 17:24:15.937148 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:24:15.937180 zram_generator::config[1418]: No configuration found. Mar 17 17:24:15.937216 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:24:15.937269 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:24:15.937308 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 17 17:24:15.937363 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:24:15.937402 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:24:15.937438 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:24:15.937468 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:24:15.937499 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:24:15.937532 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:24:15.937564 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:24:15.937597 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:24:15.937632 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:24:15.937667 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:24:15.937699 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:24:15.937730 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:24:15.937762 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:24:15.937795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 17 17:24:15.937827 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:24:15.937857 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:24:15.937886 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:24:15.937920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:24:15.937951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:24:15.937980 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:24:15.938011 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:24:15.938061 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:24:15.938099 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:24:15.938129 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:24:15.938159 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:24:15.938193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:24:15.938222 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:24:15.948521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:24:15.948555 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:24:15.948587 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:24:15.948622 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:24:15.948652 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:24:15.948685 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:24:15.948714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:24:15.948754 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:24:15.948786 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:24:15.948816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:24:15.948848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:24:15.948878 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:24:15.948907 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:24:15.948937 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:24:15.948969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:24:15.949001 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:24:15.949035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:24:15.949067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:24:15.949098 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 17:24:15.949131 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Mar 17 17:24:15.949189 kernel: fuse: init (API version 7.39) Mar 17 17:24:15.949245 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:24:15.949283 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:24:15.949313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:24:15.949351 kernel: loop: module loaded Mar 17 17:24:15.949382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:24:15.949414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:24:15.949445 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 17 17:24:15.949475 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:24:15.949507 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:24:15.949536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:24:15.949566 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:24:15.949598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:24:15.949632 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:24:15.949665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:24:15.949697 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:24:15.949726 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:24:15.949755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:24:15.949784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:24:15.949813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:24:15.949843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:24:15.950669 systemd-journald[1522]: Collecting audit messages is disabled. Mar 17 17:24:15.950730 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:24:15.950762 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:24:15.950791 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:24:15.950824 kernel: ACPI: bus type drm_connector registered Mar 17 17:24:15.950855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:24:15.950884 systemd-journald[1522]: Journal started Mar 17 17:24:15.950932 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec25de4a601b53e412ec71132e6dc7ac) is 8.0M, max 75.3M, 67.3M free. Mar 17 17:24:15.958196 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:24:15.960878 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:24:15.964621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:24:15.969073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:24:15.972155 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:24:15.975973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:24:16.004878 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:24:16.017463 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
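The modprobe@.service instances above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are confirmed by the kernel's own "fuse: init", "loop: module loaded" and drm_connector lines. On a live system the same check can be made from /proc/modules; a name missing there may simply be built into the kernel rather than absent (a sketch, Linux-only):

    def loaded_modules() -> set[str]:
        # First whitespace-separated field of each /proc/modules line is the name.
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    mods = loaded_modules()
    for name in ("fuse", "loop", "dm_mod", "efi_pstore"):
        print(name, "loaded" if name in mods else "not listed (absent or built-in)")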
Mar 17 17:24:16.029316 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:24:16.033448 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:24:16.056307 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:24:16.082833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:24:16.085383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:24:16.089529 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:24:16.091717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:24:16.100512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:24:16.118378 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec25de4a601b53e412ec71132e6dc7ac is 106.385ms for 896 entries. Mar 17 17:24:16.118378 systemd-journald[1522]: System Journal (/var/log/journal/ec25de4a601b53e412ec71132e6dc7ac) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:24:16.247970 systemd-journald[1522]: Received client request to flush runtime journal. Mar 17 17:24:16.129558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:24:16.142108 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:24:16.149618 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:24:16.156052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:24:16.168031 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:24:16.221202 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:24:16.233603 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:24:16.260815 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:24:16.264476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:24:16.298472 udevadm[1578]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 17:24:16.301694 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Mar 17 17:24:16.301724 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Mar 17 17:24:16.312776 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:24:16.327619 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:24:16.388046 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:24:16.398754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:24:16.432194 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Mar 17 17:24:16.432260 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Mar 17 17:24:16.442445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:24:17.124031 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
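systemd-journal-flush above migrates the runtime journal (8.0M, capped at 75.3M under /run) into the persistent system journal (capped at 195.6M under /var/log/journal); the 106.385ms figure is journald's own accounting of that flush for 896 entries. The resulting on-disk footprint is queryable with journalctl's --disk-usage flag (sketch; assumes systemd's journalctl is available):

    import subprocess

    usage = subprocess.run(
        ["journalctl", "--disk-usage"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(usage)  # e.g. "Archived and active journals take up 8.0M in the file system."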
Mar 17 17:24:17.137659 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:24:17.197344 systemd-udevd[1598]: Using default interface naming scheme 'v255'. Mar 17 17:24:17.309420 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:24:17.322582 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:24:17.357534 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:24:17.433498 (udev-worker)[1602]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:24:17.439087 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 17 17:24:17.529839 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:24:17.699453 systemd-networkd[1603]: lo: Link UP Mar 17 17:24:17.699992 systemd-networkd[1603]: lo: Gained carrier Mar 17 17:24:17.703199 systemd-networkd[1603]: Enumeration completed Mar 17 17:24:17.704157 systemd-networkd[1603]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:24:17.704346 systemd-networkd[1603]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:24:17.706460 systemd-networkd[1603]: eth0: Link UP Mar 17 17:24:17.706997 systemd-networkd[1603]: eth0: Gained carrier Mar 17 17:24:17.707192 systemd-networkd[1603]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:24:17.708617 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:24:17.711493 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:24:17.723407 systemd-networkd[1603]: eth0: DHCPv4 address 172.31.17.190/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 17 17:24:17.723575 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:24:17.753271 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1610) Mar 17 17:24:17.952929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:24:17.956656 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:24:17.974561 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 17 17:24:17.990492 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:24:18.028296 lvm[1727]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:24:18.065009 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:24:18.068851 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:24:18.079596 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:24:18.098968 lvm[1730]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:24:18.137866 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:24:18.140562 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
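networkd's "DHCPv4 address 172.31.17.190/20, gateway 172.31.16.1" line above pins eth0 into a /20. The Python stdlib ipaddress module reproduces the subnet arithmetic and confirms the gateway sits inside the leased network:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.17.190/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096
    print(gateway in iface.network)     # True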
Mar 17 17:24:18.143454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:24:18.143513 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:24:18.145855 systemd[1]: Reached target machines.target - Containers. Mar 17 17:24:18.149774 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 17 17:24:18.160609 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 17 17:24:18.165462 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 17 17:24:18.167852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:24:18.171845 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:24:18.188675 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 17 17:24:18.197523 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 17 17:24:18.203152 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 17 17:24:18.231279 kernel: loop0: detected capacity change from 0 to 53784 Mar 17 17:24:18.236637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:24:18.249944 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 17:24:18.251421 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 17 17:24:18.279287 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 17:24:18.320267 kernel: loop1: detected capacity change from 0 to 116808 Mar 17 17:24:18.439296 kernel: loop2: detected capacity change from 0 to 194096 Mar 17 17:24:18.551273 kernel: loop3: detected capacity change from 0 to 113536 Mar 17 17:24:18.673284 kernel: loop4: detected capacity change from 0 to 53784 Mar 17 17:24:18.697682 kernel: loop5: detected capacity change from 0 to 116808 Mar 17 17:24:18.719266 kernel: loop6: detected capacity change from 0 to 194096 Mar 17 17:24:18.752291 kernel: loop7: detected capacity change from 0 to 113536 Mar 17 17:24:18.764737 (sd-merge)[1752]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 17 17:24:18.765727 (sd-merge)[1752]: Merged extensions into '/usr'. Mar 17 17:24:18.774532 systemd[1]: Reloading requested from client PID 1738 ('systemd-sysext') (unit systemd-sysext.service)... Mar 17 17:24:18.774766 systemd[1]: Reloading... Mar 17 17:24:18.896341 zram_generator::config[1783]: No configuration found. Mar 17 17:24:19.160833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:24:19.300788 systemd[1]: Reloading finished in 525 ms. Mar 17 17:24:19.326998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 17 17:24:19.343485 systemd[1]: Starting ensure-sysext.service... Mar 17 17:24:19.349569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
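The loop0-loop7 capacity changes and the (sd-merge) lines above are systemd-sysext at work: four extension images (containerd-flatcar, docker-flatcar, the kubernetes image Ignition staged earlier, and oem-ami) are loop-mounted and overlaid onto /usr, which is why systemd then reloads its unit set. On a running host the merge can be inspected by shelling out to systemd-sysext (a sketch, assuming the tool is on PATH as it is on Flatcar):

    import subprocess

    status = subprocess.run(
        ["systemd-sysext", "status", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(status)  # hierarchy, merged extensions, and merge time per mount point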
Mar 17 17:24:19.365422 systemd[1]: Reloading requested from client PID 1837 ('systemctl') (unit ensure-sysext.service)... Mar 17 17:24:19.365619 systemd[1]: Reloading... Mar 17 17:24:19.420798 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 17:24:19.421485 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 17 17:24:19.424148 systemd-tmpfiles[1838]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 17:24:19.424961 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Mar 17 17:24:19.425308 systemd-tmpfiles[1838]: ACLs are not supported, ignoring. Mar 17 17:24:19.434001 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:24:19.434273 systemd-tmpfiles[1838]: Skipping /boot Mar 17 17:24:19.461702 systemd-tmpfiles[1838]: Detected autofs mount point /boot during canonicalization of boot. Mar 17 17:24:19.461897 systemd-tmpfiles[1838]: Skipping /boot Mar 17 17:24:19.563271 zram_generator::config[1870]: No configuration found. Mar 17 17:24:19.696418 systemd-networkd[1603]: eth0: Gained IPv6LL Mar 17 17:24:19.699265 ldconfig[1734]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 17:24:19.799154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:24:19.936182 systemd[1]: Reloading finished in 569 ms. Mar 17 17:24:19.961147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:24:19.964334 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 17 17:24:19.972458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:24:19.992698 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:24:20.007549 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 17 17:24:20.019529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 17 17:24:20.033661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 17 17:24:20.048471 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 17 17:24:20.068824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:24:20.074788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:24:20.084787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:24:20.104341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:24:20.107621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:24:20.128808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:24:20.132707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:24:20.133064 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 17 17:24:20.147901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:24:20.157729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:24:20.161721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:24:20.162163 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:24:20.168157 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:24:20.184185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:24:20.191420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:24:20.196093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:24:20.200641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:24:20.204106 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:24:20.204905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:24:20.221371 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:24:20.221788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:24:20.236703 systemd[1]: Finished ensure-sysext.service. Mar 17 17:24:20.245112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:24:20.245285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:24:20.255520 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 17 17:24:20.270053 augenrules[1976]: No rules Mar 17 17:24:20.276510 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:24:20.277021 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:24:20.310632 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:24:20.332398 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:24:20.336645 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:24:20.346419 systemd-resolved[1935]: Positive Trust Anchors: Mar 17 17:24:20.347033 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:24:20.347179 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:24:20.355055 systemd-resolved[1935]: Defaulting to hostname 'linux'. Mar 17 17:24:20.358474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:24:20.360768 systemd[1]: Reached target network.target - Network. 
Mar 17 17:24:20.362513 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:24:20.364563 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:24:20.366790 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:24:20.368899 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 17 17:24:20.371201 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:24:20.373824 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:24:20.376085 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:24:20.378472 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:24:20.380875 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:24:20.380926 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:24:20.382661 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:24:20.386315 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:24:20.391765 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:24:20.396123 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:24:20.403333 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:24:20.405491 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:24:20.407495 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:24:20.409572 systemd[1]: System is tainted: cgroupsv1 Mar 17 17:24:20.409770 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:24:20.409922 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:24:20.414403 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:24:20.426494 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 17 17:24:20.435556 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:24:20.452155 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:24:20.458559 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:24:20.460825 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:24:20.474329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:20.490381 jq[1993]: false Mar 17 17:24:20.492064 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:24:20.522078 systemd[1]: Started ntpd.service - Network Time Service. 
Mar 17 17:24:20.531973 extend-filesystems[1994]: Found loop4 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found loop5 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found loop6 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found loop7 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p1 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p2 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p3 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found usr Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p4 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p6 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p7 Mar 17 17:24:20.538002 extend-filesystems[1994]: Found nvme0n1p9 Mar 17 17:24:20.538002 extend-filesystems[1994]: Checking size of /dev/nvme0n1p9 Mar 17 17:24:20.535102 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:24:20.543701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:24:20.592838 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 17 17:24:20.611037 dbus-daemon[1991]: [system] SELinux support is enabled Mar 17 17:24:20.607504 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:24:20.623135 dbus-daemon[1991]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1603 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 17 17:24:20.625534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:24:20.648113 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:24:20.652955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:24:20.667590 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:24:20.677545 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 17 17:24:20.682095 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:24:20.691919 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:24:20.693533 extend-filesystems[1994]: Resized partition /dev/nvme0n1p9 Mar 17 17:24:20.692817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:24:20.709121 extend-filesystems[2028]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:24:20.719451 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Mar 17 17:24:20.727063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 17:24:20.733715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:24:20.756412 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:24:20.758872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:24:20.773279 jq[2025]: true Mar 17 17:24:20.776751 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
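extend-filesystems above walks the block devices, settles on nvme0n1p9, and the kernel's "resizing filesystem from 553472 to 1489915 blocks" line records the online ext4 grow (4 KiB blocks, per the later "now 1489915 (4k) blocks" message). The arithmetic behind those numbers:

    BLOCK = 4096                      # ext4 block size, reported as "(4k)"
    old_blocks, new_blocks = 553472, 1489915

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")
    # 2.11 GiB -> 5.68 GiB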
Mar 17 17:24:20.821659 ntpd[1999]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: ntpd 4.2.8p17@1.4004-o Mon Mar 17 15:34:53 UTC 2025 (1): Starting Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: ---------------------------------------------------- Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: corporation. Support and training for ntp-4 are Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: available at https://www.nwtime.org/support Mar 17 17:24:20.823772 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: ---------------------------------------------------- Mar 17 17:24:20.821717 ntpd[1999]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 17 17:24:20.821738 ntpd[1999]: ---------------------------------------------------- Mar 17 17:24:20.821758 ntpd[1999]: ntp-4 is maintained by Network Time Foundation, Mar 17 17:24:20.821777 ntpd[1999]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 17 17:24:20.821796 ntpd[1999]: corporation. Support and training for ntp-4 are Mar 17 17:24:20.821814 ntpd[1999]: available at https://www.nwtime.org/support Mar 17 17:24:20.821833 ntpd[1999]: ---------------------------------------------------- Mar 17 17:24:20.832507 ntpd[1999]: proto: precision = 0.108 usec (-23) Mar 17 17:24:20.835494 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: proto: precision = 0.108 usec (-23) Mar 17 17:24:20.835494 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: basedate set to 2025-03-05 Mar 17 17:24:20.835494 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: gps base set to 2025-03-09 (week 2357) Mar 17 17:24:20.832934 ntpd[1999]: basedate set to 2025-03-05 Mar 17 17:24:20.832958 ntpd[1999]: gps base set to 2025-03-09 (week 2357) Mar 17 17:24:20.841287 ntpd[1999]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:24:20.847257 jq[2042]: true Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen and drop on 0 v6wildcard [::]:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen normally on 3 eth0 172.31.17.190:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen normally on 4 lo [::1]:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listen normally on 5 eth0 [fe80::4f9:c4ff:fe0d:1185%2]:123 Mar 17 17:24:20.847641 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: Listening on routing socket on fd #22 for interface updates Mar 17 17:24:20.841405 ntpd[1999]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 17 17:24:20.841667 ntpd[1999]: Listen normally on 2 lo 127.0.0.1:123 Mar 17 17:24:20.841727 ntpd[1999]: Listen normally on 3 eth0 172.31.17.190:123 Mar 17 17:24:20.841791 ntpd[1999]: Listen normally on 4 lo [::1]:123 Mar 17 17:24:20.841863 ntpd[1999]: Listen normally on 5 eth0 [fe80::4f9:c4ff:fe0d:1185%2]:123 Mar 17 17:24:20.841924 ntpd[1999]: Listening on routing socket on fd #22 for interface updates Mar 17 17:24:20.866458 (ntainerd)[2048]: containerd.service: Referenced but unset 
environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:24:20.879969 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:20.882728 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:20.882728 ntpd[1999]: 17 Mar 17:24:20 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:20.880028 ntpd[1999]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 17 17:24:20.914355 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:24:20.900572 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 17:24:20.918885 tar[2030]: linux-arm64/helm Mar 17 17:24:20.914411 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:24:20.936704 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 17 17:24:20.939412 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:24:20.939448 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 17 17:24:20.952146 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Mar 17 17:24:20.966280 update_engine[2021]: I20250317 17:24:20.964417 2021 main.cc:92] Flatcar Update Engine starting Mar 17 17:24:20.978575 update_engine[2021]: I20250317 17:24:20.974498 2021 update_check_scheduler.cc:74] Next update check in 9m59s Mar 17 17:24:20.970363 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:24:20.975417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:24:20.987580 extend-filesystems[2028]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 17 17:24:20.987580 extend-filesystems[2028]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:24:20.987580 extend-filesystems[2028]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Mar 17 17:24:21.014532 extend-filesystems[1994]: Resized filesystem in /dev/nvme0n1p9 Mar 17 17:24:21.002657 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:24:21.021346 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:24:21.039960 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:24:21.043219 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 17 17:24:21.070479 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 17 17:24:21.180900 bash[2095]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:24:21.173688 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:24:21.205589 systemd[1]: Starting sshkeys.service... Mar 17 17:24:21.267682 systemd-logind[2020]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 17:24:21.267766 systemd-logind[2020]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 17 17:24:21.269948 systemd-logind[2020]: New seat seat0. Mar 17 17:24:21.272976 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 17 17:24:21.280413 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 17 17:24:21.298267 coreos-metadata[1990]: Mar 17 17:24:21.296 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.316 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.318 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.318 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.321 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.323 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.323 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.331 INFO Fetch failed with 404: resource not found Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.331 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.336 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.336 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.337 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.339 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.339 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.341 INFO Fetch successful Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.341 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 17 17:24:21.359817 coreos-metadata[1990]: Mar 17 17:24:21.345 INFO Fetch successful Mar 17 17:24:21.353675 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 17 17:24:21.378281 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2097) Mar 17 17:24:21.417181 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 17 17:24:21.420301 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:24:21.454972 amazon-ssm-agent[2082]: Initializing new seelog logger Mar 17 17:24:21.459258 amazon-ssm-agent[2082]: New Seelog Logger Creation Complete Mar 17 17:24:21.459258 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
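coreos-metadata above fetches a series of IMDS paths with the session token; the lone "Fetch failed with 404: resource not found" on .../meta-data/ipv6 is treated as "attribute absent" rather than an error, and the run still finishes successfully. A self-contained sketch of that 404-tolerant pattern (illustrative, not the agent's actual implementation):

    import urllib.error
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get_optional(path: str, token: str):
        req = urllib.request.Request(
            IMDS + path, headers={"X-aws-ec2-metadata-token": token})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as e:
            if e.code == 404:  # e.g. no IPv6 on this instance: absent, not fatal
                return None
            raise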
Mar 17 17:24:21.459258 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 processing appconfig overrides Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 processing appconfig overrides Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 processing appconfig overrides Mar 17 17:24:21.460966 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO Proxy environment variables: Mar 17 17:24:21.468536 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.469159 amazon-ssm-agent[2082]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 17 17:24:21.469448 amazon-ssm-agent[2082]: 2025/03/17 17:24:21 processing appconfig overrides Mar 17 17:24:21.543279 locksmithd[2074]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:24:21.576173 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO https_proxy: Mar 17 17:24:21.630697 coreos-metadata[2108]: Mar 17 17:24:21.630 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 17 17:24:21.634261 coreos-metadata[2108]: Mar 17 17:24:21.632 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 17 17:24:21.635516 coreos-metadata[2108]: Mar 17 17:24:21.635 INFO Fetch successful Mar 17 17:24:21.635795 coreos-metadata[2108]: Mar 17 17:24:21.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 17:24:21.641719 coreos-metadata[2108]: Mar 17 17:24:21.641 INFO Fetch successful Mar 17 17:24:21.649036 unknown[2108]: wrote ssh authorized keys file for user: core Mar 17 17:24:21.682152 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO http_proxy: Mar 17 17:24:21.791131 update-ssh-keys[2193]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:24:21.796898 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 17 17:24:21.806541 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO no_proxy: Mar 17 17:24:21.819000 systemd[1]: Finished sshkeys.service. Mar 17 17:24:21.851835 containerd[2048]: time="2025-03-17T17:24:21.847154870Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:24:21.922521 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO Checking if agent identity type OnPrem can be assumed Mar 17 17:24:22.025271 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO Checking if agent identity type EC2 can be assumed Mar 17 17:24:22.035290 containerd[2048]: time="2025-03-17T17:24:22.031438847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.046544879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.046636379Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.046674167Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.046976471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047008247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047128259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047155055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047535779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047564807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047595311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048273 containerd[2048]: time="2025-03-17T17:24:22.047638511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048814 containerd[2048]: time="2025-03-17T17:24:22.047801015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.048814 containerd[2048]: time="2025-03-17T17:24:22.048184127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:24:22.055801 containerd[2048]: time="2025-03-17T17:24:22.055700915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:24:22.055929 containerd[2048]: time="2025-03-17T17:24:22.055790279Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:24:22.056209 containerd[2048]: time="2025-03-17T17:24:22.056139767Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 17:24:22.056419 containerd[2048]: time="2025-03-17T17:24:22.056314271Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065098415Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065220851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065275787Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065312339Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065362907Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.065608619Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066182027Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066416291Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066449939Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066482519Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066513467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066544535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066573683Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068254 containerd[2048]: time="2025-03-17T17:24:22.066623303Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066657143Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066690431Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066719615Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066747395Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066787163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066821219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066864431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066896159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066924875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066957851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.066986051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.067015055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.067044815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.068901 containerd[2048]: time="2025-03-17T17:24:22.067080623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.069515 containerd[2048]: time="2025-03-17T17:24:22.067121183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.069515 containerd[2048]: time="2025-03-17T17:24:22.067150835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.069515 containerd[2048]: time="2025-03-17T17:24:22.067179119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.069515 containerd[2048]: time="2025-03-17T17:24:22.067210547Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.072442343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.072594527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.072626483Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074471111Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074544935Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074574263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074645099Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074670203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074724179Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074749535Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:24:22.075500 containerd[2048]: time="2025-03-17T17:24:22.074796743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.075698831Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.081470771Z" level=info msg="Connect containerd service" Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.081590135Z" level=info msg="using legacy CRI server" Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.081610883Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.081993671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:24:22.092463 containerd[2048]: time="2025-03-17T17:24:22.087776795Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.092955251Z" level=info msg="Start subscribing containerd event" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.098366327Z" level=info msg="Start recovering state" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.098633579Z" level=info msg="Start event monitor" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.098670611Z" level=info msg="Start snapshots syncer" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.099315791Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.099345035Z" level=info msg="Start streaming server" Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.094041851Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.100579943Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:24:22.104255 containerd[2048]: time="2025-03-17T17:24:22.100754003Z" level=info msg="containerd successfully booted in 0.255222s" Mar 17 17:24:22.100918 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:24:22.125282 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO Agent will take identity from EC2 Mar 17 17:24:22.227266 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:24:22.248030 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 17 17:24:22.248814 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 17 17:24:22.255403 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2067 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 17 17:24:22.271312 systemd[1]: Starting polkit.service - Authorization Manager... 
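containerd's CRI plugin logs an error above because no network config was found in /etc/cni/net.d at this point; pod networking stays uninitialized until something installs a CNI config (on a kubeadm-style node that is usually the network add-on, applied after the node joins). Purely to illustrate the file shape the loader expects, a hypothetical minimal conflist (the name and subnet are invented):

```python
import json
import pathlib

# Hypothetical minimal CNI conflist of the shape containerd's CRI plugin
# loads from /etc/cni/net.d; written to the current directory so the
# sketch stays harmless.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.244.0.0/24"},
        }
    ],
}
pathlib.Path("10-example.conflist").write_text(json.dumps(conflist, indent=2))
```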
Mar 17 17:24:22.308489 polkitd[2244]: Started polkitd version 121 Mar 17 17:24:22.325152 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:24:22.328410 polkitd[2244]: Loading rules from directory /etc/polkit-1/rules.d Mar 17 17:24:22.328539 polkitd[2244]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 17 17:24:22.330928 polkitd[2244]: Finished loading, compiling and executing 2 rules Mar 17 17:24:22.333863 systemd[1]: Started polkit.service - Authorization Manager. Mar 17 17:24:22.333591 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 17 17:24:22.336080 polkitd[2244]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 17 17:24:22.377670 systemd-resolved[1935]: System hostname changed to 'ip-172-31-17-190'. Mar 17 17:24:22.377678 systemd-hostnamed[2067]: Hostname set to (transient) Mar 17 17:24:22.427911 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 17 17:24:22.529321 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 17 17:24:22.630388 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 17 17:24:22.731720 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] Starting Core Agent Mar 17 17:24:22.830609 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 17 17:24:22.931211 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [Registrar] Starting registrar module Mar 17 17:24:22.941782 sshd_keygen[2047]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:24:23.018692 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:24:23.026178 tar[2030]: linux-arm64/LICENSE Mar 17 17:24:23.026817 tar[2030]: linux-arm64/README.md Mar 17 17:24:23.034595 amazon-ssm-agent[2082]: 2025-03-17 17:24:21 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 17 17:24:23.032830 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:24:23.068000 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:24:23.068944 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:24:23.077153 amazon-ssm-agent[2082]: 2025-03-17 17:24:23 INFO [EC2Identity] EC2 registration was successful. Mar 17 17:24:23.077403 amazon-ssm-agent[2082]: 2025-03-17 17:24:23 INFO [CredentialRefresher] credentialRefresher has started Mar 17 17:24:23.077529 amazon-ssm-agent[2082]: 2025-03-17 17:24:23 INFO [CredentialRefresher] Starting credentials refresher loop Mar 17 17:24:23.077627 amazon-ssm-agent[2082]: 2025-03-17 17:24:23 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 17 17:24:23.084675 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:24:23.089035 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:24:23.117197 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:24:23.129894 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:24:23.133820 amazon-ssm-agent[2082]: 2025-03-17 17:24:23 INFO [CredentialRefresher] Next credential rotation will be in 31.733313522333333 minutes Mar 17 17:24:23.140806 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 17 17:24:23.143297 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:24:23.683623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:24:23.686829 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:24:23.691055 systemd[1]: Startup finished in 10.336s (kernel) + 9.689s (userspace) = 20.025s. Mar 17 17:24:23.707428 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:24:24.117861 amazon-ssm-agent[2082]: 2025-03-17 17:24:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 17 17:24:24.220303 amazon-ssm-agent[2082]: 2025-03-17 17:24:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2295) started Mar 17 17:24:24.320612 amazon-ssm-agent[2082]: 2025-03-17 17:24:24 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 17 17:24:24.988068 kubelet[2285]: E0317 17:24:24.987977 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:24:24.992576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:24:24.993033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:24:27.610055 systemd-resolved[1935]: Clock change detected. Flushing caches. Mar 17 17:24:27.946873 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:24:27.957249 systemd[1]: Started sshd@0-172.31.17.190:22-139.178.68.195:38336.service - OpenSSH per-connection server daemon (139.178.68.195:38336). Mar 17 17:24:28.173016 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 38336 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:28.175828 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:28.191190 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:24:28.202216 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:24:28.206831 systemd-logind[2020]: New session 1 of user core. Mar 17 17:24:28.229262 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:24:28.243827 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:24:28.250512 (systemd)[2315]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:24:28.463816 systemd[2315]: Queued start job for default target default.target. Mar 17 17:24:28.465115 systemd[2315]: Created slice app.slice - User Application Slice. Mar 17 17:24:28.465162 systemd[2315]: Reached target paths.target - Paths. Mar 17 17:24:28.465193 systemd[2315]: Reached target timers.target - Timers. Mar 17 17:24:28.472992 systemd[2315]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:24:28.491092 systemd[2315]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:24:28.491231 systemd[2315]: Reached target sockets.target - Sockets. 
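The kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; on a node like this one the file is normally written by kubeadm when the node is initialized or joined, so the crash loop is expected until then. For reference, the smallest file that satisfies the loader is just the config header (a sketch; real clusters populate many more fields):

```python
import pathlib

# Minimal KubeletConfiguration that would satisfy the "failed to load
# Kubelet config file" error above. kubeadm normally writes a much
# fuller version to /var/lib/kubelet/config.yaml.
MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

# Written to the current directory for illustration only.
pathlib.Path("config.yaml").write_text(MINIMAL)
```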
Mar 17 17:24:28.491264 systemd[2315]: Reached target basic.target - Basic System. Mar 17 17:24:28.491365 systemd[2315]: Reached target default.target - Main User Target. Mar 17 17:24:28.491431 systemd[2315]: Startup finished in 229ms. Mar 17 17:24:28.491932 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:24:28.501859 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:24:28.651455 systemd[1]: Started sshd@1-172.31.17.190:22-139.178.68.195:38344.service - OpenSSH per-connection server daemon (139.178.68.195:38344). Mar 17 17:24:28.851361 sshd[2327]: Accepted publickey for core from 139.178.68.195 port 38344 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:28.853857 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:28.863156 systemd-logind[2020]: New session 2 of user core. Mar 17 17:24:28.870409 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:24:29.000990 sshd[2330]: Connection closed by 139.178.68.195 port 38344 Mar 17 17:24:29.001868 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Mar 17 17:24:29.006431 systemd-logind[2020]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:24:29.010172 systemd[1]: sshd@1-172.31.17.190:22-139.178.68.195:38344.service: Deactivated successfully. Mar 17 17:24:29.015353 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:24:29.017327 systemd-logind[2020]: Removed session 2. Mar 17 17:24:29.031357 systemd[1]: Started sshd@2-172.31.17.190:22-139.178.68.195:38360.service - OpenSSH per-connection server daemon (139.178.68.195:38360). Mar 17 17:24:29.214504 sshd[2335]: Accepted publickey for core from 139.178.68.195 port 38360 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:29.216869 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:29.224332 systemd-logind[2020]: New session 3 of user core. Mar 17 17:24:29.232350 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:24:29.352846 sshd[2338]: Connection closed by 139.178.68.195 port 38360 Mar 17 17:24:29.353739 sshd-session[2335]: pam_unix(sshd:session): session closed for user core Mar 17 17:24:29.361389 systemd-logind[2020]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:24:29.362667 systemd[1]: sshd@2-172.31.17.190:22-139.178.68.195:38360.service: Deactivated successfully. Mar 17 17:24:29.368495 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:24:29.370613 systemd-logind[2020]: Removed session 3. Mar 17 17:24:29.387311 systemd[1]: Started sshd@3-172.31.17.190:22-139.178.68.195:38370.service - OpenSSH per-connection server daemon (139.178.68.195:38370). Mar 17 17:24:29.572871 sshd[2343]: Accepted publickey for core from 139.178.68.195 port 38370 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:29.575564 sshd-session[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:29.584054 systemd-logind[2020]: New session 4 of user core. Mar 17 17:24:29.593284 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:24:29.723309 sshd[2346]: Connection closed by 139.178.68.195 port 38370 Mar 17 17:24:29.724180 sshd-session[2343]: pam_unix(sshd:session): session closed for user core Mar 17 17:24:29.731003 systemd[1]: sshd@3-172.31.17.190:22-139.178.68.195:38370.service: Deactivated successfully. 
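Each "Accepted publickey" entry above logs the client key's SHA256 fingerprint (the d/UruLZo/... string). OpenSSH derives that as the base64-encoded SHA-256 digest of the raw public-key blob, with the trailing padding stripped; a small sketch that reproduces the format from an authorized_keys-style line:

```python
import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    """Reproduce sshd's "SHA256:..." fingerprint for a public-key line."""
    # Field 2 of "ssh-ed25519 AAAA... comment" is the base64 key blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Usage: pass a line from ~/.ssh/id_ed25519.pub or authorized_keys and
# compare the result with the fingerprint sshd logs at login.
```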
Mar 17 17:24:29.735487 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:24:29.735886 systemd-logind[2020]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:24:29.739248 systemd-logind[2020]: Removed session 4. Mar 17 17:24:29.758254 systemd[1]: Started sshd@4-172.31.17.190:22-139.178.68.195:38384.service - OpenSSH per-connection server daemon (139.178.68.195:38384). Mar 17 17:24:29.941456 sshd[2351]: Accepted publickey for core from 139.178.68.195 port 38384 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:29.943601 sshd-session[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:29.951639 systemd-logind[2020]: New session 5 of user core. Mar 17 17:24:29.957408 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:24:30.077123 sudo[2355]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:24:30.077721 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:24:30.093990 sudo[2355]: pam_unix(sudo:session): session closed for user root Mar 17 17:24:30.117763 sshd[2354]: Connection closed by 139.178.68.195 port 38384 Mar 17 17:24:30.120197 sshd-session[2351]: pam_unix(sshd:session): session closed for user core Mar 17 17:24:30.128058 systemd-logind[2020]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:24:30.128586 systemd[1]: sshd@4-172.31.17.190:22-139.178.68.195:38384.service: Deactivated successfully. Mar 17 17:24:30.133269 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:24:30.136130 systemd-logind[2020]: Removed session 5. Mar 17 17:24:30.151250 systemd[1]: Started sshd@5-172.31.17.190:22-139.178.68.195:38398.service - OpenSSH per-connection server daemon (139.178.68.195:38398). Mar 17 17:24:30.334289 sshd[2360]: Accepted publickey for core from 139.178.68.195 port 38398 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:30.337065 sshd-session[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:30.345509 systemd-logind[2020]: New session 6 of user core. Mar 17 17:24:30.357322 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:24:30.464210 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:24:30.465363 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:24:30.471914 sudo[2365]: pam_unix(sudo:session): session closed for user root Mar 17 17:24:30.482158 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:24:30.482865 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:24:30.504382 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:24:30.568324 augenrules[2387]: No rules Mar 17 17:24:30.571492 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:24:30.572066 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:24:30.576044 sudo[2364]: pam_unix(sudo:session): session closed for user root Mar 17 17:24:30.599755 sshd[2363]: Connection closed by 139.178.68.195 port 38398 Mar 17 17:24:30.601861 sshd-session[2360]: pam_unix(sshd:session): session closed for user core Mar 17 17:24:30.608475 systemd-logind[2020]: Session 6 logged out. Waiting for processes to exit. 
Mar 17 17:24:30.610085 systemd[1]: sshd@5-172.31.17.190:22-139.178.68.195:38398.service: Deactivated successfully. Mar 17 17:24:30.614978 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:24:30.616862 systemd-logind[2020]: Removed session 6. Mar 17 17:24:30.632270 systemd[1]: Started sshd@6-172.31.17.190:22-139.178.68.195:38408.service - OpenSSH per-connection server daemon (139.178.68.195:38408). Mar 17 17:24:30.824304 sshd[2396]: Accepted publickey for core from 139.178.68.195 port 38408 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:24:30.826639 sshd-session[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:24:30.835204 systemd-logind[2020]: New session 7 of user core. Mar 17 17:24:30.842313 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:24:30.949030 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:24:30.949680 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:24:31.652535 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:24:31.652557 (dockerd)[2418]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:24:32.123942 dockerd[2418]: time="2025-03-17T17:24:32.123712218Z" level=info msg="Starting up" Mar 17 17:24:32.606938 dockerd[2418]: time="2025-03-17T17:24:32.606370508Z" level=info msg="Loading containers: start." Mar 17 17:24:32.916810 kernel: Initializing XFRM netlink socket Mar 17 17:24:32.964045 (udev-worker)[2440]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:24:33.064181 systemd-networkd[1603]: docker0: Link UP Mar 17 17:24:33.104145 dockerd[2418]: time="2025-03-17T17:24:33.104069647Z" level=info msg="Loading containers: done." Mar 17 17:24:33.129112 dockerd[2418]: time="2025-03-17T17:24:33.129047191Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:24:33.129776 dockerd[2418]: time="2025-03-17T17:24:33.129187771Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:24:33.129776 dockerd[2418]: time="2025-03-17T17:24:33.129370531Z" level=info msg="Daemon has completed initialization" Mar 17 17:24:33.188048 dockerd[2418]: time="2025-03-17T17:24:33.187083127Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:24:33.187440 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:24:34.566487 containerd[2048]: time="2025-03-17T17:24:34.566410990Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:24:35.030859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:24:35.039143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:35.176101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361726715.mount: Deactivated successfully. Mar 17 17:24:35.418057 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
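dockerd's startup above ends with systemd-networkd reporting the docker0 bridge coming up. Link state can be verified the same way the kernel exposes it, straight from sysfs (a Linux-only sketch):

```python
import pathlib

def link_state(ifname: str) -> str:
    # /sys/class/net/<if>/operstate is the kernel's RFC 2863 operational
    # state for the interface ("up", "down", "unknown", ...).
    return pathlib.Path(f"/sys/class/net/{ifname}/operstate").read_text().strip()

print("docker0:", link_state("docker0"))
```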
Mar 17 17:24:35.443486 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:24:35.598160 kubelet[2638]: E0317 17:24:35.598079 2638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:24:35.609901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:24:35.610576 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:24:36.745678 containerd[2048]: time="2025-03-17T17:24:36.745617349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:36.747705 containerd[2048]: time="2025-03-17T17:24:36.747608857Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793524" Mar 17 17:24:36.748444 containerd[2048]: time="2025-03-17T17:24:36.748359037Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:36.754244 containerd[2048]: time="2025-03-17T17:24:36.754190557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:36.756762 containerd[2048]: time="2025-03-17T17:24:36.756509509Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.190026795s" Mar 17 17:24:36.756762 containerd[2048]: time="2025-03-17T17:24:36.756567505Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 17:24:36.797434 containerd[2048]: time="2025-03-17T17:24:36.797301685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:24:38.390521 containerd[2048]: time="2025-03-17T17:24:38.390431509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:38.392587 containerd[2048]: time="2025-03-17T17:24:38.392496301Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861167" Mar 17 17:24:38.393840 containerd[2048]: time="2025-03-17T17:24:38.393729121Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:38.399977 containerd[2048]: time="2025-03-17T17:24:38.399892669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 
17:24:38.402264 containerd[2048]: time="2025-03-17T17:24:38.402195205Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.604537408s" Mar 17 17:24:38.402555 containerd[2048]: time="2025-03-17T17:24:38.402412477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 17:24:38.446035 containerd[2048]: time="2025-03-17T17:24:38.445968973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:24:39.573125 containerd[2048]: time="2025-03-17T17:24:39.573061707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:39.575481 containerd[2048]: time="2025-03-17T17:24:39.575384943Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264636" Mar 17 17:24:39.576554 containerd[2048]: time="2025-03-17T17:24:39.576470679Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:39.582073 containerd[2048]: time="2025-03-17T17:24:39.581991723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:39.584725 containerd[2048]: time="2025-03-17T17:24:39.584302815Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.13824641s" Mar 17 17:24:39.584725 containerd[2048]: time="2025-03-17T17:24:39.584357883Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 17:24:39.625665 containerd[2048]: time="2025-03-17T17:24:39.625601715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:24:40.849078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount164806245.mount: Deactivated successfully. 
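The Pulled entries pair each image's compressed size with a wall-clock duration (kube-apiserver: 29790324 bytes in 2.190026795s; kube-controller-manager: 28301963 bytes in 1.604537408s), so effective pull throughput falls out directly (a sketch using the numbers from the log):

```python
# Effective pull throughput from the "Pulled image ... size ... in ..."
# entries above (compressed size / reported duration).
pulls = {
    "kube-apiserver:v1.30.11": (29_790_324, 2.190026795),
    "kube-controller-manager:v1.30.11": (28_301_963, 1.604537408),
}
for image, (size, seconds) in pulls.items():
    print(f"{image}: {size / seconds / 1e6:.1f} MB/s")
# roughly 13.6 MB/s and 17.6 MB/s
```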
Mar 17 17:24:41.366409 containerd[2048]: time="2025-03-17T17:24:41.366340636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:41.367856 containerd[2048]: time="2025-03-17T17:24:41.367752880Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771848" Mar 17 17:24:41.369726 containerd[2048]: time="2025-03-17T17:24:41.369652300Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:41.375002 containerd[2048]: time="2025-03-17T17:24:41.374916592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:41.375970 containerd[2048]: time="2025-03-17T17:24:41.375742156Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.750077573s" Mar 17 17:24:41.375970 containerd[2048]: time="2025-03-17T17:24:41.375811108Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 17:24:41.418601 containerd[2048]: time="2025-03-17T17:24:41.418302352Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:24:41.942390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282801536.mount: Deactivated successfully. 
Mar 17 17:24:43.030942 containerd[2048]: time="2025-03-17T17:24:43.030865804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.033197 containerd[2048]: time="2025-03-17T17:24:43.033118744Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Mar 17 17:24:43.034644 containerd[2048]: time="2025-03-17T17:24:43.034561888Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.041974 containerd[2048]: time="2025-03-17T17:24:43.041886280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.044467 containerd[2048]: time="2025-03-17T17:24:43.044240776Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.625878004s" Mar 17 17:24:43.044467 containerd[2048]: time="2025-03-17T17:24:43.044297188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 17:24:43.086938 containerd[2048]: time="2025-03-17T17:24:43.086621608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:24:43.612319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150356929.mount: Deactivated successfully. 
Mar 17 17:24:43.622375 containerd[2048]: time="2025-03-17T17:24:43.621749599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.623605 containerd[2048]: time="2025-03-17T17:24:43.623518483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Mar 17 17:24:43.625037 containerd[2048]: time="2025-03-17T17:24:43.624961291Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.629210 containerd[2048]: time="2025-03-17T17:24:43.629129563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:43.631265 containerd[2048]: time="2025-03-17T17:24:43.630933139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 544.254651ms" Mar 17 17:24:43.631265 containerd[2048]: time="2025-03-17T17:24:43.630980659Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 17:24:43.671103 containerd[2048]: time="2025-03-17T17:24:43.670773499Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:24:44.241037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232434504.mount: Deactivated successfully. Mar 17 17:24:45.658929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:24:45.675119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:46.632095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:24:46.647508 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:24:46.767661 kubelet[2823]: E0317 17:24:46.767558 2823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:24:46.771868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:24:46.772232 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:24:47.864163 containerd[2048]: time="2025-03-17T17:24:47.864074208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:47.866342 containerd[2048]: time="2025-03-17T17:24:47.866258124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Mar 17 17:24:47.867222 containerd[2048]: time="2025-03-17T17:24:47.867141444Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:47.873102 containerd[2048]: time="2025-03-17T17:24:47.873020808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:24:47.875936 containerd[2048]: time="2025-03-17T17:24:47.875578428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.204722489s" Mar 17 17:24:47.875936 containerd[2048]: time="2025-03-17T17:24:47.875637012Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 17:24:52.201305 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 17 17:24:56.908906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:24:56.921612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:56.971599 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:24:56.971966 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:24:56.972894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:24:56.990199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:57.020931 systemd[1]: Reloading requested from client PID 2917 ('systemctl') (unit session-7.scope)... Mar 17 17:24:57.021137 systemd[1]: Reloading... Mar 17 17:24:57.256855 zram_generator::config[2962]: No configuration found. Mar 17 17:24:57.506210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:24:57.661685 systemd[1]: Reloading finished in 639 ms. Mar 17 17:24:57.749143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:24:57.754532 (kubelet)[3023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:24:57.755383 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:57.760621 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:24:57.761259 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:24:57.771649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:24:58.056107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
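The repeated kubelet failures above are systemd's Restart= logic re-launching the unit after each exit: the journal shows restart counters 1 through 3 spaced ten to eleven seconds apart. The spacing can be read straight off the "Scheduled restart job" timestamps (a sketch; the actual RestartSec value lives in kubelet.service and is not shown in this log):

```python
from datetime import datetime

# Timestamps of the three "Scheduled restart job" entries in this log.
starts = ["17:24:35.030859", "17:24:45.658929", "17:24:56.908906"]
ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
for a, b in zip(ts, ts[1:]):
    print(f"{(b - a).total_seconds():.1f}s between restart attempts")
# ~10.6s and ~11.2s: a RestartSec on the order of 10s plus the time
# each attempt takes to fail.
```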
Mar 17 17:24:58.070562 (kubelet)[3038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:24:58.156335 kubelet[3038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:24:58.156953 kubelet[3038]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:24:58.157047 kubelet[3038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:24:58.157431 kubelet[3038]: I0317 17:24:58.157367 3038 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:24:58.658171 kubelet[3038]: I0317 17:24:58.658124 3038 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:24:58.659815 kubelet[3038]: I0317 17:24:58.658357 3038 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:24:58.659815 kubelet[3038]: I0317 17:24:58.658741 3038 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:24:58.683900 kubelet[3038]: E0317 17:24:58.683848 3038 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.17.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.684612 kubelet[3038]: I0317 17:24:58.684566 3038 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:24:58.700140 kubelet[3038]: I0317 17:24:58.700091 3038 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:24:58.702972 kubelet[3038]: I0317 17:24:58.702822 3038 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:24:58.704204 kubelet[3038]: I0317 17:24:58.703906 3038 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-190","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:24:58.704457 kubelet[3038]: I0317 17:24:58.704438 3038 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:24:58.704571 kubelet[3038]: I0317 17:24:58.704554 3038 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:24:58.704913 kubelet[3038]: I0317 17:24:58.704893 3038 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:24:58.706507 kubelet[3038]: I0317 17:24:58.706481 3038 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:24:58.707534 kubelet[3038]: I0317 17:24:58.706613 3038 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:24:58.707534 kubelet[3038]: I0317 17:24:58.706715 3038 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:24:58.707534 kubelet[3038]: I0317 17:24:58.706753 3038 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:24:58.708103 kubelet[3038]: I0317 17:24:58.708039 3038 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:24:58.709832 kubelet[3038]: I0317 17:24:58.708400 3038 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:24:58.709832 kubelet[3038]: W0317 17:24:58.708538 3038 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
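The container-manager dump above carries the kubelet's default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inodesFree < 5% on both filesystems, all with the LessThan operator. A sketch of how those signal/operator/value triples evaluate, against invented observations (the kubelet gets real ones from cadvisor):

```python
# Hard-eviction thresholds from the HardEvictionThresholds dump above,
# checked against hypothetical observed values (the observations are
# invented for illustration).
thresholds = {
    "memory.available": 100 * 1024**2,  # 100Mi, an absolute quantity
    "nodefs.available": 0.10,           # the rest are fractions of capacity
    "imagefs.available": 0.15,
    "nodefs.inodesFree": 0.05,
    "imagefs.inodesFree": 0.05,
}
observed = {
    "memory.available": 512 * 1024**2,
    "nodefs.available": 0.08,           # below the 10% threshold
    "imagefs.available": 0.40,
    "nodefs.inodesFree": 0.30,
    "imagefs.inodesFree": 0.30,
}
for signal, limit in thresholds.items():
    # Every signal in the dump uses Operator:LessThan, so eviction fires
    # when the observed value drops below the threshold.
    if observed[signal] < limit:
        print(f"would evict: {signal} {observed[signal]} < {limit}")
```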
Mar 17 17:24:58.709832 kubelet[3038]: I0317 17:24:58.709604 3038 server.go:1264] "Started kubelet" Mar 17 17:24:58.710101 kubelet[3038]: W0317 17:24:58.709846 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.710101 kubelet[3038]: E0317 17:24:58.709922 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.710101 kubelet[3038]: W0317 17:24:58.710050 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-190&limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.710253 kubelet[3038]: E0317 17:24:58.710109 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-190&limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.717983 kubelet[3038]: I0317 17:24:58.717917 3038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:24:58.722964 kubelet[3038]: I0317 17:24:58.722621 3038 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:24:58.724433 kubelet[3038]: I0317 17:24:58.724368 3038 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:24:58.726150 kubelet[3038]: I0317 17:24:58.726049 3038 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:24:58.726893 kubelet[3038]: I0317 17:24:58.726447 3038 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:24:58.729732 kubelet[3038]: E0317 17:24:58.729049 3038 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.190:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-190.182da70b80cbce2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-190,UID:ip-172-31-17-190,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-190,},FirstTimestamp:2025-03-17 17:24:58.70956907 +0000 UTC m=+0.631828432,LastTimestamp:2025-03-17 17:24:58.70956907 +0000 UTC m=+0.631828432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-190,}" Mar 17 17:24:58.729732 kubelet[3038]: I0317 17:24:58.729377 3038 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:24:58.729732 kubelet[3038]: I0317 17:24:58.729533 3038 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:24:58.729732 kubelet[3038]: I0317 17:24:58.729641 3038 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:24:58.731314 kubelet[3038]: W0317 17:24:58.730744 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://172.31.17.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.731314 kubelet[3038]: E0317 17:24:58.730870 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.734154 kubelet[3038]: E0317 17:24:58.734083 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": dial tcp 172.31.17.190:6443: connect: connection refused" interval="200ms" Mar 17 17:24:58.735667 kubelet[3038]: I0317 17:24:58.734273 3038 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:24:58.735667 kubelet[3038]: I0317 17:24:58.735018 3038 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:24:58.735667 kubelet[3038]: E0317 17:24:58.734535 3038 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:24:58.737089 kubelet[3038]: I0317 17:24:58.737038 3038 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:24:58.760156 kubelet[3038]: I0317 17:24:58.760092 3038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:24:58.766251 kubelet[3038]: I0317 17:24:58.766195 3038 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:24:58.766251 kubelet[3038]: I0317 17:24:58.766262 3038 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:24:58.766251 kubelet[3038]: I0317 17:24:58.766292 3038 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:24:58.766251 kubelet[3038]: E0317 17:24:58.766354 3038 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:24:58.770492 kubelet[3038]: W0317 17:24:58.770412 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.770655 kubelet[3038]: E0317 17:24:58.770505 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:58.802010 kubelet[3038]: I0317 17:24:58.801927 3038 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:24:58.802010 kubelet[3038]: I0317 17:24:58.801961 3038 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:24:58.802010 kubelet[3038]: I0317 17:24:58.801995 3038 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:24:58.807220 kubelet[3038]: I0317 17:24:58.807173 3038 policy_none.go:49] "None policy: Start" Mar 17 17:24:58.808988 kubelet[3038]: I0317 17:24:58.808483 3038 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:24:58.808988 kubelet[3038]: I0317 17:24:58.808526 3038 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:24:58.821623 kubelet[3038]: I0317 17:24:58.821561 3038 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:24:58.821998 kubelet[3038]: I0317 17:24:58.821921 3038 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:24:58.822163 kubelet[3038]: I0317 17:24:58.822125 3038 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:24:58.833911 kubelet[3038]: I0317 17:24:58.833867 3038 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:24:58.834651 kubelet[3038]: E0317 17:24:58.834544 3038 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.190:6443/api/v1/nodes\": dial tcp 172.31.17.190:6443: connect: connection refused" node="ip-172-31-17-190" Mar 17 17:24:58.834913 kubelet[3038]: E0317 17:24:58.834880 3038 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-190\" not found" Mar 17 17:24:58.867483 kubelet[3038]: I0317 17:24:58.867416 3038 topology_manager.go:215] "Topology Admit Handler" podUID="a58cc655e509fa594d32023e80991b1e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-190" Mar 17 17:24:58.869852 kubelet[3038]: I0317 17:24:58.869469 3038 topology_manager.go:215] "Topology Admit Handler" podUID="70f115e38adef10c7da04f8c4362ed2c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:58.873427 kubelet[3038]: I0317 17:24:58.873102 3038 topology_manager.go:215] "Topology Admit Handler" 
podUID="6cbe98d535968044acbebbd8da17bbe0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-190" Mar 17 17:24:58.936368 kubelet[3038]: E0317 17:24:58.936206 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": dial tcp 172.31.17.190:6443: connect: connection refused" interval="400ms" Mar 17 17:24:59.031769 kubelet[3038]: I0317 17:24:59.031586 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6cbe98d535968044acbebbd8da17bbe0-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-190\" (UID: \"6cbe98d535968044acbebbd8da17bbe0\") " pod="kube-system/kube-scheduler-ip-172-31-17-190" Mar 17 17:24:59.031769 kubelet[3038]: I0317 17:24:59.031653 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:24:59.031769 kubelet[3038]: I0317 17:24:59.031701 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:59.031769 kubelet[3038]: I0317 17:24:59.031737 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:59.032090 kubelet[3038]: I0317 17:24:59.031811 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:59.032090 kubelet[3038]: I0317 17:24:59.031850 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:59.032090 kubelet[3038]: I0317 17:24:59.031884 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-ca-certs\") pod \"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:24:59.032090 kubelet[3038]: I0317 17:24:59.031924 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:24:59.032090 kubelet[3038]: I0317 17:24:59.031960 3038 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:24:59.037901 kubelet[3038]: I0317 17:24:59.037527 3038 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:24:59.038246 kubelet[3038]: E0317 17:24:59.038118 3038 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.190:6443/api/v1/nodes\": dial tcp 172.31.17.190:6443: connect: connection refused" node="ip-172-31-17-190" Mar 17 17:24:59.185161 containerd[2048]: time="2025-03-17T17:24:59.185110352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-190,Uid:70f115e38adef10c7da04f8c4362ed2c,Namespace:kube-system,Attempt:0,}" Mar 17 17:24:59.188224 containerd[2048]: time="2025-03-17T17:24:59.188082968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-190,Uid:a58cc655e509fa594d32023e80991b1e,Namespace:kube-system,Attempt:0,}" Mar 17 17:24:59.191342 containerd[2048]: time="2025-03-17T17:24:59.191009684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-190,Uid:6cbe98d535968044acbebbd8da17bbe0,Namespace:kube-system,Attempt:0,}" Mar 17 17:24:59.337460 kubelet[3038]: E0317 17:24:59.337403 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": dial tcp 172.31.17.190:6443: connect: connection refused" interval="800ms" Mar 17 17:24:59.440908 kubelet[3038]: I0317 17:24:59.440730 3038 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:24:59.441334 kubelet[3038]: E0317 17:24:59.441281 3038 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.190:6443/api/v1/nodes\": dial tcp 172.31.17.190:6443: connect: connection refused" node="ip-172-31-17-190" Mar 17 17:24:59.577606 kubelet[3038]: W0317 17:24:59.577478 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:59.577606 kubelet[3038]: E0317 17:24:59.577570 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.17.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:59.711679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449389161.mount: Deactivated successfully. 
Mar 17 17:24:59.728446 containerd[2048]: time="2025-03-17T17:24:59.726725915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:24:59.734493 containerd[2048]: time="2025-03-17T17:24:59.734424923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 17 17:24:59.739081 containerd[2048]: time="2025-03-17T17:24:59.739029059Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:24:59.743288 containerd[2048]: time="2025-03-17T17:24:59.743238179Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:24:59.746131 containerd[2048]: time="2025-03-17T17:24:59.746086403Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:24:59.749191 containerd[2048]: time="2025-03-17T17:24:59.749134835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:24:59.751599 containerd[2048]: time="2025-03-17T17:24:59.751550207Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:24:59.754995 containerd[2048]: time="2025-03-17T17:24:59.754946231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:24:59.759458 containerd[2048]: time="2025-03-17T17:24:59.759410603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.295787ms" Mar 17 17:24:59.766581 kubelet[3038]: W0317 17:24:59.765939 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:59.766581 kubelet[3038]: E0317 17:24:59.766048 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.17.190:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:24:59.766831 containerd[2048]: time="2025-03-17T17:24:59.766220747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.436691ms" Mar 17 17:24:59.768013 containerd[2048]: time="2025-03-17T17:24:59.767948999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with 
image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.754071ms" Mar 17 17:24:59.979195 containerd[2048]: time="2025-03-17T17:24:59.978944916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:24:59.979816 containerd[2048]: time="2025-03-17T17:24:59.979559784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:24:59.979816 containerd[2048]: time="2025-03-17T17:24:59.979682424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:24:59.982646 containerd[2048]: time="2025-03-17T17:24:59.982211244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:24:59.986575 containerd[2048]: time="2025-03-17T17:24:59.986101836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:24:59.986575 containerd[2048]: time="2025-03-17T17:24:59.986327436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:24:59.989729 containerd[2048]: time="2025-03-17T17:24:59.989408916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:24:59.989729 containerd[2048]: time="2025-03-17T17:24:59.989529708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:24:59.990176 containerd[2048]: time="2025-03-17T17:24:59.989860164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:24:59.990176 containerd[2048]: time="2025-03-17T17:24:59.989970480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:24:59.990546 containerd[2048]: time="2025-03-17T17:24:59.990371784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:24:59.991361 containerd[2048]: time="2025-03-17T17:24:59.991254492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:00.031556 kubelet[3038]: W0317 17:25:00.031466 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-190&limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:25:00.031708 kubelet[3038]: E0317 17:25:00.031567 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.17.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-190&limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:25:00.139263 containerd[2048]: time="2025-03-17T17:25:00.138732873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-190,Uid:a58cc655e509fa594d32023e80991b1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e22ba5c9ecd30f3eea6a1fcfb85db3e063f0ed54fffead1869cbff150bed550\"" Mar 17 17:25:00.139985 kubelet[3038]: E0317 17:25:00.139884 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": dial tcp 172.31.17.190:6443: connect: connection refused" interval="1.6s" Mar 17 17:25:00.150713 containerd[2048]: time="2025-03-17T17:25:00.150657093Z" level=info msg="CreateContainer within sandbox \"9e22ba5c9ecd30f3eea6a1fcfb85db3e063f0ed54fffead1869cbff150bed550\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:25:00.156188 containerd[2048]: time="2025-03-17T17:25:00.156133137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-190,Uid:6cbe98d535968044acbebbd8da17bbe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"736ee269104ec78d30e3747f4ab2775eba491141ae690c0f99dab6a7ae3864a1\"" Mar 17 17:25:00.173352 containerd[2048]: time="2025-03-17T17:25:00.173122797Z" level=info msg="CreateContainer within sandbox \"736ee269104ec78d30e3747f4ab2775eba491141ae690c0f99dab6a7ae3864a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:25:00.190209 containerd[2048]: time="2025-03-17T17:25:00.189938673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-190,Uid:70f115e38adef10c7da04f8c4362ed2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"102eb0f4b32da5b5a94d86d6f1e88857a0294f3269dabd4f1fcf5c20c659ca55\"" Mar 17 17:25:00.196926 containerd[2048]: time="2025-03-17T17:25:00.196634985Z" level=info msg="CreateContainer within sandbox \"102eb0f4b32da5b5a94d86d6f1e88857a0294f3269dabd4f1fcf5c20c659ca55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:25:00.198199 kubelet[3038]: W0317 17:25:00.198076 3038 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:25:00.198199 kubelet[3038]: E0317 17:25:00.198166 3038 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.17.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.190:6443: connect: connection refused Mar 17 17:25:00.203177 
containerd[2048]: time="2025-03-17T17:25:00.203097021Z" level=info msg="CreateContainer within sandbox \"9e22ba5c9ecd30f3eea6a1fcfb85db3e063f0ed54fffead1869cbff150bed550\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a6d075ad1dddc26a5939026f456f60b5c5fc1d5e9ff3a5e393fde361c19ca1e\"" Mar 17 17:25:00.204815 containerd[2048]: time="2025-03-17T17:25:00.204424605Z" level=info msg="StartContainer for \"9a6d075ad1dddc26a5939026f456f60b5c5fc1d5e9ff3a5e393fde361c19ca1e\"" Mar 17 17:25:00.228009 containerd[2048]: time="2025-03-17T17:25:00.227947966Z" level=info msg="CreateContainer within sandbox \"736ee269104ec78d30e3747f4ab2775eba491141ae690c0f99dab6a7ae3864a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d\"" Mar 17 17:25:00.231157 containerd[2048]: time="2025-03-17T17:25:00.229612642Z" level=info msg="StartContainer for \"3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d\"" Mar 17 17:25:00.250459 containerd[2048]: time="2025-03-17T17:25:00.250297750Z" level=info msg="CreateContainer within sandbox \"102eb0f4b32da5b5a94d86d6f1e88857a0294f3269dabd4f1fcf5c20c659ca55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53\"" Mar 17 17:25:00.251825 containerd[2048]: time="2025-03-17T17:25:00.251324350Z" level=info msg="StartContainer for \"5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53\"" Mar 17 17:25:00.252894 kubelet[3038]: I0317 17:25:00.252843 3038 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:25:00.253379 kubelet[3038]: E0317 17:25:00.253328 3038 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.17.190:6443/api/v1/nodes\": dial tcp 172.31.17.190:6443: connect: connection refused" node="ip-172-31-17-190" Mar 17 17:25:00.369176 containerd[2048]: time="2025-03-17T17:25:00.368842294Z" level=info msg="StartContainer for \"9a6d075ad1dddc26a5939026f456f60b5c5fc1d5e9ff3a5e393fde361c19ca1e\" returns successfully" Mar 17 17:25:00.500280 containerd[2048]: time="2025-03-17T17:25:00.499528163Z" level=info msg="StartContainer for \"5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53\" returns successfully" Mar 17 17:25:00.540482 containerd[2048]: time="2025-03-17T17:25:00.540401303Z" level=info msg="StartContainer for \"3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d\" returns successfully" Mar 17 17:25:01.858529 kubelet[3038]: I0317 17:25:01.858481 3038 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:25:04.490617 kubelet[3038]: E0317 17:25:04.490531 3038 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-190\" not found" node="ip-172-31-17-190" Mar 17 17:25:04.596944 kubelet[3038]: I0317 17:25:04.596884 3038 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-190" Mar 17 17:25:04.710520 kubelet[3038]: I0317 17:25:04.710460 3038 apiserver.go:52] "Watching apiserver" Mar 17 17:25:04.730684 kubelet[3038]: I0317 17:25:04.730597 3038 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:06.467098 update_engine[2021]: I20250317 17:25:06.466975 2021 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:25:06.567973 systemd[1]: Reloading requested from client PID 3336 ('systemctl') (unit session-7.scope)... Mar 17 17:25:06.568036 systemd[1]: Reloading... Mar 17 17:25:06.621827 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3331) Mar 17 17:25:06.914822 zram_generator::config[3457]: No configuration found. Mar 17 17:25:06.969031 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3321) Mar 17 17:25:07.229820 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3321) Mar 17 17:25:07.312466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:25:07.529753 systemd[1]: Reloading finished in 961 ms. Mar 17 17:25:07.711704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:07.747231 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:25:07.747880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:07.761554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:25:08.067342 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:25:08.089981 (kubelet)[3694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:25:08.204954 kubelet[3694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:08.204954 kubelet[3694]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:25:08.204954 kubelet[3694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:25:08.205587 kubelet[3694]: I0317 17:25:08.205118 3694 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:25:08.213640 sudo[3706]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 17:25:08.214379 sudo[3706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 17 17:25:08.220439 kubelet[3694]: I0317 17:25:08.219049 3694 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:25:08.220439 kubelet[3694]: I0317 17:25:08.219097 3694 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:25:08.220439 kubelet[3694]: I0317 17:25:08.219491 3694 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:25:08.222835 kubelet[3694]: I0317 17:25:08.222757 3694 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
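
The restarted kubelet (PID 3694) logs "Client rotation is on" and then loads its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, the symlink the certificate manager keeps pointed at the newest cert/key pair. Inspecting that file's validity window needs only the Go standard library; a sketch using the path from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the certificate_store.go line above.
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            panic(err)
        }
        // The file holds the client certificate and key concatenated; report
        // the validity window of each CERTIFICATE block.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
        }
    }
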
Mar 17 17:25:08.225327 kubelet[3694]: I0317 17:25:08.225269 3694 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:25:08.239404 kubelet[3694]: I0317 17:25:08.239351 3694 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:25:08.240630 kubelet[3694]: I0317 17:25:08.240568 3694 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:25:08.242177 kubelet[3694]: I0317 17:25:08.240627 3694 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-190","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:25:08.242177 kubelet[3694]: I0317 17:25:08.240983 3694 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:25:08.242177 kubelet[3694]: I0317 17:25:08.241004 3694 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:25:08.242177 kubelet[3694]: I0317 17:25:08.241063 3694 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:08.242613 kubelet[3694]: I0317 17:25:08.241563 3694 kubelet.go:400] "Attempting to sync node with API server" Mar 17 17:25:08.242613 kubelet[3694]: I0317 17:25:08.242511 3694 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:25:08.242613 kubelet[3694]: I0317 17:25:08.242570 3694 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:25:08.242613 kubelet[3694]: I0317 17:25:08.242599 3694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:25:08.246812 kubelet[3694]: I0317 17:25:08.245057 3694 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:25:08.246812 kubelet[3694]: I0317 17:25:08.245355 3694 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:25:08.247350 kubelet[3694]: I0317 17:25:08.247309 3694 server.go:1264] "Started kubelet" Mar 17 17:25:08.257213 kubelet[3694]: 
I0317 17:25:08.257159 3694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:25:08.269817 kubelet[3694]: I0317 17:25:08.268556 3694 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:25:08.275207 kubelet[3694]: I0317 17:25:08.271134 3694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:25:08.285652 kubelet[3694]: I0317 17:25:08.284702 3694 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:25:08.300859 kubelet[3694]: I0317 17:25:08.294856 3694 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:25:08.306287 kubelet[3694]: I0317 17:25:08.294017 3694 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:25:08.310049 kubelet[3694]: I0317 17:25:08.309987 3694 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:25:08.310331 kubelet[3694]: I0317 17:25:08.310300 3694 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:25:08.326906 kubelet[3694]: I0317 17:25:08.326481 3694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:25:08.334109 kubelet[3694]: I0317 17:25:08.332314 3694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:25:08.334109 kubelet[3694]: I0317 17:25:08.332392 3694 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:25:08.334109 kubelet[3694]: I0317 17:25:08.332424 3694 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:25:08.334109 kubelet[3694]: E0317 17:25:08.332500 3694 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:25:08.363764 kubelet[3694]: E0317 17:25:08.363706 3694 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:25:08.370553 kubelet[3694]: I0317 17:25:08.363754 3694 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:25:08.370553 kubelet[3694]: I0317 17:25:08.370525 3694 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:25:08.370874 kubelet[3694]: I0317 17:25:08.370667 3694 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:25:08.400894 kubelet[3694]: E0317 17:25:08.400859 3694 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Mar 17 17:25:08.403016 kubelet[3694]: I0317 17:25:08.402980 3694 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-17-190" Mar 17 17:25:08.430985 kubelet[3694]: I0317 17:25:08.430838 3694 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-17-190" Mar 17 17:25:08.432367 kubelet[3694]: I0317 17:25:08.432161 3694 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-17-190" Mar 17 17:25:08.432998 kubelet[3694]: E0317 17:25:08.432847 3694 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589107 3694 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589132 3694 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589167 3694 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589406 3694 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589427 3694 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:25:08.589962 kubelet[3694]: I0317 17:25:08.589464 3694 policy_none.go:49] "None policy: Start" Mar 17 17:25:08.593906 kubelet[3694]: I0317 17:25:08.593855 3694 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:25:08.593906 kubelet[3694]: I0317 17:25:08.593910 3694 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:25:08.594373 kubelet[3694]: I0317 17:25:08.594271 3694 state_mem.go:75] "Updated machine memory state" Mar 17 17:25:08.599823 kubelet[3694]: I0317 17:25:08.596682 3694 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:25:08.599823 kubelet[3694]: I0317 17:25:08.596996 3694 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:25:08.599823 kubelet[3694]: I0317 17:25:08.598618 3694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:25:08.633820 kubelet[3694]: I0317 17:25:08.633325 3694 topology_manager.go:215] "Topology Admit Handler" podUID="a58cc655e509fa594d32023e80991b1e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-17-190" Mar 17 17:25:08.633820 kubelet[3694]: I0317 17:25:08.633493 3694 topology_manager.go:215] "Topology Admit Handler" podUID="70f115e38adef10c7da04f8c4362ed2c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.635298 kubelet[3694]: I0317 17:25:08.633673 3694 topology_manager.go:215] 
"Topology Admit Handler" podUID="6cbe98d535968044acbebbd8da17bbe0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-17-190" Mar 17 17:25:08.654966 kubelet[3694]: E0317 17:25:08.653776 3694 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-190\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:25:08.724321 kubelet[3694]: I0317 17:25:08.723837 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6cbe98d535968044acbebbd8da17bbe0-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-190\" (UID: \"6cbe98d535968044acbebbd8da17bbe0\") " pod="kube-system/kube-scheduler-ip-172-31-17-190" Mar 17 17:25:08.724321 kubelet[3694]: I0317 17:25:08.723909 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:25:08.724321 kubelet[3694]: I0317 17:25:08.723952 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.724321 kubelet[3694]: I0317 17:25:08.723992 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.724321 kubelet[3694]: I0317 17:25:08.724037 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.724684 kubelet[3694]: I0317 17:25:08.724076 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.724684 kubelet[3694]: I0317 17:25:08.724111 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70f115e38adef10c7da04f8c4362ed2c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-190\" (UID: \"70f115e38adef10c7da04f8c4362ed2c\") " pod="kube-system/kube-controller-manager-ip-172-31-17-190" Mar 17 17:25:08.724684 kubelet[3694]: I0317 17:25:08.724147 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-ca-certs\") pod 
\"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:25:08.724684 kubelet[3694]: I0317 17:25:08.724185 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a58cc655e509fa594d32023e80991b1e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-190\" (UID: \"a58cc655e509fa594d32023e80991b1e\") " pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:25:09.170539 sudo[3706]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:09.245016 kubelet[3694]: I0317 17:25:09.244532 3694 apiserver.go:52] "Watching apiserver" Mar 17 17:25:09.311260 kubelet[3694]: I0317 17:25:09.311193 3694 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:25:09.451586 kubelet[3694]: E0317 17:25:09.450652 3694 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-190\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-190" Mar 17 17:25:09.497591 kubelet[3694]: I0317 17:25:09.496741 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-190" podStartSLOduration=4.496718228 podStartE2EDuration="4.496718228s" podCreationTimestamp="2025-03-17 17:25:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:09.485111887 +0000 UTC m=+1.386232663" watchObservedRunningTime="2025-03-17 17:25:09.496718228 +0000 UTC m=+1.397838992" Mar 17 17:25:09.515965 kubelet[3694]: I0317 17:25:09.515776 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-190" podStartSLOduration=1.515755028 podStartE2EDuration="1.515755028s" podCreationTimestamp="2025-03-17 17:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:09.498263912 +0000 UTC m=+1.399384676" watchObservedRunningTime="2025-03-17 17:25:09.515755028 +0000 UTC m=+1.416875828" Mar 17 17:25:09.531620 kubelet[3694]: I0317 17:25:09.531329 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-190" podStartSLOduration=1.53131094 podStartE2EDuration="1.53131094s" podCreationTimestamp="2025-03-17 17:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:09.515716616 +0000 UTC m=+1.416837404" watchObservedRunningTime="2025-03-17 17:25:09.53131094 +0000 UTC m=+1.432431728" Mar 17 17:25:11.727842 sudo[2400]: pam_unix(sudo:session): session closed for user root Mar 17 17:25:11.751715 sshd[2399]: Connection closed by 139.178.68.195 port 38408 Mar 17 17:25:11.751535 sshd-session[2396]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:11.760913 systemd[1]: sshd@6-172.31.17.190:22-139.178.68.195:38408.service: Deactivated successfully. Mar 17 17:25:11.766419 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:25:11.769888 systemd-logind[2020]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:25:11.773088 systemd-logind[2020]: Removed session 7. 
Mar 17 17:25:23.393440 kubelet[3694]: I0317 17:25:23.393192 3694 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:25:23.395717 kubelet[3694]: I0317 17:25:23.394513 3694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:25:23.395816 containerd[2048]: time="2025-03-17T17:25:23.394199229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:25:24.407947 kubelet[3694]: I0317 17:25:24.405370 3694 topology_manager.go:215] "Topology Admit Handler" podUID="5ddc34bd-1757-4db7-8586-4bb6533aa9e7" podNamespace="kube-system" podName="kube-proxy-mvjrd" Mar 17 17:25:24.418878 kubelet[3694]: I0317 17:25:24.417264 3694 topology_manager.go:215] "Topology Admit Handler" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" podNamespace="kube-system" podName="cilium-mz67n" Mar 17 17:25:24.431976 kubelet[3694]: I0317 17:25:24.431285 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ddc34bd-1757-4db7-8586-4bb6533aa9e7-kube-proxy\") pod \"kube-proxy-mvjrd\" (UID: \"5ddc34bd-1757-4db7-8586-4bb6533aa9e7\") " pod="kube-system/kube-proxy-mvjrd" Mar 17 17:25:24.432250 kubelet[3694]: I0317 17:25:24.432188 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ddc34bd-1757-4db7-8586-4bb6533aa9e7-lib-modules\") pod \"kube-proxy-mvjrd\" (UID: \"5ddc34bd-1757-4db7-8586-4bb6533aa9e7\") " pod="kube-system/kube-proxy-mvjrd" Mar 17 17:25:24.432949 kubelet[3694]: I0317 17:25:24.432352 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ddc34bd-1757-4db7-8586-4bb6533aa9e7-xtables-lock\") pod \"kube-proxy-mvjrd\" (UID: \"5ddc34bd-1757-4db7-8586-4bb6533aa9e7\") " pod="kube-system/kube-proxy-mvjrd" Mar 17 17:25:24.434852 kubelet[3694]: I0317 17:25:24.433100 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtd2g\" (UniqueName: \"kubernetes.io/projected/5ddc34bd-1757-4db7-8586-4bb6533aa9e7-kube-api-access-vtd2g\") pod \"kube-proxy-mvjrd\" (UID: \"5ddc34bd-1757-4db7-8586-4bb6533aa9e7\") " pod="kube-system/kube-proxy-mvjrd" Mar 17 17:25:24.533919 kubelet[3694]: I0317 17:25:24.533872 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-hostproc\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.534923 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-cgroup\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.534982 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-xtables-lock\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " 
pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.535020 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee9c295-6f97-4d34-8747-582ca0447a7b-clustermesh-secrets\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.535058 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-hubble-tls\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.535118 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-lib-modules\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.536839 kubelet[3694]: I0317 17:25:24.535171 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cni-path\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535204 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-etc-cni-netd\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535258 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-run\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535293 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-bpf-maps\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535327 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-config-path\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535368 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-net\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537277 kubelet[3694]: I0317 17:25:24.535405 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-kernel\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.537585 kubelet[3694]: I0317 17:25:24.535441 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpxwk\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-kube-api-access-cpxwk\") pod \"cilium-mz67n\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " pod="kube-system/cilium-mz67n" Mar 17 17:25:24.544830 kubelet[3694]: I0317 17:25:24.541386 3694 topology_manager.go:215] "Topology Admit Handler" podUID="c0ddce70-2b22-4337-ad0d-e55462248687" podNamespace="kube-system" podName="cilium-operator-599987898-lfh85" Mar 17 17:25:24.636766 kubelet[3694]: I0317 17:25:24.635842 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl9jh\" (UniqueName: \"kubernetes.io/projected/c0ddce70-2b22-4337-ad0d-e55462248687-kube-api-access-wl9jh\") pod \"cilium-operator-599987898-lfh85\" (UID: \"c0ddce70-2b22-4337-ad0d-e55462248687\") " pod="kube-system/cilium-operator-599987898-lfh85" Mar 17 17:25:24.675836 kubelet[3694]: I0317 17:25:24.670975 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ddce70-2b22-4337-ad0d-e55462248687-cilium-config-path\") pod \"cilium-operator-599987898-lfh85\" (UID: \"c0ddce70-2b22-4337-ad0d-e55462248687\") " pod="kube-system/cilium-operator-599987898-lfh85" Mar 17 17:25:24.738203 containerd[2048]: time="2025-03-17T17:25:24.738154907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvjrd,Uid:5ddc34bd-1757-4db7-8586-4bb6533aa9e7,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:24.747053 containerd[2048]: time="2025-03-17T17:25:24.746981219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mz67n,Uid:6ee9c295-6f97-4d34-8747-582ca0447a7b,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:24.831069 containerd[2048]: time="2025-03-17T17:25:24.830812212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:24.831069 containerd[2048]: time="2025-03-17T17:25:24.831002916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:24.831451 containerd[2048]: time="2025-03-17T17:25:24.831091224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:24.833474 containerd[2048]: time="2025-03-17T17:25:24.832958520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:24.850815 containerd[2048]: time="2025-03-17T17:25:24.850572828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:24.851342 containerd[2048]: time="2025-03-17T17:25:24.851174508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:24.851342 containerd[2048]: time="2025-03-17T17:25:24.851284140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:24.852041 containerd[2048]: time="2025-03-17T17:25:24.851841180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:24.864484 containerd[2048]: time="2025-03-17T17:25:24.863983428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lfh85,Uid:c0ddce70-2b22-4337-ad0d-e55462248687,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:24.983753 containerd[2048]: time="2025-03-17T17:25:24.983605440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvjrd,Uid:5ddc34bd-1757-4db7-8586-4bb6533aa9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca7c06ead8384ccb5552a31aeccb37daa3dae2c41db373014650301a4c8c69a\"" Mar 17 17:25:24.995961 containerd[2048]: time="2025-03-17T17:25:24.992569585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:24.995961 containerd[2048]: time="2025-03-17T17:25:24.995445433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:24.995961 containerd[2048]: time="2025-03-17T17:25:24.995488141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:24.996347 containerd[2048]: time="2025-03-17T17:25:24.996284029Z" level=info msg="CreateContainer within sandbox \"6ca7c06ead8384ccb5552a31aeccb37daa3dae2c41db373014650301a4c8c69a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:25:24.999023 containerd[2048]: time="2025-03-17T17:25:24.998816809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:25.015450 containerd[2048]: time="2025-03-17T17:25:25.015272949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mz67n,Uid:6ee9c295-6f97-4d34-8747-582ca0447a7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\"" Mar 17 17:25:25.028384 containerd[2048]: time="2025-03-17T17:25:25.028108917Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 17:25:25.063721 containerd[2048]: time="2025-03-17T17:25:25.063127305Z" level=info msg="CreateContainer within sandbox \"6ca7c06ead8384ccb5552a31aeccb37daa3dae2c41db373014650301a4c8c69a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3cf63903736efa31fe4be78cccbca842697991bc1eb8bf7f3974327f061c0ce0\"" Mar 17 17:25:25.065132 containerd[2048]: time="2025-03-17T17:25:25.065054553Z" level=info msg="StartContainer for \"3cf63903736efa31fe4be78cccbca842697991bc1eb8bf7f3974327f061c0ce0\"" Mar 17 17:25:25.119467 containerd[2048]: time="2025-03-17T17:25:25.119393661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lfh85,Uid:c0ddce70-2b22-4337-ad0d-e55462248687,Namespace:kube-system,Attempt:0,} returns sandbox id \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\"" Mar 17 17:25:25.202701 containerd[2048]: time="2025-03-17T17:25:25.202598638Z" level=info msg="StartContainer for \"3cf63903736efa31fe4be78cccbca842697991bc1eb8bf7f3974327f061c0ce0\" returns successfully" Mar 17 17:25:32.740468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876358720.mount: Deactivated successfully. Mar 17 17:25:35.309741 containerd[2048]: time="2025-03-17T17:25:35.309659096Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:35.311652 containerd[2048]: time="2025-03-17T17:25:35.311579060Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 17 17:25:35.314222 containerd[2048]: time="2025-03-17T17:25:35.314146676Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:35.318476 containerd[2048]: time="2025-03-17T17:25:35.318285440Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.290107859s" Mar 17 17:25:35.318476 containerd[2048]: time="2025-03-17T17:25:35.318348368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 17:25:35.322082 containerd[2048]: time="2025-03-17T17:25:35.321261380Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" 
Mar 17 17:25:35.324041 containerd[2048]: time="2025-03-17T17:25:35.323082812Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:25:35.353389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236158282.mount: Deactivated successfully. Mar 17 17:25:35.358969 containerd[2048]: time="2025-03-17T17:25:35.358897388Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\"" Mar 17 17:25:35.359952 containerd[2048]: time="2025-03-17T17:25:35.359805692Z" level=info msg="StartContainer for \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\"" Mar 17 17:25:35.464605 containerd[2048]: time="2025-03-17T17:25:35.464360685Z" level=info msg="StartContainer for \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\" returns successfully" Mar 17 17:25:35.583621 kubelet[3694]: I0317 17:25:35.582337 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mvjrd" podStartSLOduration=11.582313425 podStartE2EDuration="11.582313425s" podCreationTimestamp="2025-03-17 17:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:25.506825711 +0000 UTC m=+17.407946499" watchObservedRunningTime="2025-03-17 17:25:35.582313425 +0000 UTC m=+27.483434189" Mar 17 17:25:36.342169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018-rootfs.mount: Deactivated successfully. 
Mar 17 17:25:36.755858 containerd[2048]: time="2025-03-17T17:25:36.755737331Z" level=info msg="shim disconnected" id=acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018 namespace=k8s.io Mar 17 17:25:36.755858 containerd[2048]: time="2025-03-17T17:25:36.755854343Z" level=warning msg="cleaning up after shim disconnected" id=acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018 namespace=k8s.io Mar 17 17:25:36.756714 containerd[2048]: time="2025-03-17T17:25:36.755877347Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:36.778250 containerd[2048]: time="2025-03-17T17:25:36.777203987Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:25:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:25:37.534249 containerd[2048]: time="2025-03-17T17:25:37.533763707Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:25:37.589527 containerd[2048]: time="2025-03-17T17:25:37.589454519Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\"" Mar 17 17:25:37.592370 containerd[2048]: time="2025-03-17T17:25:37.591717119Z" level=info msg="StartContainer for \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\"" Mar 17 17:25:37.723826 containerd[2048]: time="2025-03-17T17:25:37.723707460Z" level=info msg="StartContainer for \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\" returns successfully" Mar 17 17:25:37.746329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:25:37.748209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:37.748338 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:37.759626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:25:37.828869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:25:37.848378 containerd[2048]: time="2025-03-17T17:25:37.848244948Z" level=info msg="shim disconnected" id=68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716 namespace=k8s.io Mar 17 17:25:37.848378 containerd[2048]: time="2025-03-17T17:25:37.848329668Z" level=warning msg="cleaning up after shim disconnected" id=68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716 namespace=k8s.io Mar 17 17:25:37.848378 containerd[2048]: time="2025-03-17T17:25:37.848349132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:38.543129 containerd[2048]: time="2025-03-17T17:25:38.542436612Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:25:38.569392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716-rootfs.mount: Deactivated successfully. 
Mar 17 17:25:38.593819 containerd[2048]: time="2025-03-17T17:25:38.589702104Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\"" Mar 17 17:25:38.593819 containerd[2048]: time="2025-03-17T17:25:38.591663804Z" level=info msg="StartContainer for \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\"" Mar 17 17:25:38.612640 containerd[2048]: time="2025-03-17T17:25:38.612559968Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:38.615060 containerd[2048]: time="2025-03-17T17:25:38.614938260Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 17 17:25:38.617979 containerd[2048]: time="2025-03-17T17:25:38.617922060Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:25:38.632250 containerd[2048]: time="2025-03-17T17:25:38.632191752Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.310771708s" Mar 17 17:25:38.632478 containerd[2048]: time="2025-03-17T17:25:38.632443860Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 17:25:38.641504 containerd[2048]: time="2025-03-17T17:25:38.641443212Z" level=info msg="CreateContainer within sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 17:25:38.677439 containerd[2048]: time="2025-03-17T17:25:38.677355552Z" level=info msg="CreateContainer within sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\"" Mar 17 17:25:38.681047 containerd[2048]: time="2025-03-17T17:25:38.680890621Z" level=info msg="StartContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\"" Mar 17 17:25:38.756006 containerd[2048]: time="2025-03-17T17:25:38.755156653Z" level=info msg="StartContainer for \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\" returns successfully" Mar 17 17:25:38.829983 containerd[2048]: time="2025-03-17T17:25:38.829757185Z" level=error msg="collecting metrics for a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c" error="cgroups: cgroup deleted: unknown" Mar 17 17:25:38.836062 containerd[2048]: time="2025-03-17T17:25:38.835891729Z" level=info msg="StartContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" returns successfully" 
Mar 17 17:25:38.909352 containerd[2048]: time="2025-03-17T17:25:38.909182594Z" level=info msg="shim disconnected" id=a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c namespace=k8s.io Mar 17 17:25:38.909352 containerd[2048]: time="2025-03-17T17:25:38.909267398Z" level=warning msg="cleaning up after shim disconnected" id=a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c namespace=k8s.io Mar 17 17:25:38.909352 containerd[2048]: time="2025-03-17T17:25:38.909291926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:39.565257 containerd[2048]: time="2025-03-17T17:25:39.565196029Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:25:39.575720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c-rootfs.mount: Deactivated successfully. Mar 17 17:25:39.622043 containerd[2048]: time="2025-03-17T17:25:39.620368129Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\"" Mar 17 17:25:39.623461 containerd[2048]: time="2025-03-17T17:25:39.623390029Z" level=info msg="StartContainer for \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\"" Mar 17 17:25:39.718916 kubelet[3694]: I0317 17:25:39.718201 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-lfh85" podStartSLOduration=2.205433063 podStartE2EDuration="15.718172594s" podCreationTimestamp="2025-03-17 17:25:24 +0000 UTC" firstStartedPulling="2025-03-17 17:25:25.121287561 +0000 UTC m=+17.022408325" lastFinishedPulling="2025-03-17 17:25:38.634027104 +0000 UTC m=+30.535147856" observedRunningTime="2025-03-17 17:25:39.714307274 +0000 UTC m=+31.615428062" watchObservedRunningTime="2025-03-17 17:25:39.718172594 +0000 UTC m=+31.619293454" Mar 17 17:25:39.992630 containerd[2048]: time="2025-03-17T17:25:39.992311743Z" level=info msg="StartContainer for \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\" returns successfully" Mar 17 17:25:40.090929 containerd[2048]: time="2025-03-17T17:25:40.090444192Z" level=info msg="shim disconnected" id=5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0 namespace=k8s.io Mar 17 17:25:40.090929 containerd[2048]: time="2025-03-17T17:25:40.090518892Z" level=warning msg="cleaning up after shim disconnected" id=5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0 namespace=k8s.io Mar 17 17:25:40.090929 containerd[2048]: time="2025-03-17T17:25:40.090537696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:25:40.569608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0-rootfs.mount: Deactivated successfully. 
Mar 17 17:25:40.576672 containerd[2048]: time="2025-03-17T17:25:40.576379562Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:25:40.612487 containerd[2048]: time="2025-03-17T17:25:40.612416234Z" level=info msg="CreateContainer within sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\"" Mar 17 17:25:40.613379 containerd[2048]: time="2025-03-17T17:25:40.613296770Z" level=info msg="StartContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\"" Mar 17 17:25:40.727267 containerd[2048]: time="2025-03-17T17:25:40.726967467Z" level=info msg="StartContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" returns successfully" Mar 17 17:25:41.021472 kubelet[3694]: I0317 17:25:41.021184 3694 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:25:41.059572 kubelet[3694]: I0317 17:25:41.059438 3694 topology_manager.go:215] "Topology Admit Handler" podUID="48ab0974-a523-48d1-ac63-e57262564646" podNamespace="kube-system" podName="coredns-7db6d8ff4d-snnqx" Mar 17 17:25:41.065259 kubelet[3694]: I0317 17:25:41.065190 3694 topology_manager.go:215] "Topology Admit Handler" podUID="65bf339c-a8c0-4050-b349-6fa91104eac6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vfrbp" Mar 17 17:25:41.187228 kubelet[3694]: I0317 17:25:41.186953 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ab0974-a523-48d1-ac63-e57262564646-config-volume\") pod \"coredns-7db6d8ff4d-snnqx\" (UID: \"48ab0974-a523-48d1-ac63-e57262564646\") " pod="kube-system/coredns-7db6d8ff4d-snnqx" Mar 17 17:25:41.187228 kubelet[3694]: I0317 17:25:41.187024 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65bf339c-a8c0-4050-b349-6fa91104eac6-config-volume\") pod \"coredns-7db6d8ff4d-vfrbp\" (UID: \"65bf339c-a8c0-4050-b349-6fa91104eac6\") " pod="kube-system/coredns-7db6d8ff4d-vfrbp" Mar 17 17:25:41.187228 kubelet[3694]: I0317 17:25:41.187067 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpvgc\" (UniqueName: \"kubernetes.io/projected/48ab0974-a523-48d1-ac63-e57262564646-kube-api-access-vpvgc\") pod \"coredns-7db6d8ff4d-snnqx\" (UID: \"48ab0974-a523-48d1-ac63-e57262564646\") " pod="kube-system/coredns-7db6d8ff4d-snnqx" Mar 17 17:25:41.187228 kubelet[3694]: I0317 17:25:41.187105 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4ch\" (UniqueName: \"kubernetes.io/projected/65bf339c-a8c0-4050-b349-6fa91104eac6-kube-api-access-sg4ch\") pod \"coredns-7db6d8ff4d-vfrbp\" (UID: \"65bf339c-a8c0-4050-b349-6fa91104eac6\") " pod="kube-system/coredns-7db6d8ff4d-vfrbp" Mar 17 17:25:41.410209 containerd[2048]: time="2025-03-17T17:25:41.410148962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snnqx,Uid:48ab0974-a523-48d1-ac63-e57262564646,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:41.414819 containerd[2048]: time="2025-03-17T17:25:41.414682166Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfrbp,Uid:65bf339c-a8c0-4050-b349-6fa91104eac6,Namespace:kube-system,Attempt:0,}" Mar 17 17:25:41.681882 kubelet[3694]: I0317 17:25:41.680214 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mz67n" podStartSLOduration=7.382680592 podStartE2EDuration="17.680189871s" podCreationTimestamp="2025-03-17 17:25:24 +0000 UTC" firstStartedPulling="2025-03-17 17:25:25.022609353 +0000 UTC m=+16.923730105" lastFinishedPulling="2025-03-17 17:25:35.320118548 +0000 UTC m=+27.221239384" observedRunningTime="2025-03-17 17:25:41.676414239 +0000 UTC m=+33.577535087" watchObservedRunningTime="2025-03-17 17:25:41.680189871 +0000 UTC m=+33.581310647" Mar 17 17:25:43.661401 (udev-worker)[4489]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:43.664318 (udev-worker)[4481]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:43.679020 systemd-networkd[1603]: cilium_host: Link UP Mar 17 17:25:43.680218 systemd-networkd[1603]: cilium_net: Link UP Mar 17 17:25:43.680637 systemd-networkd[1603]: cilium_net: Gained carrier Mar 17 17:25:43.681067 systemd-networkd[1603]: cilium_host: Gained carrier Mar 17 17:25:43.833324 (udev-worker)[4535]: Network interface NamePolicy= disabled on kernel command line. Mar 17 17:25:43.843101 systemd-networkd[1603]: cilium_vxlan: Link UP Mar 17 17:25:43.843120 systemd-networkd[1603]: cilium_vxlan: Gained carrier Mar 17 17:25:44.220114 systemd-networkd[1603]: cilium_net: Gained IPv6LL Mar 17 17:25:44.220576 systemd-networkd[1603]: cilium_host: Gained IPv6LL Mar 17 17:25:44.331825 kernel: NET: Registered PF_ALG protocol family Mar 17 17:25:45.052116 systemd-networkd[1603]: cilium_vxlan: Gained IPv6LL Mar 17 17:25:45.663986 systemd-networkd[1603]: lxc_health: Link UP Mar 17 17:25:45.673453 systemd-networkd[1603]: lxc_health: Gained carrier Mar 17 17:25:46.056200 systemd-networkd[1603]: lxcefcdea334d8b: Link UP Mar 17 17:25:46.064858 kernel: eth0: renamed from tmp33741 Mar 17 17:25:46.082403 systemd-networkd[1603]: lxcefcdea334d8b: Gained carrier Mar 17 17:25:46.085942 systemd-networkd[1603]: lxcf26b2c9ac691: Link UP Mar 17 17:25:46.098723 (udev-worker)[4534]: Network interface NamePolicy= disabled on kernel command line. 
Mar 17 17:25:46.110773 kernel: eth0: renamed from tmp9de60 Mar 17 17:25:46.123422 systemd-networkd[1603]: lxcf26b2c9ac691: Gained carrier Mar 17 17:25:46.971972 systemd-networkd[1603]: lxc_health: Gained IPv6LL Mar 17 17:25:47.292051 systemd-networkd[1603]: lxcf26b2c9ac691: Gained IPv6LL Mar 17 17:25:47.484014 systemd-networkd[1603]: lxcefcdea334d8b: Gained IPv6LL Mar 17 17:25:49.610247 ntpd[1999]: Listen normally on 6 cilium_host 192.168.0.45:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 6 cilium_host 192.168.0.45:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 7 cilium_net [fe80::182a:e6ff:fec9:eb6a%4]:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 8 cilium_host [fe80::c4f9:c5ff:fe36:e189%5]:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 9 cilium_vxlan [fe80::a851:5cff:fe9a:2782%6]:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 10 lxc_health [fe80::a820:99ff:fea8:36b5%8]:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 11 lxcefcdea334d8b [fe80::ce5:9fff:fe61:1160%10]:123 Mar 17 17:25:49.611584 ntpd[1999]: 17 Mar 17:25:49 ntpd[1999]: Listen normally on 12 lxcf26b2c9ac691 [fe80::3c1a:deff:fed0:ee96%12]:123 Mar 17 17:25:49.610871 ntpd[1999]: Listen normally on 7 cilium_net [fe80::182a:e6ff:fec9:eb6a%4]:123 Mar 17 17:25:49.610972 ntpd[1999]: Listen normally on 8 cilium_host [fe80::c4f9:c5ff:fe36:e189%5]:123 Mar 17 17:25:49.611042 ntpd[1999]: Listen normally on 9 cilium_vxlan [fe80::a851:5cff:fe9a:2782%6]:123 Mar 17 17:25:49.611117 ntpd[1999]: Listen normally on 10 lxc_health [fe80::a820:99ff:fea8:36b5%8]:123 Mar 17 17:25:49.611185 ntpd[1999]: Listen normally on 11 lxcefcdea334d8b [fe80::ce5:9fff:fe61:1160%10]:123 Mar 17 17:25:49.611251 ntpd[1999]: Listen normally on 12 lxcf26b2c9ac691 [fe80::3c1a:deff:fed0:ee96%12]:123 Mar 17 17:25:51.444391 systemd[1]: Started sshd@7-172.31.17.190:22-139.178.68.195:42420.service - OpenSSH per-connection server daemon (139.178.68.195:42420). Mar 17 17:25:51.635372 sshd[4884]: Accepted publickey for core from 139.178.68.195 port 42420 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:51.638544 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:51.649445 systemd-logind[2020]: New session 8 of user core. Mar 17 17:25:51.660706 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:25:51.982325 sshd[4887]: Connection closed by 139.178.68.195 port 42420 Mar 17 17:25:51.983204 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:51.996592 systemd[1]: sshd@7-172.31.17.190:22-139.178.68.195:42420.service: Deactivated successfully. Mar 17 17:25:51.999913 systemd-logind[2020]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:25:52.019742 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:25:52.031368 systemd-logind[2020]: Removed session 8. Mar 17 17:25:54.758612 containerd[2048]: time="2025-03-17T17:25:54.758145028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:54.758612 containerd[2048]: time="2025-03-17T17:25:54.758251576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:54.758612 containerd[2048]: time="2025-03-17T17:25:54.758280652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:54.770442 containerd[2048]: time="2025-03-17T17:25:54.761693752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:54.780181 containerd[2048]: time="2025-03-17T17:25:54.771872164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:25:54.780181 containerd[2048]: time="2025-03-17T17:25:54.772096156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:25:54.780181 containerd[2048]: time="2025-03-17T17:25:54.772539664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:54.780181 containerd[2048]: time="2025-03-17T17:25:54.773491588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:25:55.011552 containerd[2048]: time="2025-03-17T17:25:55.011198882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-snnqx,Uid:48ab0974-a523-48d1-ac63-e57262564646,Namespace:kube-system,Attempt:0,} returns sandbox id \"337410c8bf40aa0eacc5ccd2865ed2bafe9e2ff0fed98b8d90adbe4fc1054666\"" Mar 17 17:25:55.018122 containerd[2048]: time="2025-03-17T17:25:55.016015598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfrbp,Uid:65bf339c-a8c0-4050-b349-6fa91104eac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"9de606d4e1f61f792ebc9918b4cee6fb95e1aeffcb46786a114d887ddf0efaef\"" Mar 17 17:25:55.030514 containerd[2048]: time="2025-03-17T17:25:55.030429974Z" level=info msg="CreateContainer within sandbox \"337410c8bf40aa0eacc5ccd2865ed2bafe9e2ff0fed98b8d90adbe4fc1054666\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:25:55.034129 containerd[2048]: time="2025-03-17T17:25:55.033870026Z" level=info msg="CreateContainer within sandbox \"9de606d4e1f61f792ebc9918b4cee6fb95e1aeffcb46786a114d887ddf0efaef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:25:55.094092 containerd[2048]: time="2025-03-17T17:25:55.094019258Z" level=info msg="CreateContainer within sandbox \"9de606d4e1f61f792ebc9918b4cee6fb95e1aeffcb46786a114d887ddf0efaef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7be415dc0300e702af5f4f8f24cd782822513ab7ab8203a3e664e78b324035a0\"" Mar 17 17:25:55.098167 containerd[2048]: time="2025-03-17T17:25:55.096769718Z" level=info msg="StartContainer for \"7be415dc0300e702af5f4f8f24cd782822513ab7ab8203a3e664e78b324035a0\"" Mar 17 17:25:55.098167 containerd[2048]: time="2025-03-17T17:25:55.097294862Z" level=info msg="CreateContainer within sandbox \"337410c8bf40aa0eacc5ccd2865ed2bafe9e2ff0fed98b8d90adbe4fc1054666\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b7aaafe60a85e28959fc3eb49b537f94811fe80dd2a79d20c2728bc057b9efa\"" Mar 17 17:25:55.099693 containerd[2048]: time="2025-03-17T17:25:55.099241022Z" level=info msg="StartContainer for \"8b7aaafe60a85e28959fc3eb49b537f94811fe80dd2a79d20c2728bc057b9efa\"" Mar 17 17:25:55.269715 containerd[2048]: 
time="2025-03-17T17:25:55.269428551Z" level=info msg="StartContainer for \"7be415dc0300e702af5f4f8f24cd782822513ab7ab8203a3e664e78b324035a0\" returns successfully" Mar 17 17:25:55.288382 containerd[2048]: time="2025-03-17T17:25:55.288327807Z" level=info msg="StartContainer for \"8b7aaafe60a85e28959fc3eb49b537f94811fe80dd2a79d20c2728bc057b9efa\" returns successfully" Mar 17 17:25:55.680458 kubelet[3694]: I0317 17:25:55.680357 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vfrbp" podStartSLOduration=31.680335793 podStartE2EDuration="31.680335793s" podCreationTimestamp="2025-03-17 17:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:55.679869665 +0000 UTC m=+47.580990477" watchObservedRunningTime="2025-03-17 17:25:55.680335793 +0000 UTC m=+47.581456557" Mar 17 17:25:55.736170 kubelet[3694]: I0317 17:25:55.734372 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-snnqx" podStartSLOduration=31.734347229 podStartE2EDuration="31.734347229s" podCreationTimestamp="2025-03-17 17:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:25:55.706143473 +0000 UTC m=+47.607264261" watchObservedRunningTime="2025-03-17 17:25:55.734347229 +0000 UTC m=+47.635467993" Mar 17 17:25:55.779770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601566532.mount: Deactivated successfully. Mar 17 17:25:57.013464 systemd[1]: Started sshd@8-172.31.17.190:22-139.178.68.195:34816.service - OpenSSH per-connection server daemon (139.178.68.195:34816). Mar 17 17:25:57.209360 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 34816 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:25:57.211996 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:25:57.219363 systemd-logind[2020]: New session 9 of user core. Mar 17 17:25:57.227294 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:25:57.472644 sshd[5076]: Connection closed by 139.178.68.195 port 34816 Mar 17 17:25:57.473534 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Mar 17 17:25:57.479719 systemd[1]: sshd@8-172.31.17.190:22-139.178.68.195:34816.service: Deactivated successfully. Mar 17 17:25:57.486549 systemd-logind[2020]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:25:57.488186 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:25:57.490108 systemd-logind[2020]: Removed session 9. Mar 17 17:26:02.502296 systemd[1]: Started sshd@9-172.31.17.190:22-139.178.68.195:34832.service - OpenSSH per-connection server daemon (139.178.68.195:34832). Mar 17 17:26:02.694620 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 34832 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:02.697201 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:02.705016 systemd-logind[2020]: New session 10 of user core. Mar 17 17:26:02.716708 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 17 17:26:02.960368 sshd[5094]: Connection closed by 139.178.68.195 port 34832 Mar 17 17:26:02.961288 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:02.967712 systemd[1]: sshd@9-172.31.17.190:22-139.178.68.195:34832.service: Deactivated successfully. Mar 17 17:26:02.976320 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:26:02.979262 systemd-logind[2020]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:26:02.981189 systemd-logind[2020]: Removed session 10. Mar 17 17:26:07.992261 systemd[1]: Started sshd@10-172.31.17.190:22-139.178.68.195:57486.service - OpenSSH per-connection server daemon (139.178.68.195:57486). Mar 17 17:26:08.189976 sshd[5107]: Accepted publickey for core from 139.178.68.195 port 57486 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:08.192414 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:08.200350 systemd-logind[2020]: New session 11 of user core. Mar 17 17:26:08.207466 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:26:08.489251 sshd[5110]: Connection closed by 139.178.68.195 port 57486 Mar 17 17:26:08.489822 sshd-session[5107]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:08.498110 systemd[1]: sshd@10-172.31.17.190:22-139.178.68.195:57486.service: Deactivated successfully. Mar 17 17:26:08.503839 systemd-logind[2020]: Session 11 logged out. Waiting for processes to exit. Mar 17 17:26:08.504909 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:26:08.509097 systemd-logind[2020]: Removed session 11. Mar 17 17:26:13.520261 systemd[1]: Started sshd@11-172.31.17.190:22-139.178.68.195:57500.service - OpenSSH per-connection server daemon (139.178.68.195:57500). Mar 17 17:26:13.707038 sshd[5123]: Accepted publickey for core from 139.178.68.195 port 57500 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:13.709451 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:13.717989 systemd-logind[2020]: New session 12 of user core. Mar 17 17:26:13.726308 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:26:13.970467 sshd[5126]: Connection closed by 139.178.68.195 port 57500 Mar 17 17:26:13.971451 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:13.977638 systemd[1]: sshd@11-172.31.17.190:22-139.178.68.195:57500.service: Deactivated successfully. Mar 17 17:26:13.985910 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:26:13.987744 systemd-logind[2020]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:26:13.989751 systemd-logind[2020]: Removed session 12. Mar 17 17:26:14.003287 systemd[1]: Started sshd@12-172.31.17.190:22-139.178.68.195:57504.service - OpenSSH per-connection server daemon (139.178.68.195:57504). Mar 17 17:26:14.191558 sshd[5137]: Accepted publickey for core from 139.178.68.195 port 57504 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:14.194126 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:14.201580 systemd-logind[2020]: New session 13 of user core. Mar 17 17:26:14.212424 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 17 17:26:14.534381 sshd[5140]: Connection closed by 139.178.68.195 port 57504 Mar 17 17:26:14.537675 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:14.550761 systemd[1]: sshd@12-172.31.17.190:22-139.178.68.195:57504.service: Deactivated successfully. Mar 17 17:26:14.562604 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:26:14.578108 systemd-logind[2020]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:26:14.584486 systemd[1]: Started sshd@13-172.31.17.190:22-139.178.68.195:57510.service - OpenSSH per-connection server daemon (139.178.68.195:57510). Mar 17 17:26:14.589980 systemd-logind[2020]: Removed session 13. Mar 17 17:26:14.782261 sshd[5149]: Accepted publickey for core from 139.178.68.195 port 57510 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:14.784701 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:14.792576 systemd-logind[2020]: New session 14 of user core. Mar 17 17:26:14.803505 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:26:15.048513 sshd[5152]: Connection closed by 139.178.68.195 port 57510 Mar 17 17:26:15.049613 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:15.057431 systemd[1]: sshd@13-172.31.17.190:22-139.178.68.195:57510.service: Deactivated successfully. Mar 17 17:26:15.064239 systemd-logind[2020]: Session 14 logged out. Waiting for processes to exit. Mar 17 17:26:15.065168 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:26:15.068634 systemd-logind[2020]: Removed session 14. Mar 17 17:26:20.080395 systemd[1]: Started sshd@14-172.31.17.190:22-139.178.68.195:49794.service - OpenSSH per-connection server daemon (139.178.68.195:49794). Mar 17 17:26:20.265726 sshd[5162]: Accepted publickey for core from 139.178.68.195 port 49794 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:20.268237 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:20.277072 systemd-logind[2020]: New session 15 of user core. Mar 17 17:26:20.283389 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:26:20.532711 sshd[5165]: Connection closed by 139.178.68.195 port 49794 Mar 17 17:26:20.533629 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:20.541576 systemd[1]: sshd@14-172.31.17.190:22-139.178.68.195:49794.service: Deactivated successfully. Mar 17 17:26:20.549365 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:26:20.551058 systemd-logind[2020]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:26:20.552771 systemd-logind[2020]: Removed session 15. Mar 17 17:26:25.569438 systemd[1]: Started sshd@15-172.31.17.190:22-139.178.68.195:49802.service - OpenSSH per-connection server daemon (139.178.68.195:49802). Mar 17 17:26:25.750938 sshd[5178]: Accepted publickey for core from 139.178.68.195 port 49802 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:25.753326 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:25.763261 systemd-logind[2020]: New session 16 of user core. Mar 17 17:26:25.773246 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 17 17:26:26.043218 sshd[5181]: Connection closed by 139.178.68.195 port 49802 Mar 17 17:26:26.044103 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:26.052402 systemd[1]: sshd@15-172.31.17.190:22-139.178.68.195:49802.service: Deactivated successfully. Mar 17 17:26:26.060362 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:26:26.062221 systemd-logind[2020]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:26:26.064160 systemd-logind[2020]: Removed session 16. Mar 17 17:26:31.073269 systemd[1]: Started sshd@16-172.31.17.190:22-139.178.68.195:43712.service - OpenSSH per-connection server daemon (139.178.68.195:43712). Mar 17 17:26:31.271524 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 43712 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:31.274161 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:31.282980 systemd-logind[2020]: New session 17 of user core. Mar 17 17:26:31.290304 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 17 17:26:31.540296 sshd[5196]: Connection closed by 139.178.68.195 port 43712 Mar 17 17:26:31.541426 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:31.546740 systemd[1]: sshd@16-172.31.17.190:22-139.178.68.195:43712.service: Deactivated successfully. Mar 17 17:26:31.556485 systemd-logind[2020]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:26:31.557925 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:26:31.559668 systemd-logind[2020]: Removed session 17. Mar 17 17:26:31.571277 systemd[1]: Started sshd@17-172.31.17.190:22-139.178.68.195:43714.service - OpenSSH per-connection server daemon (139.178.68.195:43714). Mar 17 17:26:31.767091 sshd[5207]: Accepted publickey for core from 139.178.68.195 port 43714 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:31.769523 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:31.777550 systemd-logind[2020]: New session 18 of user core. Mar 17 17:26:31.787279 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:26:32.104535 sshd[5210]: Connection closed by 139.178.68.195 port 43714 Mar 17 17:26:32.105177 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:32.114541 systemd-logind[2020]: Session 18 logged out. Waiting for processes to exit. Mar 17 17:26:32.115568 systemd[1]: sshd@17-172.31.17.190:22-139.178.68.195:43714.service: Deactivated successfully. Mar 17 17:26:32.122603 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:26:32.124689 systemd-logind[2020]: Removed session 18. Mar 17 17:26:32.137279 systemd[1]: Started sshd@18-172.31.17.190:22-139.178.68.195:43720.service - OpenSSH per-connection server daemon (139.178.68.195:43720). Mar 17 17:26:32.329705 sshd[5218]: Accepted publickey for core from 139.178.68.195 port 43720 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:32.332293 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:32.343008 systemd-logind[2020]: New session 19 of user core. Mar 17 17:26:32.349366 systemd[1]: Started session-19.scope - Session 19 of User core. 
Mar 17 17:26:35.048046 sshd[5221]: Connection closed by 139.178.68.195 port 43720 Mar 17 17:26:35.050509 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:35.063917 systemd-logind[2020]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:26:35.068655 systemd[1]: sshd@18-172.31.17.190:22-139.178.68.195:43720.service: Deactivated successfully. Mar 17 17:26:35.078692 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:26:35.096952 systemd[1]: Started sshd@19-172.31.17.190:22-139.178.68.195:43734.service - OpenSSH per-connection server daemon (139.178.68.195:43734). Mar 17 17:26:35.098916 systemd-logind[2020]: Removed session 19. Mar 17 17:26:35.294378 sshd[5237]: Accepted publickey for core from 139.178.68.195 port 43734 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:35.296821 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:35.305051 systemd-logind[2020]: New session 20 of user core. Mar 17 17:26:35.311460 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 17 17:26:35.861776 sshd[5240]: Connection closed by 139.178.68.195 port 43734 Mar 17 17:26:35.862865 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:35.871707 systemd[1]: sshd@19-172.31.17.190:22-139.178.68.195:43734.service: Deactivated successfully. Mar 17 17:26:35.878000 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:26:35.879591 systemd-logind[2020]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:26:35.882902 systemd-logind[2020]: Removed session 20. Mar 17 17:26:35.895266 systemd[1]: Started sshd@20-172.31.17.190:22-139.178.68.195:40864.service - OpenSSH per-connection server daemon (139.178.68.195:40864). Mar 17 17:26:36.094857 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 40864 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:36.097278 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:36.105339 systemd-logind[2020]: New session 21 of user core. Mar 17 17:26:36.111342 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:26:36.349697 sshd[5253]: Connection closed by 139.178.68.195 port 40864 Mar 17 17:26:36.350090 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:36.359213 systemd[1]: sshd@20-172.31.17.190:22-139.178.68.195:40864.service: Deactivated successfully. Mar 17 17:26:36.364826 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:26:36.366618 systemd-logind[2020]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:26:36.369351 systemd-logind[2020]: Removed session 21. Mar 17 17:26:41.387208 systemd[1]: Started sshd@21-172.31.17.190:22-139.178.68.195:40880.service - OpenSSH per-connection server daemon (139.178.68.195:40880). Mar 17 17:26:41.575185 sshd[5264]: Accepted publickey for core from 139.178.68.195 port 40880 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:41.577746 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:41.585846 systemd-logind[2020]: New session 22 of user core. Mar 17 17:26:41.595297 systemd[1]: Started session-22.scope - Session 22 of User core. 
Mar 17 17:26:41.837824 sshd[5267]: Connection closed by 139.178.68.195 port 40880 Mar 17 17:26:41.838758 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:41.844394 systemd[1]: sshd@21-172.31.17.190:22-139.178.68.195:40880.service: Deactivated successfully. Mar 17 17:26:41.852583 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:26:41.858275 systemd-logind[2020]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:26:41.860452 systemd-logind[2020]: Removed session 22. Mar 17 17:26:46.879358 systemd[1]: Started sshd@22-172.31.17.190:22-139.178.68.195:59170.service - OpenSSH per-connection server daemon (139.178.68.195:59170). Mar 17 17:26:47.073361 sshd[5282]: Accepted publickey for core from 139.178.68.195 port 59170 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:47.076009 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:47.083501 systemd-logind[2020]: New session 23 of user core. Mar 17 17:26:47.090434 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 17 17:26:47.331458 sshd[5285]: Connection closed by 139.178.68.195 port 59170 Mar 17 17:26:47.332424 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:47.340242 systemd[1]: sshd@22-172.31.17.190:22-139.178.68.195:59170.service: Deactivated successfully. Mar 17 17:26:47.345662 systemd-logind[2020]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:26:47.346277 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:26:47.350353 systemd-logind[2020]: Removed session 23. Mar 17 17:26:52.362655 systemd[1]: Started sshd@23-172.31.17.190:22-139.178.68.195:59180.service - OpenSSH per-connection server daemon (139.178.68.195:59180). Mar 17 17:26:52.551960 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 59180 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:52.554421 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:52.561869 systemd-logind[2020]: New session 24 of user core. Mar 17 17:26:52.570336 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:26:52.814441 sshd[5299]: Connection closed by 139.178.68.195 port 59180 Mar 17 17:26:52.815375 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:52.823478 systemd[1]: sshd@23-172.31.17.190:22-139.178.68.195:59180.service: Deactivated successfully. Mar 17 17:26:52.830587 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:26:52.831026 systemd-logind[2020]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:26:52.834417 systemd-logind[2020]: Removed session 24. Mar 17 17:26:57.848318 systemd[1]: Started sshd@24-172.31.17.190:22-139.178.68.195:39510.service - OpenSSH per-connection server daemon (139.178.68.195:39510). Mar 17 17:26:58.042896 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 39510 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:58.045398 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:58.053514 systemd-logind[2020]: New session 25 of user core. Mar 17 17:26:58.061294 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 17 17:26:58.304307 sshd[5315]: Connection closed by 139.178.68.195 port 39510 Mar 17 17:26:58.305319 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Mar 17 17:26:58.310515 systemd[1]: sshd@24-172.31.17.190:22-139.178.68.195:39510.service: Deactivated successfully. Mar 17 17:26:58.320397 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:26:58.323230 systemd-logind[2020]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:26:58.325010 systemd-logind[2020]: Removed session 25. Mar 17 17:26:58.334707 systemd[1]: Started sshd@25-172.31.17.190:22-139.178.68.195:39526.service - OpenSSH per-connection server daemon (139.178.68.195:39526). Mar 17 17:26:58.529019 sshd[5326]: Accepted publickey for core from 139.178.68.195 port 39526 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:26:58.531461 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:26:58.539847 systemd-logind[2020]: New session 26 of user core. Mar 17 17:26:58.547431 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 17 17:27:01.189944 containerd[2048]: time="2025-03-17T17:27:01.188765502Z" level=info msg="StopContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" with timeout 30 (s)" Mar 17 17:27:01.193411 containerd[2048]: time="2025-03-17T17:27:01.192694734Z" level=info msg="Stop container \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" with signal terminated" Mar 17 17:27:01.229401 containerd[2048]: time="2025-03-17T17:27:01.229298539Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:27:01.247687 containerd[2048]: time="2025-03-17T17:27:01.247526755Z" level=info msg="StopContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" with timeout 2 (s)" Mar 17 17:27:01.248533 containerd[2048]: time="2025-03-17T17:27:01.248396191Z" level=info msg="Stop container \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" with signal terminated" Mar 17 17:27:01.266150 systemd-networkd[1603]: lxc_health: Link DOWN Mar 17 17:27:01.266164 systemd-networkd[1603]: lxc_health: Lost carrier Mar 17 17:27:01.298946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc-rootfs.mount: Deactivated successfully. Mar 17 17:27:01.323833 containerd[2048]: time="2025-03-17T17:27:01.323507167Z" level=info msg="shim disconnected" id=fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc namespace=k8s.io Mar 17 17:27:01.324071 containerd[2048]: time="2025-03-17T17:27:01.323875279Z" level=warning msg="cleaning up after shim disconnected" id=fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc namespace=k8s.io Mar 17 17:27:01.324071 containerd[2048]: time="2025-03-17T17:27:01.323902339Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:01.351066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd-rootfs.mount: Deactivated successfully. 
Mar 17 17:27:01.364074 containerd[2048]: time="2025-03-17T17:27:01.363980635Z" level=info msg="shim disconnected" id=0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd namespace=k8s.io Mar 17 17:27:01.364074 containerd[2048]: time="2025-03-17T17:27:01.364059871Z" level=warning msg="cleaning up after shim disconnected" id=0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd namespace=k8s.io Mar 17 17:27:01.364660 containerd[2048]: time="2025-03-17T17:27:01.364081987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:01.366135 containerd[2048]: time="2025-03-17T17:27:01.365899399Z" level=info msg="StopContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" returns successfully" Mar 17 17:27:01.367755 containerd[2048]: time="2025-03-17T17:27:01.367690903Z" level=info msg="StopPodSandbox for \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\"" Mar 17 17:27:01.367943 containerd[2048]: time="2025-03-17T17:27:01.367841539Z" level=info msg="Container to stop \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.373533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3-shm.mount: Deactivated successfully. Mar 17 17:27:01.407659 containerd[2048]: time="2025-03-17T17:27:01.407604727Z" level=info msg="StopContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" returns successfully" Mar 17 17:27:01.408852 containerd[2048]: time="2025-03-17T17:27:01.408809095Z" level=info msg="StopPodSandbox for \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\"" Mar 17 17:27:01.409709 containerd[2048]: time="2025-03-17T17:27:01.409604347Z" level=info msg="Container to stop \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.409977 containerd[2048]: time="2025-03-17T17:27:01.409943395Z" level=info msg="Container to stop \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.410087 containerd[2048]: time="2025-03-17T17:27:01.410059735Z" level=info msg="Container to stop \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.410219 containerd[2048]: time="2025-03-17T17:27:01.410189767Z" level=info msg="Container to stop \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.410940 containerd[2048]: time="2025-03-17T17:27:01.410893099Z" level=info msg="Container to stop \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 17:27:01.416082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059-shm.mount: Deactivated successfully. 
Mar 17 17:27:01.470655 containerd[2048]: time="2025-03-17T17:27:01.470455628Z" level=info msg="shim disconnected" id=f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3 namespace=k8s.io Mar 17 17:27:01.472415 containerd[2048]: time="2025-03-17T17:27:01.472340192Z" level=warning msg="cleaning up after shim disconnected" id=f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3 namespace=k8s.io Mar 17 17:27:01.472634 containerd[2048]: time="2025-03-17T17:27:01.472601744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:01.498414 containerd[2048]: time="2025-03-17T17:27:01.498339116Z" level=info msg="shim disconnected" id=2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059 namespace=k8s.io Mar 17 17:27:01.499034 containerd[2048]: time="2025-03-17T17:27:01.498748088Z" level=warning msg="cleaning up after shim disconnected" id=2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059 namespace=k8s.io Mar 17 17:27:01.499034 containerd[2048]: time="2025-03-17T17:27:01.498776420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:01.509481 containerd[2048]: time="2025-03-17T17:27:01.509411192Z" level=info msg="TearDown network for sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" successfully" Mar 17 17:27:01.509481 containerd[2048]: time="2025-03-17T17:27:01.509469620Z" level=info msg="StopPodSandbox for \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" returns successfully" Mar 17 17:27:01.530511 containerd[2048]: time="2025-03-17T17:27:01.530332988Z" level=info msg="TearDown network for sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" successfully" Mar 17 17:27:01.530777 containerd[2048]: time="2025-03-17T17:27:01.530735684Z" level=info msg="StopPodSandbox for \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" returns successfully" Mar 17 17:27:01.621833 kubelet[3694]: I0317 17:27:01.621612 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpxwk\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-kube-api-access-cpxwk\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.621833 kubelet[3694]: I0317 17:27:01.621701 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-cgroup\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.621833 kubelet[3694]: I0317 17:27:01.621742 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl9jh\" (UniqueName: \"kubernetes.io/projected/c0ddce70-2b22-4337-ad0d-e55462248687-kube-api-access-wl9jh\") pod \"c0ddce70-2b22-4337-ad0d-e55462248687\" (UID: \"c0ddce70-2b22-4337-ad0d-e55462248687\") " Mar 17 17:27:01.623182 kubelet[3694]: I0317 17:27:01.622210 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ddce70-2b22-4337-ad0d-e55462248687-cilium-config-path\") pod \"c0ddce70-2b22-4337-ad0d-e55462248687\" (UID: \"c0ddce70-2b22-4337-ad0d-e55462248687\") " Mar 17 17:27:01.623182 kubelet[3694]: I0317 17:27:01.622296 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/6ee9c295-6f97-4d34-8747-582ca0447a7b-clustermesh-secrets\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.623182 kubelet[3694]: I0317 17:27:01.622333 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-kernel\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.623182 kubelet[3694]: I0317 17:27:01.622837 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-etc-cni-netd\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.623182 kubelet[3694]: I0317 17:27:01.622936 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-lib-modules\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.622979 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-bpf-maps\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.624056 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-config-path\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.624095 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-hostproc\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.624133 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-hubble-tls\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.624168 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-xtables-lock\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.624816 kubelet[3694]: I0317 17:27:01.624202 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-run\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.625374 kubelet[3694]: I0317 17:27:01.624234 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-net\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.625374 kubelet[3694]: I0317 17:27:01.624270 3694 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cni-path\") pod \"6ee9c295-6f97-4d34-8747-582ca0447a7b\" (UID: \"6ee9c295-6f97-4d34-8747-582ca0447a7b\") " Mar 17 17:27:01.625374 kubelet[3694]: I0317 17:27:01.623629 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.625374 kubelet[3694]: I0317 17:27:01.623663 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.625374 kubelet[3694]: I0317 17:27:01.623689 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.625672 kubelet[3694]: I0317 17:27:01.623712 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.625672 kubelet[3694]: I0317 17:27:01.624374 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cni-path" (OuterVolumeSpecName: "cni-path") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.625672 kubelet[3694]: I0317 17:27:01.624401 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.627928 kubelet[3694]: I0317 17:27:01.627605 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-hostproc" (OuterVolumeSpecName: "hostproc") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.632231 kubelet[3694]: I0317 17:27:01.632046 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.632231 kubelet[3694]: I0317 17:27:01.632141 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.632231 kubelet[3694]: I0317 17:27:01.632189 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 17:27:01.634350 kubelet[3694]: I0317 17:27:01.634263 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-kube-api-access-cpxwk" (OuterVolumeSpecName: "kube-api-access-cpxwk") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "kube-api-access-cpxwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:27:01.635809 kubelet[3694]: I0317 17:27:01.635334 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ddce70-2b22-4337-ad0d-e55462248687-kube-api-access-wl9jh" (OuterVolumeSpecName: "kube-api-access-wl9jh") pod "c0ddce70-2b22-4337-ad0d-e55462248687" (UID: "c0ddce70-2b22-4337-ad0d-e55462248687"). InnerVolumeSpecName "kube-api-access-wl9jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:27:01.638468 kubelet[3694]: I0317 17:27:01.638400 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:27:01.639619 kubelet[3694]: I0317 17:27:01.639563 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0ddce70-2b22-4337-ad0d-e55462248687-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0ddce70-2b22-4337-ad0d-e55462248687" (UID: "c0ddce70-2b22-4337-ad0d-e55462248687"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 17:27:01.640415 kubelet[3694]: I0317 17:27:01.640265 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee9c295-6f97-4d34-8747-582ca0447a7b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 17:27:01.643141 kubelet[3694]: I0317 17:27:01.643083 3694 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6ee9c295-6f97-4d34-8747-582ca0447a7b" (UID: "6ee9c295-6f97-4d34-8747-582ca0447a7b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 17:27:01.724861 kubelet[3694]: I0317 17:27:01.724622 3694 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-hostproc\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.724861 kubelet[3694]: I0317 17:27:01.724679 3694 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-hubble-tls\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.724703 3694 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-xtables-lock\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725418 3694 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-run\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725448 3694 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-net\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725652 3694 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cni-path\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725672 3694 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cpxwk\" (UniqueName: \"kubernetes.io/projected/6ee9c295-6f97-4d34-8747-582ca0447a7b-kube-api-access-cpxwk\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725820 3694 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-cgroup\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.725842 kubelet[3694]: I0317 17:27:01.725849 3694 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wl9jh\" (UniqueName: \"kubernetes.io/projected/c0ddce70-2b22-4337-ad0d-e55462248687-kube-api-access-wl9jh\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726249 kubelet[3694]: I0317 17:27:01.725870 3694 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0ddce70-2b22-4337-ad0d-e55462248687-cilium-config-path\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726249 kubelet[3694]: I0317 17:27:01.726053 3694 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ee9c295-6f97-4d34-8747-582ca0447a7b-clustermesh-secrets\") on node 
\"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726249 kubelet[3694]: I0317 17:27:01.726087 3694 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-host-proc-sys-kernel\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726445 kubelet[3694]: I0317 17:27:01.726287 3694 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-etc-cni-netd\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726445 kubelet[3694]: I0317 17:27:01.726310 3694 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ee9c295-6f97-4d34-8747-582ca0447a7b-cilium-config-path\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726445 kubelet[3694]: I0317 17:27:01.726330 3694 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-lib-modules\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.726445 kubelet[3694]: I0317 17:27:01.726429 3694 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ee9c295-6f97-4d34-8747-582ca0447a7b-bpf-maps\") on node \"ip-172-31-17-190\" DevicePath \"\"" Mar 17 17:27:01.843488 kubelet[3694]: I0317 17:27:01.843386 3694 scope.go:117] "RemoveContainer" containerID="fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc" Mar 17 17:27:01.851236 containerd[2048]: time="2025-03-17T17:27:01.851183986Z" level=info msg="RemoveContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\"" Mar 17 17:27:01.862316 containerd[2048]: time="2025-03-17T17:27:01.862265290Z" level=info msg="RemoveContainer for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" returns successfully" Mar 17 17:27:01.864893 kubelet[3694]: I0317 17:27:01.864673 3694 scope.go:117] "RemoveContainer" containerID="fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc" Mar 17 17:27:01.867473 containerd[2048]: time="2025-03-17T17:27:01.865665802Z" level=error msg="ContainerStatus for \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\": not found" Mar 17 17:27:01.868774 kubelet[3694]: E0317 17:27:01.868603 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\": not found" containerID="fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc" Mar 17 17:27:01.870107 kubelet[3694]: I0317 17:27:01.868672 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc"} err="failed to get container status \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"fe5bfa861634d8ca915c3fa367914eabf3813d498a687ddcd94077fbb39d52dc\": not found" Mar 17 17:27:01.871731 kubelet[3694]: I0317 17:27:01.871139 3694 scope.go:117] "RemoveContainer" 
containerID="0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd" Mar 17 17:27:01.881983 containerd[2048]: time="2025-03-17T17:27:01.881438038Z" level=info msg="RemoveContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\"" Mar 17 17:27:01.893652 containerd[2048]: time="2025-03-17T17:27:01.893464114Z" level=info msg="RemoveContainer for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" returns successfully" Mar 17 17:27:01.896031 kubelet[3694]: I0317 17:27:01.895928 3694 scope.go:117] "RemoveContainer" containerID="5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0" Mar 17 17:27:01.901319 containerd[2048]: time="2025-03-17T17:27:01.901134358Z" level=info msg="RemoveContainer for \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\"" Mar 17 17:27:01.907976 containerd[2048]: time="2025-03-17T17:27:01.907774066Z" level=info msg="RemoveContainer for \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\" returns successfully" Mar 17 17:27:01.908977 kubelet[3694]: I0317 17:27:01.908934 3694 scope.go:117] "RemoveContainer" containerID="a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c" Mar 17 17:27:01.914577 containerd[2048]: time="2025-03-17T17:27:01.914482966Z" level=info msg="RemoveContainer for \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\"" Mar 17 17:27:01.921579 containerd[2048]: time="2025-03-17T17:27:01.921489250Z" level=info msg="RemoveContainer for \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\" returns successfully" Mar 17 17:27:01.922046 kubelet[3694]: I0317 17:27:01.921994 3694 scope.go:117] "RemoveContainer" containerID="68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716" Mar 17 17:27:01.930141 containerd[2048]: time="2025-03-17T17:27:01.930083206Z" level=info msg="RemoveContainer for \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\"" Mar 17 17:27:01.937909 containerd[2048]: time="2025-03-17T17:27:01.937763302Z" level=info msg="RemoveContainer for \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\" returns successfully" Mar 17 17:27:01.938154 kubelet[3694]: I0317 17:27:01.938115 3694 scope.go:117] "RemoveContainer" containerID="acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018" Mar 17 17:27:01.940271 containerd[2048]: time="2025-03-17T17:27:01.940220350Z" level=info msg="RemoveContainer for \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\"" Mar 17 17:27:01.946027 containerd[2048]: time="2025-03-17T17:27:01.945971974Z" level=info msg="RemoveContainer for \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\" returns successfully" Mar 17 17:27:01.946495 kubelet[3694]: I0317 17:27:01.946450 3694 scope.go:117] "RemoveContainer" containerID="0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd" Mar 17 17:27:01.946874 containerd[2048]: time="2025-03-17T17:27:01.946819894Z" level=error msg="ContainerStatus for \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\": not found" Mar 17 17:27:01.947099 kubelet[3694]: E0317 17:27:01.947054 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\": not found" containerID="0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd" Mar 17 17:27:01.947175 kubelet[3694]: I0317 17:27:01.947108 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd"} err="failed to get container status \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a24b4f26d5dd96adae4a1baf7736b522e0f7f564f8ece7bc09658b0dff509fd\": not found" Mar 17 17:27:01.947175 kubelet[3694]: I0317 17:27:01.947146 3694 scope.go:117] "RemoveContainer" containerID="5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0" Mar 17 17:27:01.947628 containerd[2048]: time="2025-03-17T17:27:01.947446990Z" level=error msg="ContainerStatus for \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\": not found" Mar 17 17:27:01.947708 kubelet[3694]: E0317 17:27:01.947656 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\": not found" containerID="5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0" Mar 17 17:27:01.947820 kubelet[3694]: I0317 17:27:01.947699 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0"} err="failed to get container status \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e1e00cfdf200cd37c3f040e511cf0a05bfed38345e82c0547faec31803d5aa0\": not found" Mar 17 17:27:01.947820 kubelet[3694]: I0317 17:27:01.947734 3694 scope.go:117] "RemoveContainer" containerID="a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c" Mar 17 17:27:01.948280 containerd[2048]: time="2025-03-17T17:27:01.948168754Z" level=error msg="ContainerStatus for \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\": not found" Mar 17 17:27:01.948493 kubelet[3694]: E0317 17:27:01.948453 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\": not found" containerID="a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c" Mar 17 17:27:01.948560 kubelet[3694]: I0317 17:27:01.948521 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c"} err="failed to get container status \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a73d6fbdbcb5cb4e102cf36de655cefb1ef2d1a09b742683985074c85daf7f2c\": not found" Mar 17 17:27:01.948636 kubelet[3694]: I0317 17:27:01.948557 3694 
scope.go:117] "RemoveContainer" containerID="68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716" Mar 17 17:27:01.949148 containerd[2048]: time="2025-03-17T17:27:01.949101538Z" level=error msg="ContainerStatus for \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\": not found" Mar 17 17:27:01.949515 kubelet[3694]: E0317 17:27:01.949474 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\": not found" containerID="68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716" Mar 17 17:27:01.949605 kubelet[3694]: I0317 17:27:01.949523 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716"} err="failed to get container status \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\": rpc error: code = NotFound desc = an error occurred when try to find container \"68c177b91f376a7be3e98996a01911041878f3d3bc562a9ccc52da2a9b5c8716\": not found" Mar 17 17:27:01.949605 kubelet[3694]: I0317 17:27:01.949557 3694 scope.go:117] "RemoveContainer" containerID="acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018" Mar 17 17:27:01.949965 containerd[2048]: time="2025-03-17T17:27:01.949889782Z" level=error msg="ContainerStatus for \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\": not found" Mar 17 17:27:01.950309 kubelet[3694]: E0317 17:27:01.950266 3694 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\": not found" containerID="acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018" Mar 17 17:27:01.950378 kubelet[3694]: I0317 17:27:01.950316 3694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018"} err="failed to get container status \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\": rpc error: code = NotFound desc = an error occurred when try to find container \"acc7596eb9c20d99dfc2dc0ef826093cd903e56e263f6f3daa710a1296781018\": not found" Mar 17 17:27:02.204101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3-rootfs.mount: Deactivated successfully. Mar 17 17:27:02.204637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059-rootfs.mount: Deactivated successfully. Mar 17 17:27:02.204893 systemd[1]: var-lib-kubelet-pods-c0ddce70\x2d2b22\x2d4337\x2dad0d\x2de55462248687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwl9jh.mount: Deactivated successfully. Mar 17 17:27:02.205121 systemd[1]: var-lib-kubelet-pods-6ee9c295\x2d6f97\x2d4d34\x2d8747\x2d582ca0447a7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcpxwk.mount: Deactivated successfully. 
Mar 17 17:27:02.205333 systemd[1]: var-lib-kubelet-pods-6ee9c295\x2d6f97\x2d4d34\x2d8747\x2d582ca0447a7b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 17:27:02.205549 systemd[1]: var-lib-kubelet-pods-6ee9c295\x2d6f97\x2d4d34\x2d8747\x2d582ca0447a7b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 17:27:02.337840 kubelet[3694]: I0317 17:27:02.337506 3694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" path="/var/lib/kubelet/pods/6ee9c295-6f97-4d34-8747-582ca0447a7b/volumes" Mar 17 17:27:02.339303 kubelet[3694]: I0317 17:27:02.339253 3694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ddce70-2b22-4337-ad0d-e55462248687" path="/var/lib/kubelet/pods/c0ddce70-2b22-4337-ad0d-e55462248687/volumes" Mar 17 17:27:03.118074 sshd[5329]: Connection closed by 139.178.68.195 port 39526 Mar 17 17:27:03.118542 sshd-session[5326]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:03.127769 systemd[1]: sshd@25-172.31.17.190:22-139.178.68.195:39526.service: Deactivated successfully. Mar 17 17:27:03.133311 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:27:03.135362 systemd-logind[2020]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:27:03.142144 systemd-logind[2020]: Removed session 26. Mar 17 17:27:03.148286 systemd[1]: Started sshd@26-172.31.17.190:22-139.178.68.195:39540.service - OpenSSH per-connection server daemon (139.178.68.195:39540). Mar 17 17:27:03.342811 sshd[5495]: Accepted publickey for core from 139.178.68.195 port 39540 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:03.345330 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:03.354181 systemd-logind[2020]: New session 27 of user core. Mar 17 17:27:03.360439 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 17 17:27:03.610589 ntpd[1999]: Deleting interface #10 lxc_health, fe80::a820:99ff:fea8:36b5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Mar 17 17:27:03.611356 ntpd[1999]: 17 Mar 17:27:03 ntpd[1999]: Deleting interface #10 lxc_health, fe80::a820:99ff:fea8:36b5%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Mar 17 17:27:03.626923 kubelet[3694]: E0317 17:27:03.626831 3694 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:27:05.687339 sshd[5499]: Connection closed by 139.178.68.195 port 39540 Mar 17 17:27:05.688458 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:05.710525 systemd[1]: sshd@26-172.31.17.190:22-139.178.68.195:39540.service: Deactivated successfully. Mar 17 17:27:05.728946 systemd[1]: session-27.scope: Deactivated successfully. Mar 17 17:27:05.730871 systemd-logind[2020]: Session 27 logged out. Waiting for processes to exit. Mar 17 17:27:05.753500 systemd[1]: Started sshd@27-172.31.17.190:22-139.178.68.195:58658.service - OpenSSH per-connection server daemon (139.178.68.195:58658). Mar 17 17:27:05.758005 systemd-logind[2020]: Removed session 27. 
Mar 17 17:27:05.765038 kubelet[3694]: I0317 17:27:05.764971 3694 topology_manager.go:215] "Topology Admit Handler" podUID="22c09a40-e714-4b1d-84af-f769087a75a8" podNamespace="kube-system" podName="cilium-b25pt" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765063 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="cilium-agent" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765086 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="apply-sysctl-overwrites" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765102 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c0ddce70-2b22-4337-ad0d-e55462248687" containerName="cilium-operator" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765117 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="mount-cgroup" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765131 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="mount-bpf-fs" Mar 17 17:27:05.771700 kubelet[3694]: E0317 17:27:05.765146 3694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="clean-cilium-state" Mar 17 17:27:05.771700 kubelet[3694]: I0317 17:27:05.765189 3694 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ee9c295-6f97-4d34-8747-582ca0447a7b" containerName="cilium-agent" Mar 17 17:27:05.771700 kubelet[3694]: I0317 17:27:05.765205 3694 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ddce70-2b22-4337-ad0d-e55462248687" containerName="cilium-operator" Mar 17 17:27:05.855024 kubelet[3694]: I0317 17:27:05.854971 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-host-proc-sys-net\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855430 kubelet[3694]: I0317 17:27:05.855038 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-cilium-run\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855430 kubelet[3694]: I0317 17:27:05.855085 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-hostproc\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855430 kubelet[3694]: I0317 17:27:05.855168 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-cilium-cgroup\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855430 kubelet[3694]: I0317 17:27:05.855207 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-xtables-lock\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855430 kubelet[3694]: I0317 17:27:05.855289 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/22c09a40-e714-4b1d-84af-f769087a75a8-cilium-config-path\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855984 kubelet[3694]: I0317 17:27:05.855336 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kptb\" (UniqueName: \"kubernetes.io/projected/22c09a40-e714-4b1d-84af-f769087a75a8-kube-api-access-9kptb\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855984 kubelet[3694]: I0317 17:27:05.855854 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-cni-path\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.855984 kubelet[3694]: I0317 17:27:05.855919 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-bpf-maps\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.856450 kubelet[3694]: I0317 17:27:05.856215 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-lib-modules\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.856450 kubelet[3694]: I0317 17:27:05.856286 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/22c09a40-e714-4b1d-84af-f769087a75a8-clustermesh-secrets\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.856450 kubelet[3694]: I0317 17:27:05.856328 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/22c09a40-e714-4b1d-84af-f769087a75a8-cilium-ipsec-secrets\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.856450 kubelet[3694]: I0317 17:27:05.856395 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/22c09a40-e714-4b1d-84af-f769087a75a8-hubble-tls\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:05.856819 kubelet[3694]: I0317 17:27:05.856703 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-etc-cni-netd\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" 
Mar 17 17:27:05.856819 kubelet[3694]: I0317 17:27:05.856770 3694 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/22c09a40-e714-4b1d-84af-f769087a75a8-host-proc-sys-kernel\") pod \"cilium-b25pt\" (UID: \"22c09a40-e714-4b1d-84af-f769087a75a8\") " pod="kube-system/cilium-b25pt" Mar 17 17:27:06.048542 sshd[5509]: Accepted publickey for core from 139.178.68.195 port 58658 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:06.051965 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:06.066382 systemd-logind[2020]: New session 28 of user core. Mar 17 17:27:06.070463 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 17 17:27:06.114865 containerd[2048]: time="2025-03-17T17:27:06.114758207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b25pt,Uid:22c09a40-e714-4b1d-84af-f769087a75a8,Namespace:kube-system,Attempt:0,}" Mar 17 17:27:06.158902 containerd[2048]: time="2025-03-17T17:27:06.158178983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:27:06.158902 containerd[2048]: time="2025-03-17T17:27:06.158295023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:27:06.158902 containerd[2048]: time="2025-03-17T17:27:06.158349563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:06.158902 containerd[2048]: time="2025-03-17T17:27:06.158593223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:27:06.199987 sshd[5517]: Connection closed by 139.178.68.195 port 58658 Mar 17 17:27:06.201502 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Mar 17 17:27:06.214411 systemd[1]: sshd@27-172.31.17.190:22-139.178.68.195:58658.service: Deactivated successfully. Mar 17 17:27:06.229006 systemd[1]: session-28.scope: Deactivated successfully. Mar 17 17:27:06.233503 systemd-logind[2020]: Session 28 logged out. Waiting for processes to exit. Mar 17 17:27:06.248809 systemd[1]: Started sshd@28-172.31.17.190:22-139.178.68.195:58666.service - OpenSSH per-connection server daemon (139.178.68.195:58666). Mar 17 17:27:06.250768 systemd-logind[2020]: Removed session 28. 
Mar 17 17:27:06.266379 containerd[2048]: time="2025-03-17T17:27:06.266331768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b25pt,Uid:22c09a40-e714-4b1d-84af-f769087a75a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\"" Mar 17 17:27:06.278764 containerd[2048]: time="2025-03-17T17:27:06.278436336Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 17:27:06.303868 containerd[2048]: time="2025-03-17T17:27:06.303695064Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"adc66fd171f8c864e35ff13dee96552663ef9012ce3b5a69aecaaff9841ecbc8\"" Mar 17 17:27:06.308572 containerd[2048]: time="2025-03-17T17:27:06.306612960Z" level=info msg="StartContainer for \"adc66fd171f8c864e35ff13dee96552663ef9012ce3b5a69aecaaff9841ecbc8\"" Mar 17 17:27:06.406062 containerd[2048]: time="2025-03-17T17:27:06.405993924Z" level=info msg="StartContainer for \"adc66fd171f8c864e35ff13dee96552663ef9012ce3b5a69aecaaff9841ecbc8\" returns successfully" Mar 17 17:27:06.452276 sshd[5556]: Accepted publickey for core from 139.178.68.195 port 58666 ssh2: RSA SHA256:d/UruLZo/CsfcUUCH/x/bM9fcZFMuRhcbrxztEEs5OE Mar 17 17:27:06.457468 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:27:06.469952 systemd-logind[2020]: New session 29 of user core. Mar 17 17:27:06.475378 containerd[2048]: time="2025-03-17T17:27:06.475295977Z" level=info msg="shim disconnected" id=adc66fd171f8c864e35ff13dee96552663ef9012ce3b5a69aecaaff9841ecbc8 namespace=k8s.io Mar 17 17:27:06.475378 containerd[2048]: time="2025-03-17T17:27:06.475374745Z" level=warning msg="cleaning up after shim disconnected" id=adc66fd171f8c864e35ff13dee96552663ef9012ce3b5a69aecaaff9841ecbc8 namespace=k8s.io Mar 17 17:27:06.475648 containerd[2048]: time="2025-03-17T17:27:06.475396381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:06.477710 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 17 17:27:06.895013 containerd[2048]: time="2025-03-17T17:27:06.894304695Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 17:27:06.928990 containerd[2048]: time="2025-03-17T17:27:06.928930263Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a\"" Mar 17 17:27:06.929945 containerd[2048]: time="2025-03-17T17:27:06.929897463Z" level=info msg="StartContainer for \"46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a\"" Mar 17 17:27:07.038617 containerd[2048]: time="2025-03-17T17:27:07.038529707Z" level=info msg="StartContainer for \"46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a\" returns successfully" Mar 17 17:27:07.084676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a-rootfs.mount: Deactivated successfully. 
Mar 17 17:27:07.095374 containerd[2048]: time="2025-03-17T17:27:07.095288544Z" level=info msg="shim disconnected" id=46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a namespace=k8s.io Mar 17 17:27:07.095374 containerd[2048]: time="2025-03-17T17:27:07.095368452Z" level=warning msg="cleaning up after shim disconnected" id=46d7982cc5edf82cc9447bfb5457f88669750a0b3d1f05b9cb5d9c5227d1626a namespace=k8s.io Mar 17 17:27:07.095757 containerd[2048]: time="2025-03-17T17:27:07.095389548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:07.901843 containerd[2048]: time="2025-03-17T17:27:07.901725892Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 17:27:07.947627 containerd[2048]: time="2025-03-17T17:27:07.947372248Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354\"" Mar 17 17:27:07.949092 containerd[2048]: time="2025-03-17T17:27:07.949021876Z" level=info msg="StartContainer for \"1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354\"" Mar 17 17:27:08.075744 containerd[2048]: time="2025-03-17T17:27:08.075672721Z" level=info msg="StartContainer for \"1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354\" returns successfully" Mar 17 17:27:08.117542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354-rootfs.mount: Deactivated successfully. Mar 17 17:27:08.122586 containerd[2048]: time="2025-03-17T17:27:08.122472133Z" level=info msg="shim disconnected" id=1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354 namespace=k8s.io Mar 17 17:27:08.122586 containerd[2048]: time="2025-03-17T17:27:08.122566633Z" level=warning msg="cleaning up after shim disconnected" id=1a3065cf74909e0d2e7b1c104abcc0ebe70b446972e923c8bd9daf55824d8354 namespace=k8s.io Mar 17 17:27:08.122586 containerd[2048]: time="2025-03-17T17:27:08.122587933Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:08.372703 containerd[2048]: time="2025-03-17T17:27:08.372556730Z" level=info msg="StopPodSandbox for \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\"" Mar 17 17:27:08.373072 containerd[2048]: time="2025-03-17T17:27:08.372723422Z" level=info msg="TearDown network for sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" successfully" Mar 17 17:27:08.373072 containerd[2048]: time="2025-03-17T17:27:08.372747998Z" level=info msg="StopPodSandbox for \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" returns successfully" Mar 17 17:27:08.373931 containerd[2048]: time="2025-03-17T17:27:08.373855382Z" level=info msg="RemovePodSandbox for \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\"" Mar 17 17:27:08.373931 containerd[2048]: time="2025-03-17T17:27:08.373912754Z" level=info msg="Forcibly stopping sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\"" Mar 17 17:27:08.374172 containerd[2048]: time="2025-03-17T17:27:08.374012738Z" level=info msg="TearDown network for sandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" successfully" Mar 17 17:27:08.380497 containerd[2048]: time="2025-03-17T17:27:08.380345330Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 17 17:27:08.381132 containerd[2048]: time="2025-03-17T17:27:08.380698598Z" level=info msg="RemovePodSandbox \"2ad22e705e12493136a01e4886d468a7b868482a98a0be9a8a9f568b9ebbd059\" returns successfully" Mar 17 17:27:08.383400 containerd[2048]: time="2025-03-17T17:27:08.383310002Z" level=info msg="StopPodSandbox for \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\"" Mar 17 17:27:08.384263 containerd[2048]: time="2025-03-17T17:27:08.383574374Z" level=info msg="TearDown network for sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" successfully" Mar 17 17:27:08.384263 containerd[2048]: time="2025-03-17T17:27:08.383598026Z" level=info msg="StopPodSandbox for \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" returns successfully" Mar 17 17:27:08.386645 containerd[2048]: time="2025-03-17T17:27:08.385225466Z" level=info msg="RemovePodSandbox for \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\"" Mar 17 17:27:08.386645 containerd[2048]: time="2025-03-17T17:27:08.385276238Z" level=info msg="Forcibly stopping sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\"" Mar 17 17:27:08.386645 containerd[2048]: time="2025-03-17T17:27:08.385370102Z" level=info msg="TearDown network for sandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" successfully" Mar 17 17:27:08.394239 containerd[2048]: time="2025-03-17T17:27:08.394183706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:27:08.394639 containerd[2048]: time="2025-03-17T17:27:08.394598006Z" level=info msg="RemovePodSandbox \"f883be0a6a50ef7a2ce177bcc41767990917015f5ff724e99b8a6228899677e3\" returns successfully" Mar 17 17:27:08.630589 kubelet[3694]: E0317 17:27:08.629868 3694 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 17:27:08.909954 containerd[2048]: time="2025-03-17T17:27:08.909877829Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 17:27:08.943417 containerd[2048]: time="2025-03-17T17:27:08.943287281Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd\"" Mar 17 17:27:08.945010 containerd[2048]: time="2025-03-17T17:27:08.944336453Z" level=info msg="StartContainer for \"80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd\"" Mar 17 17:27:09.047884 containerd[2048]: time="2025-03-17T17:27:09.047682889Z" level=info msg="StartContainer for \"80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd\" returns successfully" Mar 17 17:27:09.083866 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd-rootfs.mount: Deactivated successfully. Mar 17 17:27:09.094205 containerd[2048]: time="2025-03-17T17:27:09.094131626Z" level=info msg="shim disconnected" id=80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd namespace=k8s.io Mar 17 17:27:09.094797 containerd[2048]: time="2025-03-17T17:27:09.094496978Z" level=warning msg="cleaning up after shim disconnected" id=80171b42b82c0d13bb86a92c494cd60646a91944310aad49dedee2c8a12765dd namespace=k8s.io Mar 17 17:27:09.094797 containerd[2048]: time="2025-03-17T17:27:09.094524494Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:27:09.914667 containerd[2048]: time="2025-03-17T17:27:09.914607534Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 17:27:09.946594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281167200.mount: Deactivated successfully. Mar 17 17:27:09.949185 containerd[2048]: time="2025-03-17T17:27:09.949101570Z" level=info msg="CreateContainer within sandbox \"be1e789c30c52f216431d2db2dbd2191cd19377dc4e0d86bf7faa4ae1ed6dc8e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2475b523562e50c9a88f0b85e2d43d66f25dc9e96fefeec9a8f70cf5eb479311\"" Mar 17 17:27:09.950903 containerd[2048]: time="2025-03-17T17:27:09.950828406Z" level=info msg="StartContainer for \"2475b523562e50c9a88f0b85e2d43d66f25dc9e96fefeec9a8f70cf5eb479311\"" Mar 17 17:27:10.015728 systemd[1]: run-containerd-runc-k8s.io-2475b523562e50c9a88f0b85e2d43d66f25dc9e96fefeec9a8f70cf5eb479311-runc.ciGdFI.mount: Deactivated successfully. 
Mar 17 17:27:10.066545 containerd[2048]: time="2025-03-17T17:27:10.066449414Z" level=info msg="StartContainer for \"2475b523562e50c9a88f0b85e2d43d66f25dc9e96fefeec9a8f70cf5eb479311\" returns successfully"
Mar 17 17:27:10.793876 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 17 17:27:11.828853 kubelet[3694]: I0317 17:27:11.827430 3694 setters.go:580] "Node became not ready" node="ip-172-31-17-190" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T17:27:11Z","lastTransitionTime":"2025-03-17T17:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 17:27:12.334196 kubelet[3694]: E0317 17:27:12.333763 3694 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-snnqx" podUID="48ab0974-a523-48d1-ac63-e57262564646"
Mar 17 17:27:15.017016 systemd-networkd[1603]: lxc_health: Link UP
Mar 17 17:27:15.037764 systemd-networkd[1603]: lxc_health: Gained carrier
Mar 17 17:27:15.040503 (udev-worker)[6349]: Network interface NamePolicy= disabled on kernel command line.
Mar 17 17:27:16.154765 kubelet[3694]: I0317 17:27:16.153729 3694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b25pt" podStartSLOduration=11.153707241 podStartE2EDuration="11.153707241s" podCreationTimestamp="2025-03-17 17:27:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:27:10.946018387 +0000 UTC m=+122.847139175" watchObservedRunningTime="2025-03-17 17:27:16.153707241 +0000 UTC m=+128.054828017"
Mar 17 17:27:16.828103 systemd-networkd[1603]: lxc_health: Gained IPv6LL
Mar 17 17:27:19.610418 ntpd[1999]: Listen normally on 13 lxc_health [fe80::4c01:6bff:fe35:f8f4%14]:123
Mar 17 17:27:19.611091 ntpd[1999]: 17 Mar 17:27:19 ntpd[1999]: Listen normally on 13 lxc_health [fe80::4c01:6bff:fe35:f8f4%14]:123
Mar 17 17:27:20.101916 systemd[1]: run-containerd-runc-k8s.io-2475b523562e50c9a88f0b85e2d43d66f25dc9e96fefeec9a8f70cf5eb479311-runc.l3ZL5p.mount: Deactivated successfully.
Mar 17 17:27:22.500597 sshd[5619]: Connection closed by 139.178.68.195 port 58666
Mar 17 17:27:22.501924 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Mar 17 17:27:22.511465 systemd[1]: sshd@28-172.31.17.190:22-139.178.68.195:58666.service: Deactivated successfully.
Mar 17 17:27:22.524849 systemd-logind[2020]: Session 29 logged out. Waiting for processes to exit.
Mar 17 17:27:22.527667 systemd[1]: session-29.scope: Deactivated successfully.
Mar 17 17:27:22.533667 systemd-logind[2020]: Removed session 29.
Mar 17 17:27:37.603241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53-rootfs.mount: Deactivated successfully.
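"Node became not ready" is the kubelet writing the Ready condition onto its Node object; once the cilium-agent container started above initializes the CNI and lxc_health gains carrier, the condition flips back to True and pods like coredns can be synced again. A sketch of reading that condition with client-go; the kubeconfig path is an assumption, and the node name is the one appearing in this log:

package main

import (
	"context"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; use whatever credentials the node has.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"ip-172-31-17-190", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet's setters.go maintains exactly this condition; the reason
	// stays KubeletNotReady until the CNI plugin (Cilium here) is initialized.
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}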
Mar 17 17:27:37.646838 containerd[2048]: time="2025-03-17T17:27:37.646745659Z" level=info msg="shim disconnected" id=5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53 namespace=k8s.io
Mar 17 17:27:37.647520 containerd[2048]: time="2025-03-17T17:27:37.647390935Z" level=warning msg="cleaning up after shim disconnected" id=5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53 namespace=k8s.io
Mar 17 17:27:37.647520 containerd[2048]: time="2025-03-17T17:27:37.647423863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:38.002007 kubelet[3694]: I0317 17:27:38.001950 3694 scope.go:117] "RemoveContainer" containerID="5db1e833d8fbe88f71c9048e5b4a223505ef78949b24642e6b6d0e0bec48fb53"
Mar 17 17:27:38.006074 containerd[2048]: time="2025-03-17T17:27:38.006017009Z" level=info msg="CreateContainer within sandbox \"102eb0f4b32da5b5a94d86d6f1e88857a0294f3269dabd4f1fcf5c20c659ca55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 17 17:27:38.032191 containerd[2048]: time="2025-03-17T17:27:38.032113973Z" level=info msg="CreateContainer within sandbox \"102eb0f4b32da5b5a94d86d6f1e88857a0294f3269dabd4f1fcf5c20c659ca55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2fd0449042d619eff9676d883a7197284783eec37f4dd0bd7a564f378b8981db\""
Mar 17 17:27:38.034770 containerd[2048]: time="2025-03-17T17:27:38.032849981Z" level=info msg="StartContainer for \"2fd0449042d619eff9676d883a7197284783eec37f4dd0bd7a564f378b8981db\""
Mar 17 17:27:38.152044 containerd[2048]: time="2025-03-17T17:27:38.151968822Z" level=info msg="StartContainer for \"2fd0449042d619eff9676d883a7197284783eec37f4dd0bd7a564f378b8981db\" returns successfully"
Mar 17 17:27:41.338934 kubelet[3694]: E0317 17:27:41.338854 3694 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": context deadline exceeded"
Mar 17 17:27:42.147757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d-rootfs.mount: Deactivated successfully.
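"Failed to update lease ... context deadline exceeded" is the kubelet's node heartbeat: a PUT on the Lease named after the node in the kube-node-lease namespace, bounded by the 10s timeout visible as ?timeout=10s in the failing URL; the API server is presumably slow to respond while the control-plane containers on this node are churning. A sketch of the equivalent renewal with client-go, assuming an already-constructed clientset:

package leasesketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// renewNodeLease mirrors the request failing in the log: fetch the node's
// Lease from kube-node-lease, stamp a fresh RenewTime, and PUT it back,
// under the same 10s deadline that the kubelet uses.
func renewNodeLease(cs kubernetes.Interface, node string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, node, metav1.GetOptions{})
	if err != nil {
		return err
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	_, err = cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{})
	return err
}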
Mar 17 17:27:42.157985 containerd[2048]: time="2025-03-17T17:27:42.157887310Z" level=info msg="shim disconnected" id=3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d namespace=k8s.io
Mar 17 17:27:42.157985 containerd[2048]: time="2025-03-17T17:27:42.157978750Z" level=warning msg="cleaning up after shim disconnected" id=3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d namespace=k8s.io
Mar 17 17:27:42.158745 containerd[2048]: time="2025-03-17T17:27:42.158000962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 17 17:27:43.021650 kubelet[3694]: I0317 17:27:43.021606 3694 scope.go:117] "RemoveContainer" containerID="3b918d86f9127e4747fcc707317768a47c75fa8e4354231db44ec734ab8c812d"
Mar 17 17:27:43.025480 containerd[2048]: time="2025-03-17T17:27:43.025314646Z" level=info msg="CreateContainer within sandbox \"736ee269104ec78d30e3747f4ab2775eba491141ae690c0f99dab6a7ae3864a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 17 17:27:43.051429 containerd[2048]: time="2025-03-17T17:27:43.051352234Z" level=info msg="CreateContainer within sandbox \"736ee269104ec78d30e3747f4ab2775eba491141ae690c0f99dab6a7ae3864a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7db2a6c62b5b31086f3a5e1fd85fe93b51588030793d5f4d6a7c344a318f6e3d\""
Mar 17 17:27:43.052164 containerd[2048]: time="2025-03-17T17:27:43.052106290Z" level=info msg="StartContainer for \"7db2a6c62b5b31086f3a5e1fd85fe93b51588030793d5f4d6a7c344a318f6e3d\""
Mar 17 17:27:43.150102 systemd[1]: run-containerd-runc-k8s.io-7db2a6c62b5b31086f3a5e1fd85fe93b51588030793d5f4d6a7c344a318f6e3d-runc.lU8PPY.mount: Deactivated successfully.
Mar 17 17:27:43.175396 containerd[2048]: time="2025-03-17T17:27:43.174767147Z" level=info msg="StartContainer for \"7db2a6c62b5b31086f3a5e1fd85fe93b51588030793d5f4d6a7c344a318f6e3d\" returns successfully"
Mar 17 17:27:51.340035 kubelet[3694]: E0317 17:27:51.339563 3694 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-190?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
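The RemoveContainer/CreateContainer Attempt:1 pairs here and above are the kubelet restarting the kube-controller-manager and kube-scheduler containers in their existing sandboxes: the exited container is removed, then recreated with the attempt counter bumped. A sketch of the removal half, listing exited containers over the CRI and deleting them, assuming the same RuntimeServiceClient as in the first sketch (the kubelet's own restart logic is more involved; this only reproduces the CRI calls visible in the log):

package crisketch

import (
	"context"
	"log"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeExited deletes every exited container, mirroring the kubelet's
// "RemoveContainer" step before it recreates a pod container at Attempt:1.
func removeExited(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	exited := runtimeapi.ContainerState_CONTAINER_EXITED
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{State: exited},
		},
	})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{
			ContainerId: c.Id,
		}); err != nil {
			return err
		}
		log.Printf("removed exited container %s", c.Id)
	}
	return nil
}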